Privacy-Preserving Federated Learning using Multi-Party Homomorphic Encryption
Federated learning (FL) is a distributed training approach in machine learning in which the training data resides on different client nodes and communication happens via a central node. Although FL does not explicitly require the client nodes to transmit data samples to the central node, it has been shown to leak sensitive information about user data. In this work we show how homomorphic encryption (HE) techniques can be applied to prevent this leakage. We first introduce the problem of secure aggregation during the training phase of FL. In this setting, each client node holds a gradient vector that must be encrypted and transmitted to a central node, which must be able to recover the aggregate sum. We explain how a variant of homomorphic encryption called multi-party homomorphic encryption (MPHE) facilitates such a computation in a fully distributed manner, without requiring access to any trusted third party for key management. We then present an extension of MPHE that enables gradient aggregation even when a subset of client nodes drops out, a scenario that is common in FL applications. We further show how secure gradient aggregation can be combined with gradient compression to improve both communication and computation efficiency. Finally, we consider the inference step in FL and discuss applications of MPHE in two settings: (1) secure inference in vertical federated learning and (2) secure inference in split learning.
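The core property the abstract relies on is additive homomorphism: the server can combine ciphertexts so that decryption yields the sum of the clients' gradients, never an individual gradient. As a minimal single-key illustration of this idea (not the talk's MPHE scheme, which additionally splits the secret key across clients, and with insecure toy parameters chosen purely for readability), the sketch below uses a toy Paillier cryptosystem; the primes, the encrypt/decrypt helpers, and the example gradient values are all hypothetical.

```python
import random
from math import gcd

# Toy Paillier cryptosystem with tiny hard-coded primes.
# Illustration only -- real deployments use >= 2048-bit moduli,
# and MPHE additionally distributes the secret key across clients.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)  # valid shortcut because g = n + 1

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 for a random r coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Each client sends an encryption of its (quantized) gradient entry;
# multiplying ciphertexts adds the underlying plaintexts, so the server
# recovers the aggregate sum without seeing any individual gradient.
client_grads = [5, 11, 7]  # hypothetical quantized gradient values
agg = 1
for gi in client_grads:
    agg = (agg * encrypt(gi)) % n2

assert decrypt(agg) == sum(client_grads)
print(decrypt(agg))  # 23
```

In the multi-party variant discussed in the talk, no single party holds the decryption key; clients instead contribute partial decryptions, so the server learns only the aggregate.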
About Speaker:
Ashish Khisti is a Professor in the Department of Electrical and Computer Engineering (Faculty of Applied Science & Engineering) at the University of Toronto, as well as a faculty affiliate of the Vector Institute for Artificial Intelligence. He received his BASc degree from the Engineering Science program at the University of Toronto and his MASc and PhD degrees from the EECS department at the Massachusetts Institute of Technology. He is a recipient of a number of awards, including the IEEE Information Theory Society Distinguished Lecturer Award, a Tier II Canada Research Chair, and an Ontario Early Researcher Award. His research interests include information-theoretic aspects of machine learning, including speculative decoding for large language models, privacy and security of machine learning, and neural data compression.