Secure Federated Learning with Differential Privacy
Description
Federated learning (FL) is a machine learning paradigm that aims to learn collaboratively from decentralized private data owned by entities referred to as clients. However, due to its decentralized nature, federated learning is susceptible to model poisoning attacks, in which malicious clients try to corrupt the learning process by manipulating their local model updates. Moreover, the updates sent by the clients may leak information about the private training data. The goal of this work is to investigate and combine existing robust aggregation techniques in FL with differential privacy techniques.
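As a concrete illustration of how the two defenses can be combined, the following sketch pairs a robust aggregator (coordinate-wise median, which tolerates a minority of poisoned updates) with the standard recipe for differentially private aggregation (per-client L2 norm clipping followed by Gaussian noise). This is a minimal illustration, not a method prescribed by the references; the function names, the choice of median as the robust aggregator, and the parameters `clip_norm` and `noise_multiplier` are assumptions for the example.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Bound each client's influence by clipping the L2 norm of its update.
    # This bounds the sensitivity of the aggregate, which the DP noise relies on.
    norm = np.linalg.norm(update)
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return update * scale

def private_robust_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Robust aggregation: coordinate-wise median of the clipped client updates,
    # which a minority of arbitrarily malicious updates cannot shift far.
    # Privacy: Gaussian noise calibrated to the clipping norm is added afterwards.
    rng = np.random.default_rng(rng)
    clipped = np.stack([clip_update(u, clip_norm) for u in updates])
    aggregate = np.median(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=aggregate.shape)
    return aggregate + noise
```

With `noise_multiplier=0.0` one can observe the robustness in isolation: if nine honest clients each send a small update and one malicious client sends a huge one, the median stays at the honest value, whereas a plain average would be dragged towards the outlier. Note that a formal privacy guarantee additionally requires an accounting of the noise scale against the sensitivity of the chosen aggregator, which is exactly the kind of question this work would examine.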
References:
[1] - https://arxiv.org/pdf/2304.09762.pdf
[2] - https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9757841
[3] - https://dl.acm.org/doi/abs/10.1145/3465084.3467919
Prerequisites
- Basic knowledge of machine learning and gradient descent optimization
- First experience with machine learning in Python
- Undergraduate statistics courses
- Prior knowledge of differential privacy is a plus