
Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups

Wainakh, Aidmar ; Zimmer, Ephraim ; Subedi, Sandeep ; Keim, Jens ; Grube, Tim ; Karuppayah, Shankar ; Sanchez Guinea, Alejandro ; Mühlhäuser, Max (2023)
Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups.
In: Sensors, 23 (1)
doi: 10.3390/s23010031
Article, Bibliography

Abstract

Deep learning pervades data-intensive disciplines in research and development. The Internet of Things and sensor systems, which enable smart environments and services, are settings where deep learning can provide invaluable utility. However, the data in these systems are very often directly or indirectly related to people, which raises privacy concerns. Federated learning (FL) mitigates some of these concerns and empowers deep learning in sensor-driven environments by enabling multiple entities to collaboratively train a machine learning model without sharing their data. Nevertheless, a number of works in the literature propose attacks that can manipulate the model and disclose information about the training data in FL. As a result, there has been a growing belief that FL is highly vulnerable to severe attacks. Although these attacks do indeed highlight security and privacy risks in FL, some of them may not be as effective in production deployments because they are feasible only given special—sometimes impractical—assumptions. In this paper, we investigate this issue by conducting a quantitative analysis of the attacks against FL and their evaluation settings in 48 papers. This analysis is the first of its kind to reveal several research gaps with regard to the types and architectures of target models. Additionally, the quantitative analysis allows us to highlight unrealistic assumptions in some attacks related to the hyperparameters of the model and the data distribution. Furthermore, we identify fallacies in the evaluation of attacks that raise questions about the generalizability of the conclusions. As a remedy, we propose a set of recommendations to promote adequate evaluations.
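The abstract's core premise—that FL lets multiple entities train a shared model while raw data never leaves each client—can be illustrated with a minimal federated averaging (FedAvg) sketch. This is not code from the paper; the toy one-parameter least-squares model, the function names, and all hyperparameters below are illustrative assumptions.

```python
# Minimal FedAvg sketch: clients run local SGD, the server averages the
# resulting weights. Only model weights cross the network, never samples.
# The 1-D linear model y ~ w*x is a toy assumption for illustration.

def local_update(w, data, lr=0.1, epochs=1):
    """One client's local SGD steps on its private data (x, y) pairs."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # gradient of (w*x - y)^2 w.r.t. w
            w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, rounds=10):
    """Server loop: broadcast weights, collect local updates, average them
    weighted by each client's dataset size (as in standard FedAvg)."""
    total = sum(len(d) for d in client_datasets)
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(w * len(d)
                       for w, d in zip(local_ws, client_datasets)) / total
    return global_w

# Two clients whose private data follow y = 3x; neither shares its samples.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = fed_avg(0.0, clients)
```

The weight updates exchanged in this loop are exactly the messages that the attacks surveyed in the paper target, e.g. for gradient-based data reconstruction or model poisoning.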

Item type: Article
Published: 2023
Author(s): Wainakh, Aidmar ; Zimmer, Ephraim ; Subedi, Sandeep ; Keim, Jens ; Grube, Tim ; Karuppayah, Shankar ; Sanchez Guinea, Alejandro ; Mühlhäuser, Max
Type of entry: Bibliography
Title: Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups
Language: English
Year of publication: January 2023
Publisher: MDPI
Journal or publication title: Sensors
Volume: 23
Issue number: 1
DOI: 10.3390/s23010031
URL / URN: https://www.mdpi.com/1424-8220/23/1/31

Divisions: 20 Department of Computer Science
20 Department of Computer Science > Telecooperation
DFG Research Training Groups
DFG Research Training Groups > Research Training Group 2050 Privacy and Trust for Mobile Users
Date deposited: 21 Dec 2022 11:12
Last modified: 16 Jan 2023 14:53
PPN: 503681504