
FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks

Castillo, Jorge ; Rieger, Phillip ; Fereidooni, Hossein ; Chen, Qian ; Sadeghi, Ahmad-Reza (2023)
FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks.
39th Annual Computer Security Applications Conference (ACSAC'23). Austin, USA (04.12.2023 - 08.12.2023)
doi: 10.1145/3627106.3627194
Conference publication, Bibliography

Abstract

Federated learning (FL) is a distributed learning process that uses a trusted aggregation server to allow multiple parties (or clients) to collaboratively train a machine learning model without having them share their private data. Recent research, however, has demonstrated the effectiveness of inference and poisoning attacks on FL. Mitigating both attacks simultaneously is very challenging. State-of-the-art solutions have proposed the use of poisoning defenses with Secure Multi-Party Computation (SMPC) and/or Differential Privacy (DP). However, these techniques are not efficient and fail to address the malicious intent behind the attacks, i.e., adversaries (curious servers and/or compromised clients) seek to exploit a system for monetization purposes. To overcome these limitations, we present a ledger-based FL framework, FLEDGE, that holds parties accountable for their behavior and achieves reasonable efficiency in mitigating inference and poisoning attacks. Our solution leverages cryptocurrency to increase party accountability by penalizing malicious behavior and rewarding benign conduct. We conduct an extensive evaluation on four public datasets: Reddit, MNIST, Fashion-MNIST, and CIFAR-10. Our experimental results demonstrate that (1) FLEDGE provides strong privacy guarantees for model updates without sacrificing model utility; (2) FLEDGE can successfully mitigate different poisoning attacks without degrading the performance of the global model; and (3) FLEDGE offers unique reward mechanisms to promote benign behavior during model training and/or model aggregation.
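The reward-and-penalty accountability idea described in the abstract can be sketched as follows. This is a minimal illustration, not FLEDGE's actual protocol: the median-distance acceptance test, the fixed reward and penalty amounts, and the in-memory ledger dictionary are all assumptions introduced here for clarity.

```python
# Illustrative sketch: federated averaging with a ledger that rewards clients
# whose updates pass a simple anomaly check and penalizes suspected poisoners.
# The distance-based check and reward/penalty values are hypothetical.
import numpy as np

def aggregate_with_ledger(global_model, updates, ledger,
                          reward=1.0, penalty=1.0, threshold=2.0):
    """Average accepted client updates into the global model.

    An update is accepted if its distance to the coordinate-wise median
    update is within `threshold` times the median of all such distances
    (a stand-in for a real poisoning defense). Accepted clients earn
    `reward` credits on the ledger; rejected clients lose `penalty`.
    """
    stacked = np.stack(updates)
    median_update = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - median_update, axis=1)
    med_dist = np.median(dists)
    cutoff = threshold * med_dist if med_dist > 0 else threshold

    accepted = []
    for cid, (update, dist) in enumerate(zip(updates, dists)):
        if dist <= cutoff:
            ledger[cid] = ledger.get(cid, 0.0) + reward   # benign conduct rewarded
            accepted.append(update)
        else:
            ledger[cid] = ledger.get(cid, 0.0) - penalty  # outlier update penalized

    if accepted:
        global_model = global_model + np.mean(accepted, axis=0)
    return global_model, ledger
```

For example, four benign clients submitting updates near [1, 1] would each be credited, while a fifth client submitting [100, -100] would be rejected and debited, leaving the global model unaffected by the outlier.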

Entry type: Conference publication
Published: 2023
Author(s): Castillo, Jorge ; Rieger, Phillip ; Fereidooni, Hossein ; Chen, Qian ; Sadeghi, Ahmad-Reza
Kind of entry: Bibliography
Title: FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks
Language: English
Publication date: 4 December 2023
Publisher: ACM
Book title: ACSAC '23: Proceedings of the 39th Annual Computer Security Applications Conference
Event title: 39th Annual Computer Security Applications Conference (ACSAC'23)
Event location: Austin, USA
Event date: 04.12.2023 - 08.12.2023
DOI: 10.1145/3627106.3627194
URL / URN: https://dl.acm.org/doi/abs/10.1145/3627106.3627194

Department(s)/division(s): 20 Department of Computer Science
20 Department of Computer Science > System Security
Profile Areas
Profile Areas > Cybersecurity (CYSEC)
Date deposited: 20 Jun 2024 13:55
Last modified: 20 Jun 2024 13:55