TU Darmstadt / ULB / TUbiblio

FLGUARD: Secure and Private Federated Learning

Nguyen, Thien Duc ; Rieger, Phillip ; Yalame, Mohammad Hossein ; Möllering, Helen ; Fereidooni, Hossein ; Marchal, Samuel ; Miettinen, Markus ; Mirhoseini, Azalia ; Sadeghi, Ahmad-Reza ; Schneider, Thomas ; Zeitouni, Shaza (2021)
FLGUARD: Secure and Private Federated Learning.
Report, Bibliography

Abstract

Recently, a number of backdoor attacks against Federated Learning (FL) have been proposed. In such attacks, an adversary injects poisoned model updates into the federated model aggregation process with the goal of manipulating the aggregated model to provide false predictions on specific adversary-chosen inputs. Several defenses have been proposed, but none of them can effectively protect the FL process against so-called multi-backdoor attacks, in which the adversary injects multiple different backdoors simultaneously, without severely impacting the benign performance of the aggregated model. To overcome this challenge, we introduce FLGUARD, a poisoning defense framework that is able to defend FL against state-of-the-art backdoor attacks while simultaneously maintaining the benign performance of the aggregated model. Moreover, FL is also vulnerable to inference attacks, in which a malicious aggregator can infer information about clients' training data from their model updates. To thwart such attacks, we augment FLGUARD with state-of-the-art secure computation techniques that securely evaluate the FLGUARD algorithm. We provide formal arguments for the effectiveness of FLGUARD and extensively evaluate it against known backdoor attacks on several datasets and applications (including image classification, word prediction, and IoT intrusion detection), demonstrating that FLGUARD can entirely remove backdoors with a negligible effect on accuracy. We also show that private FLGUARD achieves practical runtimes.
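To illustrate the general idea of a poisoning defense for FL aggregation — filtering out model updates that deviate strongly from the majority before averaging — here is a minimal, self-contained sketch. This is a generic cosine-distance outlier filter, not FLGUARD's actual algorithm; the function name, threshold, and toy updates are all hypothetical:

```python
import numpy as np

def filter_and_aggregate(updates, threshold=0.5):
    """Drop client updates whose cosine distance to the coordinate-wise
    median update exceeds `threshold`, then average the rest.
    Generic illustration only -- not FLGUARD's actual defense."""
    updates = np.asarray(updates, dtype=float)
    median = np.median(updates, axis=0)  # robust reference direction

    def cos_dist(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    accepted = [u for u in updates if cos_dist(u, median) <= threshold]
    return np.mean(accepted, axis=0)  # aggregate only the accepted updates

# Three benign updates point in roughly the same direction; the
# poisoned update points the opposite way and gets filtered out.
benign = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9]]
poisoned = [[-1.0, -1.0]]
aggregate = filter_and_aggregate(benign + poisoned)
```

In this toy example the poisoned update has cosine distance ~2 to the median direction and is excluded, so the aggregate equals the mean of the benign updates. A real multi-backdoor defense must handle far subtler deviations and, as the abstract notes, FLGUARD additionally evaluates its filtering under secure computation so the aggregator never sees plaintext updates.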

Type of entry: Report
Published: 2021
Author(s): Nguyen, Thien Duc ; Rieger, Phillip ; Yalame, Mohammad Hossein ; Möllering, Helen ; Fereidooni, Hossein ; Marchal, Samuel ; Miettinen, Markus ; Mirhoseini, Azalia ; Sadeghi, Ahmad-Reza ; Schneider, Thomas ; Zeitouni, Shaza
Kind of entry: Bibliography
Title: FLGUARD: Secure and Private Federated Learning
Language: English
Date of publication: 21 January 2021
Publisher: arXiv
Series: Cryptography and Security
Edition: 2nd version
URL / URN: https://arxiv.org/abs/2101.02281v2
Uncontrolled keywords: Solutions, S2
Additional information:

Preprint

Department(s)/division(s): 20 Department of Computer Science
20 Department of Computer Science > System Security
DFG Collaborative Research Centres (incl. Transregio)
DFG Collaborative Research Centres (incl. Transregio) > Collaborative Research Centres
Profile Areas
Profile Areas > Cybersecurity (CYSEC)
DFG Collaborative Research Centres (incl. Transregio) > Collaborative Research Centres > SFB 1119: CROSSING – Cryptography-Based Security Solutions as a Basis for Trust in Today's and Future IT Systems
Date deposited: 29 Mar 2021 10:03
Last modified: 11 Jul 2024 10:03