CrowdGuard: Federated Backdoor Detection in Federated Learning

Rieger, Phillip ; Krauß, Torsten ; Miettinen, Markus ; Dmitrienko, Alexandra ; Sadeghi, Ahmad-Reza (2024)
CrowdGuard: Federated Backdoor Detection in Federated Learning.
Network and Distributed System Security (NDSS) Symposium 2024. San Diego, USA (26.02.24-01.03.24)
doi: 10.14722/ndss.2024.23233
Conference or Workshop Item, Bibliography

Abstract

Federated Learning (FL) is a promising approach that enables multiple clients to train Deep Neural Networks (DNNs) collaboratively without sharing their local training data. However, FL is susceptible to backdoor (or targeted poisoning) attacks, in which malicious clients compromise the learning process by embedding specific behaviors into the learned model that can be triggered by carefully crafted inputs. Existing FL safeguards have various limitations: they are restricted to specific data distributions, reduce the global model's accuracy by excluding benign models or adding noise, are vulnerable to adaptive, defense-aware adversaries, or require the server to access local models, which enables data inference attacks.
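
The attack model described above can be illustrated with a minimal sketch (not from the paper, purely illustrative): a malicious client stamps a small trigger patch onto part of its local training images and relabels them to an attacker-chosen target class before local training, so the resulting model update carries the backdoor. Function and parameter names here are assumptions for illustration.

import torch

def poison_batch(images: torch.Tensor, labels: torch.Tensor,
                 target_class: int = 0, poison_fraction: float = 0.3):
    """images: (N, C, H, W) scaled to [0, 1]; labels: (N,). Returns poisoned copies."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_fraction * images.size(0))
    # A white 3x3 patch in the bottom-right corner acts as the backdoor trigger.
    images[:n_poison, :, -3:, -3:] = 1.0
    # Poisoned samples are relabeled to the attacker-chosen target class.
    labels[:n_poison] = target_class
    return images, labels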

This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in FL and overcomes the deficiencies of existing techniques. It leverages clients' feedback on individual models, analyzes the behavior of neurons in hidden layers, and eliminates poisoned models through an iterative pruning scheme. CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback. The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios, including IID and non-IID data distributions. Additionally, CrowdGuard withstands adaptive adversaries while preserving the original performance of protected models. To ensure confidentiality, CrowdGuard uses a secure and privacy-preserving architecture leveraging Trusted Execution Environments (TEEs) on both client and server sides.
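
The following is a minimal sketch of the general idea, assuming a PyTorch setting; it is a simplified stand-in, not the authors' implementation. Each client scores the submitted local models by comparing their hidden-layer activations on its own data against the current global model, and the server clusters the stacked client scores to prune the outlier group before aggregation. All function names (hidden_activations, client_votes, server_filter) and the use of cosine distance and agglomerative clustering are illustrative assumptions; the paper's actual analysis and stacked clustering scheme differ in detail.

from typing import List
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import AgglomerativeClustering

def hidden_activations(model: nn.Module, layer: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Capture the output of one hidden layer for a batch of local validation data."""
    captured = {}
    handle = layer.register_forward_hook(lambda m, inp, out: captured.update(out=out.detach()))
    with torch.no_grad():
        model(x)
    handle.remove()
    return captured["out"].flatten(1)  # shape: (batch, features)

def client_votes(global_model: nn.Module, local_models: List[nn.Module],
                 layer_name: str, x_local: torch.Tensor) -> np.ndarray:
    """Client-side step: score every submitted model by how far its hidden-layer
    behavior deviates from the global model on the client's own data.
    Higher score = more suspicious."""
    ref = hidden_activations(global_model, dict(global_model.named_modules())[layer_name], x_local)
    scores = []
    for m in local_models:
        act = hidden_activations(m, dict(m.named_modules())[layer_name], x_local)
        # Cosine distance between activation patterns, averaged over the batch.
        cos = torch.nn.functional.cosine_similarity(act, ref, dim=1)
        scores.append(float(1.0 - cos.mean()))
    return np.asarray(scores)

def server_filter(vote_matrix: np.ndarray) -> np.ndarray:
    """Server-side step: stack all clients' score vectors (rows = voting clients,
    columns = evaluated models), cluster the models into two groups, and prune
    the minority group as potentially poisoned. Returns indices of kept models."""
    per_model = vote_matrix.T  # one feature vector per evaluated model
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(per_model)
    benign_label = np.bincount(labels).argmax()  # majority cluster assumed benign
    return np.where(labels == benign_label)[0]

In this simplified view, only the models surviving server_filter would be aggregated into the next global model; in CrowdGuard, the feedback itself is additionally protected against rogue clients and processed inside TEEs.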

Item Type: Conference or Workshop Item
Published: 2024
Creators: Rieger, Phillip ; Krauß, Torsten ; Miettinen, Markus ; Dmitrienko, Alexandra ; Sadeghi, Ahmad-Reza
Type of entry: Bibliography
Title: CrowdGuard: Federated Backdoor Detection in Federated Learning
Language: English
Date: 26 February 2024
Place of Publication: San Diego, USA
Book Title: Network and Distributed System Security (NDSS) Symposium 2024
Event Title: Network and Distributed System Security (NDSS) Symposium 2024
Event Location: San Diego, USA
Event Dates: 26.02.24-01.03.24
DOI: 10.14722/ndss.2024.23233
URL / URN: https://www.ndss-symposium.org/ndss-paper/crowdguard-federat...
Divisions: 20 Department of Computer Science
20 Department of Computer Science > System Security Lab
Profile Areas
Profile Areas > Cybersecurity (CYSEC)
Date Deposited: 18 Jun 2024 07:21
Last Modified: 18 Jun 2024 07:48
PPN: 519210689