
FLAIRS: FPGA-Accelerated Inference-Resistant & Secure Federated Learning

Li, Huimin ; Rieger, Phillip ; Zeitouni, Shaza ; Picek, Stjepan ; Sadeghi, Ahmad-Reza (2023)
FLAIRS: FPGA-Accelerated Inference-Resistant & Secure Federated Learning.
33rd International Conference on Field-Programmable Logic and Applications. Gothenburg, Sweden (04.09.2023-08.09.2023)
doi: https://doi.org/10.1109/FPL60245.2023.00046
Conference publication, Bibliography

Abstract

Federated Learning (FL) has become very popular since it enables clients to train a joint model collaboratively without sharing their private data. However, FL has been shown to be susceptible to backdoor and inference attacks: in the former, the adversary injects manipulated updates into the aggregation process; in the latter, it leverages clients' local models to deduce their private data. Contemporary solutions to FL's security concerns are either impractical for real-world deployment due to high performance overheads or tailored to specific threats, for instance, privacy-preserving aggregation or backdoor defenses. Given these limitations, our research explores the FPGA-based computing paradigm to overcome the performance bottlenecks of software-only solutions while mitigating both backdoor and inference attacks. We utilize FPGA-based enclaves to counter inference attacks during FL's aggregation process, and we adopt an advanced backdoor-aware aggregation algorithm on the FPGA to counter backdoor attacks. We implemented and evaluated our method on the Xilinx VMK-180, yielding a significant speed-up of around 300 times on the IoT-Traffic dataset and more than 506 times on the CIFAR-10 dataset.
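For context, the aggregation step that the paper accelerates on the FPGA is, in its generic form, a weighted average of client model updates (FedAvg-style). The record does not specify the paper's actual backdoor-aware algorithm, so the sketch below is only an illustrative software baseline; the function name `aggregate` and the uniform weighting are assumptions.

```python
import numpy as np

def aggregate(client_updates, weights=None):
    """Weighted average of client model updates.

    client_updates: list of models, each a list of parameter arrays (one
    entry per layer). Returns the aggregated global model in the same
    layout. This is the plain FedAvg baseline, not the paper's
    backdoor-aware variant.
    """
    n = len(client_updates)
    if weights is None:
        weights = [1.0 / n] * n  # uniform weighting across clients
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*client_updates)  # iterate layer-wise
    ]

# Example: three clients, each with a single-layer model.
updates = [
    [np.array([1.0, 2.0])],
    [np.array([3.0, 4.0])],
    [np.array([5.0, 6.0])],
]
global_model = aggregate(updates)  # layer-wise mean of the three updates
```

A backdoor-aware aggregator would add a filtering or clipping step over `client_updates` before averaging; the performance cost of such checks on large models is precisely what motivates the FPGA offload described in the abstract.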

Type of entry: Conference publication
Published: 2023
Author(s): Li, Huimin ; Rieger, Phillip ; Zeitouni, Shaza ; Picek, Stjepan ; Sadeghi, Ahmad-Reza
Kind of entry: Bibliography
Title: FLAIRS: FPGA-Accelerated Inference-Resistant & Secure Federated Learning
Language: English
Publication date: 2 November 2023
Publisher: IEEE
Book title: Proceedings of the 2023 33rd International Conference on Field-Programmable Logic and Applications (FPL)
Event title: 33rd International Conference on Field-Programmable Logic and Applications
Event location: Gothenburg, Sweden
Event dates: 04.09.2023-08.09.2023
DOI: https://doi.org/10.1109/FPL60245.2023.00046

Department(s)/area(s): 20 Department of Computer Science
20 Department of Computer Science > System Security
Profile areas
Profile areas > Cybersecurity (CYSEC)
Date deposited: 27 Nov 2023 14:53
Last modified: 27 Nov 2023 14:53
