TU Darmstadt / ULB / TUbiblio

Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data

Baumgärtner, Tim ; Gao, Yang ; Alon, Dana ; Metzler, Donald (2024)
Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data.
1st Conference on Language Modeling. Philadelphia, USA (07.10.2024 - 09.10.2024)
Konferenzveröffentlichung, Bibliographie

Kurzbeschreibung (Abstract)

Reinforcement Learning from Human Feedback (RLHF) is a popular method for aligning Language Models (LM) with human values and preferences. RLHF requires a large number of preference pairs as training data, which are often used in both the Supervised Fine-Tuning and Reward Model training and therefore publicly available datasets are commonly used. In this work, we study to what extent a malicious actor can manipulate the LMs generations by poisoning the preferences, i.e., injecting poisonous preference pairs into these datasets and the RLHF training process. We propose strategies to build poisonous preference pairs and test their performance by poisoning two widely used preference datasets. Our results show that preference poisoning is highly effective: injecting a small amount of poisonous data (1-5% of the original dataset), we can effectively manipulate the LM to generate a target entity in a target sentiment (positive or negative). The findings from our experiments also shed light on strategies to defend against the preference poisoning attack.

Typ des Eintrags: Konferenzveröffentlichung
Erschienen: 2024
Autor(en): Baumgärtner, Tim ; Gao, Yang ; Alon, Dana ; Metzler, Donald
Art des Eintrags: Bibliographie
Titel: Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data
Sprache: Englisch
Publikationsjahr: 18 Oktober 2024
Veranstaltungstitel: 1st Conference on Language Modeling
Veranstaltungsort: Philadelphia, USA
Veranstaltungsdatum: 07.10.2024 - 09.10.2024
URL / URN: https://openreview.net/forum?id=v74mJURD1L#discussion
Kurzbeschreibung (Abstract):

Reinforcement Learning from Human Feedback (RLHF) is a popular method for aligning Language Models (LM) with human values and preferences. RLHF requires a large number of preference pairs as training data, which are often used in both the Supervised Fine-Tuning and Reward Model training and therefore publicly available datasets are commonly used. In this work, we study to what extent a malicious actor can manipulate the LMs generations by poisoning the preferences, i.e., injecting poisonous preference pairs into these datasets and the RLHF training process. We propose strategies to build poisonous preference pairs and test their performance by poisoning two widely used preference datasets. Our results show that preference poisoning is highly effective: injecting a small amount of poisonous data (1-5% of the original dataset), we can effectively manipulate the LM to generate a target entity in a target sentiment (positive or negative). The findings from our experiments also shed light on strategies to defend against the preference poisoning attack.

Fachbereich(e)/-gebiet(e): 20 Fachbereich Informatik
20 Fachbereich Informatik > Ubiquitäre Wissensverarbeitung
Hinterlegungsdatum: 25 Okt 2024 14:21
Letzte Änderung: 25 Okt 2024 14:21
PPN:
Export:
Suche nach Titel in: TUfind oder in Google
Frage zum Eintrag Frage zum Eintrag

Optionen (nur für Redakteure)
Redaktionelle Details anzeigen Redaktionelle Details anzeigen