
MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer

Ansell, Alan ; Ponti, Edoardo Maria ; Pfeiffer, Jonas ; Ruder, Sebastian ; Glavaš, Goran ; Vulić, Ivan ; Korhonen, Anna (2021)
MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer.
Conference on Empirical Methods in Natural Language Processing (EMNLP 2021). Punta Cana, Dominican Republic (07.11.2021-11.11.2021)
Conference publication, Bibliography

Abstract

Adapter modules have emerged as a general parameter-efficient means to specialize a pretrained encoder to new domains. Massively multilingual transformers (MMTs) have particularly benefited from additional training of language-specific adapters. However, this approach is not viable for the vast majority of languages, due to limitations in their corpus size or compute budgets. In this work, we propose MAD-G (Multilingual ADapter Generation), which contextually generates language adapters from language representations based on typological features. In contrast to prior work, our time- and space-efficient MAD-G approach enables (1) sharing of linguistic knowledge across languages and (2) zero-shot inference by generating language adapters for unseen languages. We thoroughly evaluate MAD-G in zero-shot cross-lingual transfer on part-of-speech tagging, dependency parsing, and named entity recognition. While offering (1) improved fine-tuning efficiency (by a factor of around 50 in our experiments), (2) a smaller parameter budget, and (3) increased language coverage, MAD-G remains competitive with more expensive methods for language-specific adapter training across the board. Moreover, it offers substantial benefits for low-resource languages, particularly on the NER task in low-resource African languages. Finally, we demonstrate that MAD-G’s transfer performance can be further improved via: (i) multi-source training, i.e., by generating and combining adapters of multiple languages with available task-specific training data; and (ii) by further fine-tuning generated MAD-G adapters for languages with monolingual data.
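The core mechanism described above, a contextual parameter generator that maps a typological language representation to the weights of a language adapter, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering, not the paper's exact architecture: the dimensions, the tanh language embedding, and the use of lang2vec-style URIEL syntax features as the typological input are all illustrative assumptions.

```python
# Minimal sketch of contextual adapter generation in the spirit of MAD-G:
# a hypernetwork maps a typological language vector to the weights of a
# bottleneck adapter. All dimensions are illustrative assumptions; a real
# system would generate separate adapter parameters per transformer layer.
import torch
import torch.nn as nn


class AdapterGenerator(nn.Module):
    """Generates bottleneck-adapter parameters from a language embedding."""

    def __init__(self, typo_dim: int, lang_dim: int, hidden: int, bottleneck: int):
        super().__init__()
        # Project raw typological features into a dense language embedding.
        self.lang_embed = nn.Linear(typo_dim, lang_dim)
        # One linear head per generated parameter tensor.
        self.gen_down = nn.Linear(lang_dim, hidden * bottleneck)
        self.gen_up = nn.Linear(lang_dim, bottleneck * hidden)
        self.hidden, self.bottleneck = hidden, bottleneck

    def forward(self, typo_feats: torch.Tensor):
        z = torch.tanh(self.lang_embed(typo_feats))
        w_down = self.gen_down(z).view(self.bottleneck, self.hidden)
        w_up = self.gen_up(z).view(self.hidden, self.bottleneck)
        return w_down, w_up


def adapter_forward(h, w_down, w_up):
    # Standard bottleneck adapter with a residual connection.
    return h + torch.relu(h @ w_down.T) @ w_up.T


# Zero-shot usage: feed the typological vector of an unseen language.
gen = AdapterGenerator(typo_dim=103, lang_dim=32, hidden=768, bottleneck=48)
typo = torch.rand(103)    # stand-in for a URIEL/lang2vec feature vector
w_down, w_up = gen(typo)
h = torch.randn(4, 768)   # a batch of MMT hidden states
out = adapter_forward(h, w_down, w_up)
print(out.shape)          # torch.Size([4, 768])
```

Because only the (shared) generator is trained, an adapter for an unseen language costs nothing beyond its typological vector, which is what enables the zero-shot generation and the reduced parameter budget the abstract refers to.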

Type of entry: Conference publication
Published: 2021
Author(s): Ansell, Alan ; Ponti, Edoardo Maria ; Pfeiffer, Jonas ; Ruder, Sebastian ; Glavaš, Goran ; Vulić, Ivan ; Korhonen, Anna
Type of record: Bibliography
Title: MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer
Language: English
Date of publication: 7 November 2021
Publisher: ACL
Book title: Findings of the Association for Computational Linguistics: EMNLP 2021
Event title: Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
Event location: Punta Cana, Dominican Republic
Event dates: 07.11.2021-11.11.2021
URL / URN: https://aclanthology.org/2021.findings-emnlp.410/

Free keywords: UKP_p_emergencity, emergenCITY_INF
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
LOEWE
LOEWE > LOEWE Centres
LOEWE > LOEWE Centres > emergenCITY
Date deposited: 22 Dec 2021 11:13
Last modified: 28 Feb 2023 13:53