
The Devil is in the Details: On Models and Training Regimes for Few-Shot Intent Classification

Mesgar, Mohsen ; Tran, Thy Thy ; Glavaš, Goran ; Gurevych, Iryna (2023)
The Devil is in the Details: On Models and Training Regimes for Few-Shot Intent Classification.
17th Conference of the European Chapter of the Association for Computational Linguistics. Dubrovnik, Croatia (02.-06.05.2023)
Conference publication, Bibliography

Abstract

In task-oriented dialog (ToD), new intents emerge on a regular basis, with at best a handful of available utterances. This renders effective Few-Shot Intent Classification (FSIC) a central challenge for modular ToD systems. Recent FSIC methods appear similar: they use pretrained language models (PLMs) to encode utterances and predominantly resort to nearest-neighbor-based inference. However, they also differ in major components: they start from different PLMs, use different encoding architectures and utterance similarity functions, and adopt different training regimes. The coupling of these vital components, together with the lack of informative ablations, prevents the identification of the factors that drive the (reported) FSIC performance. We propose a unified framework to evaluate these components along the following key dimensions: (1) Encoding architectures: Cross-Encoder vs. Bi-Encoder; (2) Similarity function: parameterized (i.e., trainable) vs. non-parameterized; (3) Training regimes: episodic meta-learning vs. conventional (i.e., non-episodic) training. Our experimental results on seven FSIC benchmarks reveal three new important findings. First, the unexplored combination of cross-encoder architecture and episodic meta-learning consistently yields the best FSIC performance. Second, episodic training substantially outperforms its non-episodic counterpart. Finally, we show that splitting episodes into support and query sets has a limited and inconsistent effect on performance. Our findings show the importance of ablations and fair comparisons in FSIC. We publicly release our code and data.
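The episodic setup (N-way K-shot episodes split into support and query sets) and nearest-neighbor-based inference mentioned in the abstract can be illustrated with a minimal toy sketch. This is not the paper's implementation: the function names, cosine similarity choice, and episode parameters are all assumptions for illustration, and utterances are stood in for by plain embedding vectors.

```python
import random

def sample_episode(data, n_way, k_shot, q_queries, rng):
    """Sample an N-way K-shot episode from `data` (intent label ->
    list of utterance embeddings), split into support and query sets."""
    intents = rng.sample(sorted(data), n_way)
    support, query = [], []
    for label in intents:
        examples = rng.sample(data[label], k_shot + q_queries)
        support += [(vec, label) for vec in examples[:k_shot]]
        query += [(vec, label) for vec in examples[k_shot:]]
    return support, query

def cosine(u, v):
    """Non-parameterized similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def predict(support, query_vec):
    """Nearest-neighbor inference: assign the label of the most
    similar support utterance to the query utterance."""
    best_label, best_sim = None, float("-inf")
    for vec, label in support:
        sim = cosine(vec, query_vec)
        if sim > best_sim:
            best_sim, best_label = sim, label
    return best_label
```

In the paper's terms, `cosine` plays the role of a non-parameterized similarity function over a bi-encoder's independent utterance embeddings; a cross-encoder or parameterized similarity would replace it with a trained scoring model over utterance pairs.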

Entry type: Conference publication
Published: 2023
Author(s): Mesgar, Mohsen ; Tran, Thy Thy ; Glavaš, Goran ; Gurevych, Iryna
Type of entry: Bibliography
Title: The Devil is in the Details: On Models and Training Regimes for Few-Shot Intent Classification
Language: English
Year of publication: 2 May 2023
Publisher: ACL
Book title: The 17th Conference of the European Chapter of the Association for Computational Linguistics - proceedings of the conference
Event title: 17th Conference of the European Chapter of the Association for Computational Linguistics
Event location: Dubrovnik, Croatia
Event date: 02.-06.05.2023
URL / URN: https://aclanthology.org/2023.eacl-main.135/

Free keywords: UKP_p_square, UKP_p_SERMAS
Department(s)/Field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 12 Jun 2023 12:29
Last modified: 09 Aug 2023 12:42
PPN: 51046971X