
Triple-Encoders: Representations That Fire Together, Wire Together

Erker, Justus-Jonas ; Mai, Florian ; Reimers, Nils ; Spanakis, Gerasimos ; Gurevych, Iryna (2024)
Triple-Encoders: Representations That Fire Together, Wire Together.
62nd Annual Meeting of the Association for Computational Linguistics. Bangkok, Thailand (11.08.2024 - 16.08.2024)
Conference publication, Bibliography

Abstract

Search-based dialog models typically re-encode the dialog history at every turn, incurring high cost. Curved Contrastive Learning, a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder, has recently shown promising results for dialog modeling at far superior efficiency. While high efficiency is achieved through independently encoding utterances, this ignores the importance of contextualization. To overcome this issue, this study introduces triple-encoders, which efficiently compute distributed utterance mixtures from these independently encoded utterances through a novel Hebbian-inspired co-occurrence learning objective in a self-organizing manner, without using any weights, i.e., merely through local interactions. Empirically, we find that triple-encoders lead to a substantial improvement over bi-encoders, and even to better zero-shot generalization than single-vector representation models, without requiring re-encoding. Our code (https://github.com/UKPLab/acl2024-triple-encoders) and model (https://huggingface.co/UKPLab/triple-encoders-dailydialog) are publicly available.
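To make the mechanism described in the abstract more concrete, the following is a minimal, illustrative Python sketch of the scoring idea: context utterances are encoded independently by a bi-encoder, pairs of their embeddings are combined through a purely local, weight-free interaction, and a candidate response is scored against these mixtures. The stand-in encoder model name and the element-wise mean as mixing function are assumptions made for illustration only; the authors' actual implementation is in the repository linked above.

# Illustrative sketch, not the authors' implementation; the mixing function
# (normalized element-wise mean) and the encoder model are assumptions.
import itertools
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in bi-encoder

context = [
    "Hi, how are you?",
    "Great, I just got back from a hike.",
    "Where did you go?",
]
candidate = "We walked up to the old fire tower near the lake."

# Each utterance is encoded once, independently; the history is not re-encoded.
ctx_emb = encoder.encode(context, normalize_embeddings=True)
cand_emb = encoder.encode(candidate, normalize_embeddings=True)

# Combine pairs of context embeddings through a local, weight-free
# interaction and score the candidate by its average cosine similarity
# to the resulting mixtures.
sims = []
for i, j in itertools.combinations(range(len(ctx_emb)), 2):
    mix = (ctx_emb[i] + ctx_emb[j]) / 2.0
    mix /= np.linalg.norm(mix)
    sims.append(float(mix @ cand_emb))
print("candidate score:", np.mean(sims))

Because the mixtures are formed by local vector arithmetic over cached embeddings, a new dialog turn only requires encoding the single new utterance, which is where the efficiency gain over re-encoding the full history comes from.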

Type of entry: Conference publication
Published: 2024
Author(s): Erker, Justus-Jonas ; Mai, Florian ; Reimers, Nils ; Spanakis, Gerasimos ; Gurevych, Iryna
Kind of entry: Bibliography
Title: Triple-Encoders: Representations That Fire Together, Wire Together
Language: English
Year of publication: August 2024
Publisher: ACL
Book title: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Event title: 62nd Annual Meeting of the Association for Computational Linguistics
Event location: Bangkok, Thailand
Event dates: 11.08.2024 - 16.08.2024
URL / URN: https://aclanthology.org/2024.acl-long.290/
Free keywords: UKP_p_LOEWE_Spitzenprofessur, UKP_p_code_transformers, UKP_p_privacy_in_texts
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 20 Aug 2024 08:58
Last modified: 26 Nov 2024 09:11
PPN: 524121656