
One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks

Geigle, Gregor ; Liu, Chen ; Pfeiffer, Jonas ; Gurevych, Iryna (2023)
One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks.
Conference publication, Bibliography

Abstract

Current multimodal models, aimed at solving Vision and Language (V+L) tasks, predominantly repurpose Vision Encoders (VE) as feature extractors. While many VEs—of different architectures, trained on different data and objectives—are publicly available, they are not designed for the downstream V+L tasks. Nonetheless, most current work assumes that a single pre-trained VE can serve as a general-purpose encoder. In this work, we focus on analysis and aim to understand whether the information stored within different VEs is complementary, i.e., whether providing the model with features from multiple VEs can improve performance on a target task, and how those features are best combined. We exhaustively experiment with three popular VEs on six downstream V+L tasks and analyze the attention and VE-dropout patterns. Our analyses suggest that diverse VEs complement each other, resulting in improved downstream V+L task performance, and that the improvements are not due to simple ensemble effects (i.e., performance does not always improve when the number of encoders is increased). We demonstrate that future VEs, explicitly designed for V+L tasks rather than repurposed, have the potential to improve performance on the target V+L tasks.
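To make the setup concrete, the following is a minimal, hypothetical PyTorch sketch of one way to feed a V+L model features from several vision encoders at once, including an encoder-level dropout in the spirit of the VE-dropout analysis mentioned in the abstract. The class name MultiVEFusion, the feature dimensions, and the fusion-by-concatenation scheme are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultiVEFusion(nn.Module):
    """Hypothetical sketch: project features from several frozen vision
    encoders into a shared space and concatenate them as one joint
    visual input sequence for a V+L model."""

    def __init__(self, ve_dims, hidden_dim=768, ve_dropout_p=0.1):
        super().__init__()
        # One projection per vision encoder (feature widths differ per VE).
        self.projections = nn.ModuleList(
            nn.Linear(d, hidden_dim) for d in ve_dims
        )
        # "VE dropout": randomly zero out an entire encoder's features
        # during training, so the model cannot rely on a single VE.
        self.ve_dropout_p = ve_dropout_p

    def forward(self, ve_features):
        projected = []
        for proj, feats in zip(self.projections, ve_features):
            x = proj(feats)  # (batch, tokens_i, hidden_dim)
            if self.training and torch.rand(()) < self.ve_dropout_p:
                x = torch.zeros_like(x)  # drop this encoder entirely
            projected.append(x)
        # Concatenate along the token axis: the V+L model then attends
        # over tokens coming from all encoders jointly.
        return torch.cat(projected, dim=1)

# Example: a CLIP-like (512-d), ViT-like (768-d) and CNN-like (2048-d)
# encoder produce token sequences of different lengths and widths.
fusion = MultiVEFusion(ve_dims=[512, 768, 2048])
feats = [
    torch.randn(2, 50, 512),
    torch.randn(2, 197, 768),
    torch.randn(2, 49, 2048),
]
joint = fusion(feats)
print(joint.shape)  # torch.Size([2, 296, 768])
```

Concatenating along the token axis (rather than averaging or summing) lets the downstream model's attention weight each encoder's tokens separately, which is one plausible way the attention patterns analyzed in the paper could reveal which encoder a task relies on.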

Item type: Conference publication
Published: 2023
Author(s): Geigle, Gregor ; Liu, Chen ; Pfeiffer, Jonas ; Gurevych, Iryna
Entry type: Bibliography
Title: One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks
Language: English
Date of publication: 10 July 2023
Place of publication: Toronto, Canada
Publisher: Association for Computational Linguistics
Book title: Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)
URL / URN: https://aclanthology.org/2023.repl4nlp-1.9/

Uncontrolled keywords: UKP_p_MISRIK, UKP_p_LOEWE_Spitzenprofessur
Division(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 30 Aug 2023 11:22
Last modified: 30 Aug 2023 11:36
PPN: 511163908