
Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval

Geigle, Gregor ; Pfeiffer, Jonas ; Reimers, Nils ; Vulić, Ivan ; Gurevych, Iryna (2022)
Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval.
In: Transactions of the Association for Computational Linguistics, 10
doi: 10.1162/tacl_a_00473
Article, Bibliography

Abstract

Current state-of-the-art approaches to cross-modal retrieval process text and visual input jointly, relying on Transformer-based architectures with cross-attention mechanisms that attend over all words and objects in an image. While offering unmatched retrieval performance, such models 1) are typically pretrained from scratch and thus less scalable, and 2) suffer from huge retrieval latency and inefficiency issues, which makes them impractical in realistic applications. To address these crucial gaps towards both improved and efficient cross-modal retrieval, we propose a novel fine-tuning framework that turns any pretrained text-image multi-modal model into an efficient retrieval model. The framework is based on a cooperative retrieve-and-rerank approach that combines: 1) twin networks (i.e., a bi-encoder) to separately encode all items of a corpus, enabling efficient initial retrieval, and 2) a cross-encoder component for a more nuanced (i.e., smarter) ranking of the retrieved small set of items. We also propose to jointly fine-tune the two components with shared weights, yielding a more parameter-efficient model. Our experiments on a series of standard cross-modal retrieval benchmarks in monolingual, multilingual, and zero-shot setups demonstrate improved accuracy and huge efficiency benefits over the state-of-the-art cross-encoders.
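
The abstract describes a two-stage pipeline: a bi-encoder pre-encodes the whole corpus so candidates can be retrieved cheaply, and a cross-encoder re-scores only the small retrieved set. The following is a minimal, hypothetical sketch of that control flow, using generic NumPy vectors and a stand-in scoring function; it is not the authors' implementation and does not model the shared-weight joint fine-tuning from the paper.

    # Sketch of retrieve-and-rerank: cheap bi-encoder retrieval, expensive reranking.
    import numpy as np

    def bi_encoder_retrieve(query_vec, corpus_vecs, k=10):
        """Stage 1: score the query against every pre-encoded corpus item
        with a dot product and keep the top-k candidates."""
        scores = corpus_vecs @ query_vec
        top_k = np.argsort(-scores)[:k]
        return top_k, scores[top_k]

    def cross_encoder_rerank(query_vec, candidate_ids, score_fn):
        """Stage 2: re-score only the k retrieved candidates with a more
        expensive joint scoring function and sort by that score."""
        rescored = [(idx, score_fn(query_vec, idx)) for idx in candidate_ids]
        return sorted(rescored, key=lambda pair: pair[1], reverse=True)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        corpus_vecs = rng.normal(size=(1000, 256))   # pre-encoded image corpus (hypothetical)
        query_vec = rng.normal(size=256)             # encoded text query (hypothetical)

        candidate_ids, _ = bi_encoder_retrieve(query_vec, corpus_vecs, k=20)
        # Stand-in for a cross-encoder forward pass; in the paper both
        # components are obtained from one jointly fine-tuned multi-modal model.
        dummy_cross_score = lambda q, idx: float(corpus_vecs[idx] @ q)
        ranking = cross_encoder_rerank(query_vec, candidate_ids, dummy_cross_score)
        print(ranking[:5])

The efficiency argument is that the quadratic-cost cross-attention model only ever sees k candidates per query instead of the full corpus.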

Type of entry: Article
Published: 2022
Author(s): Geigle, Gregor ; Pfeiffer, Jonas ; Reimers, Nils ; Vulić, Ivan ; Gurevych, Iryna
Kind of entry: Bibliography
Title: Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval
Language: English
Date of publication: 4 May 2022
Publisher: MIT Press
Journal, newspaper or series title: Transactions of the Association for Computational Linguistics
Journal volume: 10
DOI: 10.1162/tacl_a_00473
URL / URN: https://transacl.org/ojs/index.php/tacl/article/view/3383
Related links:

Uncontrolled keywords: UKP_p_emergencity, UKP_p_DIP, UKP_p_seditrah_factcheck, emergenCITY_INF
Division(s)/department(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
LOEWE
LOEWE > LOEWE Centres
LOEWE > LOEWE Centres > emergenCITY
Date deposited: 04 Jan 2022 08:05
Last modified: 27 Oct 2022 09:26
PPN: