
Rediscovering Hashed Random Projections for Efficient Quantization of Contextualized Sentence Embeddings

Hamster, Ulf A. ; Lee, Ji-Ung ; Geyken, Alexander ; Gurevych, Iryna (2023)
Rediscovering Hashed Random Projections for Efficient Quantization of Contextualized Sentence Embeddings.
doi: 10.48550/arXiv.2304.02481
Report, Bibliography

Abstract

Training and inference on edge devices often require an efficient setup due to computational limitations. While pre-computing data representations and caching them on a server can mitigate extensive edge-device computation, this leads to two challenges: first, the amount of storage required on the server scales linearly with the number of instances; second, sending such large amounts of data to an edge device requires considerable bandwidth. To reduce the memory footprint of pre-computed data representations, we propose a simple yet effective approach that uses randomly initialized hyperplane projections. To further reduce their size by up to 98.96%, we quantize the resulting floating-point representations into binary vectors. Despite the greatly reduced size, we show that the embeddings remain effective for training models across various English and German sentence classification tasks, retaining 94%--99% of their floating-point performance.
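
The method sketched in the abstract is essentially sign-based locality-sensitive hashing: embeddings are projected onto randomly initialized hyperplanes and only the sign bit of each projection is kept. Below is a minimal NumPy sketch of that idea; the dimensions (768-d float32 embeddings, 1024-bit codes) and all function names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of hashed random projections for binary quantization
# of sentence embeddings (illustrative assumptions, not the paper's code).
import numpy as np

def make_projection(input_dim: int, output_dim: int, seed: int = 42) -> np.ndarray:
    """Randomly initialized hyperplane projection matrix."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(input_dim, output_dim)).astype(np.float32)

def hash_embeddings(embeddings: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project float embeddings onto random hyperplanes, keep only the
    sign of each projection, and pack 8 bits per byte for storage."""
    signs = (embeddings @ projection) > 0  # (n, output_dim) booleans
    return np.packbits(signs, axis=-1)     # (n, output_dim // 8) uint8

# Example: 768-d float32 sentence embeddings -> 1024-bit binary codes.
# 768 * 4 bytes = 3072 bytes per vector vs. 1024 / 8 = 128 bytes,
# i.e. roughly a 96% reduction for these assumed dimensions.
emb = np.random.rand(5, 768).astype(np.float32)
proj = make_projection(768, 1024)
codes = hash_embeddings(emb, proj)
print(codes.shape)  # (5, 128)
```

Because the projection matrix is fixed by its random seed, it never needs to be stored alongside the data; the binary codes can be compared via Hamming distance, which approximates the cosine similarity of the original floating-point embeddings.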

Type of entry: Report
Published: 2023
Author(s): Hamster, Ulf A. ; Lee, Ji-Ung ; Geyken, Alexander ; Gurevych, Iryna
Type of record: Bibliography
Title: Rediscovering Hashed Random Projections for Efficient Quantization of Contextualized Sentence Embeddings
Language: English
Publication date: 16 May 2023
Publisher: arXiv
Series: Computation and Language
Collation: 14 pages
DOI: 10.48550/arXiv.2304.02481
URL / URN: https://arxiv.org/abs/2304.02481

Free keywords: UKP_p_EVIDENCE
Additional information:

2nd version

Department(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
TU projects: DFG|GU798/27-1|EVIDENCE: Computer-u
Date deposited: 12 Jun 2023 12:37
Last modified: 19 Dec 2024 11:35
PPN: 510471323