A Survey of Confidence Estimation and Calibration in Large Language Models

Geng, Jiahui ; Cai, Fengyu ; Wang, Yuxia ; Koeppl, Heinz ; Nakov, Preslav ; Gurevych, Iryna (2024)
A Survey of Confidence Estimation and Calibration in Large Language Models.
2024 Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico (17-21 June 2024)
Conference publication, Bibliography

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks in various domains. Despite their impressive performance, they can be unreliable due to factual errors in their generations. Assessing their confidence and calibrating them across different tasks can help mitigate risks and enable LLMs to produce better generations. There has been a lot of recent research aiming to address this, but there has been no comprehensive overview to organize it and to outline the main lessons learned. The present survey aims to bridge this gap. In particular, we outline the challenges and we summarize recent technical advancements for LLM confidence estimation and calibration. We further discuss their applications and suggest promising directions for future work.
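
The abstract's central notion, calibration, refers to how well a model's stated confidence matches its empirical accuracy. As a minimal illustrative sketch (not part of this record or the paper itself), the commonly used Expected Calibration Error (ECE) over binned confidence scores can be computed as follows; the confidence values and correctness labels in the example are hypothetical.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: sample-weighted average gap between mean confidence and
    # empirical accuracy within each confidence bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by the bin's share of samples
    return ece

# Hypothetical per-answer confidences and correctness labels (1 = correct):
print(expected_calibration_error([0.95, 0.8, 0.6, 0.9], [1, 1, 0, 0]))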

Type of entry: Conference publication
Published: 2024
Author(s): Geng, Jiahui ; Cai, Fengyu ; Wang, Yuxia ; Koeppl, Heinz ; Nakov, Preslav ; Gurevych, Iryna
Type of record: Bibliography
Title: A Survey of Confidence Estimation and Calibration in Large Language Models
Language: English
Year of publication: June 2024
Place: Mexico City, Mexico
Publisher: Association for Computational Linguistics
Book title: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Event title: 2024 Conference of the North American Chapter of the Association for Computational Linguistics
Event location: Mexico City, Mexico
Event dates: 17-21 June 2024
URL / URN: https://aclanthology.org/2024.naacl-long.366/
Free keywords: UKP_p_crisp_senpai, UKP_p_seditrah_QABioLit
Department(s)/Field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 24 Jun 2024 12:19
Last modified: 05 Aug 2024 08:34
PPN: 520325605