Gao, Yang; Zhao, Wei; Eger, Steffen (2020)
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization.
ACL'20: 58th Annual Meeting of the Association for Computational Linguistics. Virtual conference (05.07.2020-10.07.2020)
doi: 10.18653/v1/2020.acl-main.124
Conference publication, Bibliography
Abstract
We study unsupervised multi-document summarization evaluation metrics, which require neither human-written reference summaries nor human annotations (e.g. preferences, ratings, etc.). We propose SUPERT, which rates the quality of a summary by measuring its semantic similarity with a pseudo reference summary, i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques. Compared to the state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with human ratings by 18-39%. Furthermore, we use SUPERT as rewards to guide a neural-based reinforcement learning summarizer, yielding favorable performance compared to the state-of-the-art unsupervised summarizers. All source code is available at https://github.com/yg211/acl20-ref-free-eval.
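As a rough illustration of the idea described in the abstract (not the authors' implementation, which is available in the linked repository), the sketch below scores a summary against a pseudo reference built from salient source sentences and combines the best-match similarities in both directions. Sentence-level embeddings here stand in for the paper's contextualized token embeddings and soft token alignment; the sentence-transformers package, the model name, the naive sentence splitter, and the lead-sentence salience heuristic are all simplifying assumptions.

```python
# Minimal sketch of a SUPERT-style reference-free score (illustrative only;
# the official code is at https://github.com/yg211/acl20-ref-free-eval).
# Assumptions: the sentence-transformers package, the "all-MiniLM-L6-v2" model,
# a naive sentence splitter, and leading sentences as the salience heuristic.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def split_sentences(text):
    """Very rough sentence splitter (placeholder for a proper tokenizer)."""
    return [s.strip() for s in text.split(".") if s.strip()]


def pseudo_reference(documents, n_sentences=10):
    """Build a pseudo reference from salient sentences of the source documents
    (approximated here by each document's leading sentences)."""
    sentences = []
    for doc in documents:
        sentences.extend(split_sentences(doc)[:n_sentences])
    return sentences


def supert_like_score(summary, documents):
    """Rate a summary by soft-aligning its sentence embeddings against the
    pseudo-reference embeddings (cosine similarity, best-match alignment)."""
    ref_emb = model.encode(pseudo_reference(documents), normalize_embeddings=True)
    sum_emb = model.encode(split_sentences(summary), normalize_embeddings=True)
    sim = sum_emb @ ref_emb.T                 # pairwise cosine similarities
    recall = sim.max(axis=0).mean()           # how well the pseudo reference is covered
    precision = sim.max(axis=1).mean()        # how on-topic the summary sentences are
    return 2 * precision * recall / (precision + recall)  # F1-style combination
```

Such a score can then serve as a reward signal for a reinforcement-learning summarizer, as the abstract describes; the official repository exposes the exact metric used in the paper.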
| Type of entry: | Conference publication |
|---|---|
| Published: | 2020 |
| Author(s): | Gao, Yang; Zhao, Wei; Eger, Steffen |
| Entry type: | Bibliography |
| Title: | SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization |
| Language: | English |
| Year of publication: | 2020 |
| Place of publication: | Kerrville, TX 78028, USA |
| Publisher: | Association for Computational Linguistics |
| Journal, newspaper or series title: | Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics |
| Book title: | Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics |
| Event title: | ACL'20: 58th Annual Meeting of the Association for Computational Linguistics |
| Event venue: | Virtual conference |
| Event dates: | 05.07.2020-10.07.2020 |
| DOI: | 10.18653/v1/2020.acl-main.124 |
| Department(s)/field(s): | 20 Department of Computer Science; 20 Department of Computer Science > Knowledge Engineering; DFG Research Training Groups > Research Training Group 1994 "Adaptive Preparation of Information from Heterogeneous Sources" |
| Date deposited: | 02 Jun 2020 10:19 |
| Last modified: | 19 Dec 2024 09:24 |