Hosseini, Mohammad Javad; Gao, Yang; Baumgärtner, Tim; Fabrikant, Alex; Amplayo, Reinald Kim (2024)
Scalable and Domain-General Abstractive Proposition Segmentation.
29th Conference on Empirical Methods in Natural Language Processing. Miami, USA (12.11.2024 - 16.11.2024)
doi: 10.18653/v1/2024.findings-emnlp.517
Conference publication, Bibliography
Abstract
Segmenting text into fine-grained units of meaning is important to a wide range of NLP applications. The default approach of segmenting text into sentences is often insufficient, especially since sentences are usually complex enough to include multiple units of meaning that merit separate treatment in the downstream task. We focus on the task of abstractive proposition segmentation (APS): transforming text into simple, self-contained, well-formed sentences. Several recent works have demonstrated the utility of proposition segmentation with few-shot prompted LLMs for downstream tasks such as retrieval-augmented grounding and fact verification. However, this approach does not scale to large amounts of text and may not always extract all the facts from the input text. In this paper, we first introduce evaluation metrics for the task to measure several dimensions of quality. We then propose a scalable, yet accurate, proposition segmentation model. We model proposition segmentation as a supervised task by training LLMs on existing annotated datasets and show that training yields significantly improved results. We further show that by using the fine-tuned LLMs (Gemini Pro and Gemini Ultra) as teachers for annotating large amounts of multi-domain synthetic distillation data, we can train smaller student models (Gemma 1 2B and 7B) with results similar to the teacher LLMs. We then demonstrate that our technique leads to effective domain generalization by annotating data in two domains outside the original training data and evaluating on them. Finally, as a key contribution of the paper, we share an easy-to-use API for NLP practitioners.
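To make the APS task concrete, below is a minimal Python sketch of the few-shot prompting baseline the abstract contrasts against. The prompt wording, the worked example, and the `call_llm` stub are illustrative assumptions for this sketch, not the authors' actual prompts, fine-tuned models, or released API.

```python
# Minimal sketch of few-shot prompted abstractive proposition segmentation (APS).
# All prompt text and the call_llm stub are illustrative assumptions, not the
# paper's actual prompts or API.

# One worked example: a complex sentence and its simple, self-contained,
# well-formed propositions.
FEW_SHOT_EXAMPLES = [
    (
        "Marie Curie, who won two Nobel Prizes, was born in Warsaw.",
        [
            "Marie Curie won two Nobel Prizes.",
            "Marie Curie was born in Warsaw.",
        ],
    ),
]


def build_prompt(text: str) -> str:
    """Assemble a few-shot prompt asking for one proposition per line."""
    parts = [
        "Split the passage into simple, self-contained, well-formed "
        "sentences, one proposition per line."
    ]
    for passage, propositions in FEW_SHOT_EXAMPLES:
        parts.append(f"Passage: {passage}")
        parts.append("Propositions:\n" + "\n".join(propositions))
    parts.append(f"Passage: {text}")
    parts.append("Propositions:")
    return "\n\n".join(parts)


def call_llm(prompt: str) -> str:
    # Placeholder: swap in any LLM client here (hypothetical helper).
    # In the paper's recipe, a small fine-tuned student model plays this role.
    raise NotImplementedError("plug in an LLM client")


def segment(text: str) -> list[str]:
    """Return one proposition per non-empty line of the model's output."""
    output = call_llm(build_prompt(text))
    return [line.strip() for line in output.splitlines() if line.strip()]


if __name__ == "__main__":
    # Inspect the assembled prompt without needing a live model.
    print(build_prompt("The festival, held annually in Lyon, drew "
                       "40,000 visitors in 2019."))
```

The abstract's point is that this prompting setup is slow over large corpora and can miss facts; the proposed distillation recipe replaces the prompted LLM with a small fine-tuned student (Gemma 1 2B/7B) trained on teacher-annotated multi-domain data.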
| Type of entry: | Conference publication |
| --- | --- |
| Published: | 2024 |
| Author(s): | Hosseini, Mohammad Javad; Gao, Yang; Baumgärtner, Tim; Fabrikant, Alex; Amplayo, Reinald Kim |
| Type of record: | Bibliography |
| Title: | Scalable and Domain-General Abstractive Proposition Segmentation |
| Language: | English |
| Publication date: | November 2024 |
| Publisher: | ACL |
| Book title: | Findings of the Association for Computational Linguistics: EMNLP 2024 |
| Event title: | 29th Conference on Empirical Methods in Natural Language Processing |
| Event location: | Miami, USA |
| Event dates: | 12.11.2024 - 16.11.2024 |
| DOI: | 10.18653/v1/2024.findings-emnlp.517 |
| URL / URN: | https://aclanthology.org/2024.findings-emnlp.517/ |
| Department(s)/area(s): | 20 Department of Computer Science; 20 Department of Computer Science > Ubiquitous Knowledge Processing |
| Date deposited: | 17 Dec 2024 11:40 |
| Last modified: | 17 Dec 2024 11:40 |