Regularization of Distinct Strategies for Unsupervised Question Generation

Kang, Junmo ; Hong, Giwon ; Puerto San Roman, Haritz ; Myaeng, Sung-Hyon (2020)
Regularization of Distinct Strategies for Unsupervised Question Generation.
2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020). Virtual conference (16.11.2020-20.11.2020)
DOI: 10.18653/v1/2020.findings-emnlp.293
Conference publication, Bibliography

Abstract

Unsupervised question answering (UQA) has been proposed to avoid the high cost of creating high-quality datasets for QA. One approach to UQA is to train a QA model with automatically generated questions. However, the generated questions are either too similar to a word sequence in the context or drift too far from the semantics of the context, making it difficult to train a robust QA model. We propose a novel regularization method based on a teacher-student architecture that avoids bias toward a particular question generation strategy and modulates the process of generating individual words of a question. Our experiments demonstrate that we have achieved the goal of generating higher-quality questions for UQA across diverse QA datasets and tasks. We also show that this method can be useful for creating a QA model with few-shot learning.
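
The abstract describes the method only at a high level. As a rough illustration of what a teacher-student regularization over several question-generation strategies can look like, the Python sketch below combines the token-level distributions of multiple teacher generators (one per strategy) into a mixture and penalizes the student generator's divergence from it. All function names, the uniform mixture weighting, and the KL formulation are assumptions made for illustration, not the paper's actual loss.

import torch
import torch.nn.functional as F

def teacher_student_regularization(student_logits, teacher_logits_list, weights=None):
    """Hypothetical sketch of a strategy-agnostic regularizer (not the paper's formulation).

    student_logits:      (seq_len, vocab_size) logits from the student question generator
    teacher_logits_list: list of (seq_len, vocab_size) logits, one per generation strategy
    weights:             optional per-teacher mixture weights (defaults to uniform)
    """
    if weights is None:
        weights = [1.0 / len(teacher_logits_list)] * len(teacher_logits_list)

    # Mix the teachers' per-token distributions so that no single
    # question-generation strategy dominates the training target.
    teacher_probs = sum(
        w * F.softmax(t, dim=-1) for w, t in zip(weights, teacher_logits_list)
    )

    # Token-level divergence between the teacher mixture and the student;
    # with probabilities as target, kl_div computes KL(teacher_mixture || student).
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")


# Toy usage: a 5-token question over a 100-word vocabulary with two strategies.
if __name__ == "__main__":
    student = torch.randn(5, 100)
    teachers = [torch.randn(5, 100), torch.randn(5, 100)]
    print(teacher_student_regularization(student, teachers).item())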

Type of entry: Conference publication
Published: 2020
Author(s): Kang, Junmo ; Hong, Giwon ; Puerto San Roman, Haritz ; Myaeng, Sung-Hyon
Type of record: Bibliography
Title: Regularization of Distinct Strategies for Unsupervised Question Generation
Language: English
Date of publication: 21 November 2020
Publisher: ACL
Book title: Findings of the Association for Computational Linguistics: EMNLP 2020
Event title: 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
Event location: Virtual conference
Event dates: 16.11.2020-20.11.2020
DOI: 10.18653/v1/2020.findings-emnlp.293

Department(s)/Field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 06 Jul 2023 08:18
Last modified: 07 Jul 2023 08:59
PPN: 509437834