LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test

Bugert, Michael and Puzikov, Yevgeniy and Rücklé, Andreas and Eckle-Kohler, Judith and Martin, Teresa and Martínez Cámara, Eugenio and Sorokin, Daniil and Peyrard, Maxime and Gurevych, Iryna:
LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test.
[Online-Edition: http://aclweb.org/anthology/W17-0908]
In: The 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, 03.04.2017--04.04.2017, Valencia, Spain. Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, Association for Computational Linguistics.
[Conference or Workshop Item], (2017)

Abstract

The Story Cloze test is a recent effort in providing a common test scenario for text understanding systems. As part of the LSDSem 2017 shared task, we present a system based on a deep learning architecture combined with a rich set of manually-crafted linguistic features. The system outperforms all known baselines for the task, suggesting that the chosen approach is promising. We additionally present two methods for generating further training data based on stories from the ROCStories corpus. Our system and generated data are publicly available on GitHub.

Item Type: Conference or Workshop Item
Published: 2017
Creators: Bugert, Michael and Puzikov, Yevgeniy and Rücklé, Andreas and Eckle-Kohler, Judith and Martin, Teresa and Martínez Cámara, Eugenio and Sorokin, Daniil and Peyrard, Maxime and Gurevych, Iryna
Title: LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test
Language: English
Title of Book: Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics
Publisher: Association for Computational Linguistics
Uncontrolled Keywords: UKP_p_DIP;UKP_p_QAEduInf;UKP_reviewed;UKP_a_DLinNLP;UKP_a_LSRA;AIPHES
Divisions: Department of Computer Science
Department of Computer Science > Ubiquitous Knowledge Processing
DFG-Graduiertenkollegs
DFG-Graduiertenkollegs > Research Training Group 1994 Adaptive Preparation of Information from Heterogeneous Sources
Event Title: The 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics
Event Location: Valencia, Spain
Event Dates: 03.04.2017--04.04.2017
Date Deposited: 21 Feb 2017 20:05
Official URL: http://aclweb.org/anthology/W17-0908
Identification Number: TUD-CS-2017-0040
Projects: AIPHES, UKP_p_DIP, UKP_p_QAEduInf
Funders: German Research Foundation, grant No. GU 798/17-1; German Research Foundation, Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES), GRK 1994/1; QA-EduInf project, grants No. GU 798/18-1 and No. RI 803/12-1