Mihaylov, Todor ; Frank, Anette (2017)
Story Cloze Ending Selection Baselines and Data Examination.
doi: 10.18653/v1/W17-0913
Conference publication, Bibliography
Abstract
This paper describes two supervised baseline systems for the Story Cloze Test Shared Task (Mostafazadeh et al., 2016). We first build a classifier using features based on word embeddings and semantic similarity computation. We further implement a neural LSTM system with different encoding strategies that try to model the relation between the story and the provided endings. Our experiments show that a model using representation features based on average word-embedding vectors over the given story words and the candidate ending sentence words, combined with similarity features between the story and candidate ending representations, performed better than the neural models. Our best model achieves an accuracy of 72.42%, ranking 3rd in the official evaluation.
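The feature-based baseline described in the abstract (averaged word-embedding representations of the story and each candidate ending, plus similarity features between them, fed to a supervised classifier) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' code: the embedding lookup `emb`, the tokenization, and the choice of logistic regression as the classifier are assumptions not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of the feature scheme the
# abstract describes: average word-embedding vectors for the story and each
# candidate ending, plus a story-to-ending similarity feature, fed to a
# simple supervised classifier. `emb` is assumed to be a dict-like mapping
# word -> embedding vector (e.g. pre-trained word vectors).
import numpy as np
from sklearn.linear_model import LogisticRegression

def avg_embedding(tokens, emb, dim=300):
    """Average the embedding vectors of all tokens that have an entry in `emb`."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    """Cosine similarity between two vectors; 0.0 if either is all zeros."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def features(story_tokens, ending_tokens, emb):
    """Concatenate story and ending representations with their similarity score."""
    s = avg_embedding(story_tokens, emb)
    e = avg_embedding(ending_tokens, emb)
    return np.concatenate([s, e, [cosine(s, e)]])

# Hypothetical usage: one feature vector per (story, candidate ending) pair,
# with label y = 1 for the correct ending and y = 0 otherwise.
# X = np.stack([features(story, ending, emb) for story, ending in pairs])
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```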
Item type: | Conference publication |
---|---|
Published: | 2017 |
Author(s): | Mihaylov, Todor; Frank, Anette |
Type of entry: | Bibliography |
Title: | Story Cloze Ending Selection Baselines and Data Examination |
Language: | English |
Publication date: | April 2017 |
Book title: | Proceedings of the Linking Models of Lexical, Sentential and Discourse-level Semantics – Shared Task |
DOI: | 10.18653/v1/W17-0913 |
URL / URN: | http://aclweb.org/anthology/W17-0913 |
Abstract: | This paper describes two supervised baseline systems for the Story Cloze Test Shared Task (Mostafazadeh et al., 2016). We first build a classifier using features based on word embeddings and semantic similarity computation. We further implement a neural LSTM system with different encoding strategies that try to model the relation between the story and the provided endings. Our experiments show that a model using representation features based on average word-embedding vectors over the given story words and the candidate ending sentence words, combined with similarity features between the story and candidate ending representations, performed better than the neural models. Our best model achieves an accuracy of 72.42%, ranking 3rd in the official evaluation. |
Free keywords: | AIPHES_area_a2 |
ID number: | TUD-CS-2017-0062 |
Division(s)/field(s): | DFG Research Training Groups > Graduiertenkolleg 1994 Adaptive Informationsaufbereitung aus heterogenen Quellen (AIPHES) |
Date deposited: | 09 Mar 2017 20:28 |
Last modified: | 28 Sep 2018 15:12 |