
Story Cloze Ending Selection Baselines and Data Examination

Mihaylov, Todor and Frank, Anette (2017):
Story Cloze Ending Selection Baselines and Data Examination.
In: Proceedings of the Linking Models of Lexical, Sentential and Discourse-level Semantics – Shared Task, DOI: 10.18653/v1/W17-0913,
[Online edition: http://aclweb.org/anthology/W17-0913],
[Conference or Workshop Item]

Abstract

This paper describes two supervised baseline systems for the Story Cloze Test Shared Task (Mostafazadeh et al., 2016). We first build a classifier using features based on word embeddings and semantic similarity computation. We further implement a neural LSTM system with different encoding strategies that attempt to model the relation between the story and the provided endings. Our experiments show that a model using representation features based on average word embedding vectors over the words of the given story and the candidate ending sentences, combined with similarity features between the story and candidate ending representations, performed better than the neural models. Our best model achieves an accuracy of 72.42%, ranking 3rd in the official evaluation.
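
The feature-based model described in the abstract lends itself to a short illustration. The following is a minimal sketch, not the authors' actual pipeline: it assumes a pretrained word-embedding lookup (`embeddings`), and helper names such as `embed_avg` and `featurize` are illustrative. It builds average-embedding representations for the story and each candidate ending, adds a cosine-similarity feature between the two, and scores endings with a plain logistic regression classifier.

```python
# Minimal sketch (not the authors' exact system): average word embeddings for
# the story and each candidate ending, plus a cosine-similarity feature,
# fed to a simple classifier. `embeddings` is assumed to be a pretrained
# word-vector lookup (e.g. loaded from word2vec or GloVe).
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed_avg(tokens, embeddings, dim=300):
    """Average the embedding vectors of all in-vocabulary tokens."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def featurize(story_tokens, ending_tokens, embeddings):
    """Concatenate story/ending average embeddings with their similarity."""
    s = embed_avg(story_tokens, embeddings)
    e = embed_avg(ending_tokens, embeddings)
    return np.concatenate([s, e, [cosine(s, e)]])

# Training: one feature vector per (story, candidate ending) pair,
# labeled 1 for the correct ending and 0 for the wrong one, e.g.:
#   X = np.stack([featurize(story, ending, embeddings) for story, ending in pairs])
#   clf = LogisticRegression(max_iter=1000).fit(X, y)
# At test time, choose the candidate ending with the higher predicted probability.
```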

Item Type: Conference or Workshop Item
Published: 2017
Creators: Mihaylov, Todor and Frank, Anette
Title: Story Cloze Ending Selection Baselines and Data Examination
Language: English
Title of Book: Proceedings of the Linking Models of Lexical, Sentential and Discourse-level Semantics – Shared Task
Uncontrolled Keywords: AIPHES_area_a2
Divisions: DFG-Graduiertenkollegs
DFG-Graduiertenkollegs > Research Training Group 1994 Adaptive Preparation of Information from Heterogeneous Sources
Date Deposited: 09 Mar 2017 20:28
DOI: 10.18653/v1/W17-0913
Official URL: http://aclweb.org/anthology/W17-0913
Identification Number: TUD-CS-2017-0062