Marasovic, Ana and Born, Leo and Opitz, Juri and Frank, Anette (2017):
A Mention-Ranking Model for Abstract Anaphora Resolution.
In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 221-232,
Copenhagen, Denmark, [Conference or Workshop Item]
Abstract
Resolving abstract anaphora is an important, but difficult task for text understanding. Yet, with recent advances in representation learning this task becomes a more tangible aim. A central property of abstract anaphora is that it establishes a relation between the anaphor embedded in the anaphoric sentence and its (typically non-nominal) antecedent. We propose a mention-ranking model that learns how abstract anaphors relate to their antecedents with an LSTM-Siamese Net. We overcome the lack of training data by generating artificial anaphoric sentence--antecedent pairs. Our model outperforms state-of-the-art results on shell noun resolution. We also report first benchmark results on an abstract anaphora subset of the ARRAU corpus. This corpus presents a greater challenge due to a mixture of nominal and pronominal anaphors and a greater range of confounders. We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors. Our model selects syntactically plausible candidates and -- if disregarding syntax -- discriminates candidates using deeper features.
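The abstract describes a Siamese-LSTM mention-ranking architecture that scores antecedent candidates against the anaphoric sentence. The snippet below is a minimal, illustrative sketch of such a scorer in PyTorch; the layer sizes, mean-pooling, the max-margin note, and all class and parameter names are assumptions chosen for illustration and do not reproduce the authors' implementation.

```python
# Illustrative sketch of a Siamese-LSTM mention-ranking scorer.
# All hyperparameters and names are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class SiameseMentionRanker(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One shared bidirectional LSTM encodes both the anaphoric sentence
        # and each antecedent candidate ("Siamese" = shared weights).
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Small feed-forward scorer over the joint representation.
        self.scorer = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def encode(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, 2*hidden_dim) via mean pooling
        emb = self.embed(token_ids)
        outputs, _ = self.encoder(emb)
        return outputs.mean(dim=1)

    def forward(self, anaphor_ids, candidate_ids):
        # Score an anaphoric sentence against an antecedent candidate.
        a = self.encode(anaphor_ids)
        c = self.encode(candidate_ids)
        return self.scorer(torch.cat([a, c], dim=-1)).squeeze(-1)

# Toy usage: rank two candidate antecedents for one anaphoric sentence.
model = SiameseMentionRanker(vocab_size=1000)
anaphor = torch.randint(0, 1000, (1, 12))
candidates = torch.randint(0, 1000, (2, 8))
scores = model(anaphor.expand(2, -1), candidates)
best = scores.argmax().item()  # index of the highest-scoring candidate

# A mention-ranking objective would typically apply a max-margin loss
# between the gold antecedent and the highest-scoring negative candidate.
```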
Item Type: | Conference or Workshop Item |
---|---|
Published: | 2017 |
Creators: | Marasovic, Ana and Born, Leo and Opitz, Juri and Frank, Anette |
Title: | A Mention-Ranking Model for Abstract Anaphora Resolution |
Language: | English |
Title of Book: | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP) |
Uncontrolled Keywords: | AIPHES_area_a3 |
Divisions: | DFG-Graduiertenkollegs > Research Training Group 1994 Adaptive Preparation of Information from Heterogeneous Sources |
Event Location: | Copenhagen, Denmark |
Date Deposited: | 03 Jul 2017 21:49 |
Official URL: | http://www.aclweb.org/anthology/D/D17/D17-1021.pdf |
Identification Number: | TUD-CS-2017-0149 |