TU Darmstadt / ULB / TUbiblio

Task-Oriented Intrinsic Evaluation of Semantic Textual Similarity

Reimers, Nils and Beyer, Philip and Gurevych, Iryna (2016):
Task-Oriented Intrinsic Evaluation of Semantic Textual Similarity.
In: Proceedings of the 26th International Conference on Computational Linguistics (COLING), Osaka, Japan. [Online-Edition: http://aclweb.org/anthology/C16-1009]
[Conference or Workshop Item]

Abstract

Semantic Textual Similarity (STS) is a foundational NLP task and can be used in a wide range of tasks. To determine the STS of two texts, hundreds of different STS systems exist; however, for an NLP system designer, it is hard to decide which system is the best one. To answer this question, an intrinsic evaluation of the STS systems is conducted by comparing the output of the system to human judgments on semantic similarity. The comparison is usually done using Pearson correlation. In this work, we show that relying on intrinsic evaluations with Pearson correlation can be misleading. In three common STS-based tasks we observed that Pearson correlation was especially ill-suited to detect the best STS system for the task and that other evaluation measures were much better suited. We define how the validity of an intrinsic evaluation can be assessed and compare different intrinsic evaluation methods. Understanding the properties of the targeted task is crucial, and we propose a framework for conducting the intrinsic evaluation that takes the properties of the targeted task into account.
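The intrinsic evaluation the abstract describes can be sketched in a few lines: score a set of sentence pairs with each STS system, then measure the Pearson correlation between each system's scores and the human gold judgments. This is a minimal illustration, not code from the paper; the scores and system names below are invented for the example.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical similarity scores: human gold judgments and the outputs
# of two fictional STS systems on the same five sentence pairs.
gold     = [0.10, 0.40, 0.50, 0.80, 0.90]
system_a = [0.20, 0.30, 0.60, 0.70, 0.95]
system_b = [0.15, 0.45, 0.40, 0.85, 0.80]

# Conventional intrinsic evaluation: the system whose scores correlate
# best with the gold judgments is ranked highest. The paper argues this
# ranking can be misleading for the downstream task.
print(pearson(gold, system_a))
print(pearson(gold, system_b))
```

The paper's point is precisely that this single number can pick the wrong system for a given downstream task, so other evaluation measures should be considered alongside it.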

Item Type: Conference or Workshop Item
Published: 2016
Creators: Reimers, Nils and Beyer, Philip and Gurevych, Iryna
Title: Task-Oriented Intrinsic Evaluation of Semantic Textual Similarity
Language: English

Title of Book: Proceedings of the 26th International Conference on Computational Linguistics (COLING)
Uncontrolled Keywords: UKP_reviewed
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
DFG-Graduiertenkollegs
DFG-Graduiertenkollegs > Research Training Group 1994 Adaptive Preparation of Information from Heterogeneous Sources
Event Location: Osaka, Japan
Date Deposited: 31 Dec 2016 14:29
Official URL: http://aclweb.org/anthology/C16-1009
Identification Number: TUD-CS-2016-1451