TUbiblio (TU Darmstadt / ULB)

Evaluating Coreference Resolvers on Community-based Question Answering: From Rule-based to State of the Art

Chai, Haixia ; Moosavi, Nafise Sadat ; Gurevych, Iryna ; Strube, Michael (2022)
Evaluating Coreference Resolvers on Community-based Question Answering: From Rule-based to State of the Art.
29th International Conference on Computational Linguistics (COLING 2022). Gyeongju, Republic of Korea (12.10.2022-17.10.2022)
Conference publication, Bibliography

Abstract

Coreference resolution is a key step in natural language understanding. Developments in coreference resolution have mainly focused on improving performance on standard datasets annotated for coreference resolution. However, coreference resolution is an intermediate step in text understanding, and it is not clear how these improvements translate into downstream task performance. In this paper, we perform a thorough investigation of the impact of coreference resolvers in multiple settings of the community-based question answering task, i.e., answer selection with long answers. Our settings cover multiple text domains and encompass several answer selection methods. We first perform an extrinsic evaluation of coreference resolvers on answer selection, using coreference relations to decontextualize individual sentences of candidate answers, and then annotate a subset of answers with coreference information for an intrinsic evaluation. The results of our extrinsic evaluation show that, while there is a significant difference between the performance of the rule-based system and the state-of-the-art neural model on coreference resolution datasets, we do not observe a considerable difference in their impact on downstream models. Our intrinsic evaluation shows that (i) resolving coreference relations in less-formal text genres is more difficult even for trained annotators, and (ii) the values of linguistic-agnostic coreference evaluation metrics do not correlate with the impact on downstream data.
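The decontextualization step mentioned in the abstract can be illustrated with a minimal sketch: given coreference clusters over a candidate answer, pronominal mentions are replaced by their cluster's representative mention so that individual sentences can stand alone. The cluster format, the pronoun list, and the `decontextualize` helper below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of sentence decontextualization via coreference,
# assuming clusters are lists of (start, end) token spans and the first
# span in each cluster is its representative mention.

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them",
            "his", "its", "their", "hers", "theirs"}

def decontextualize(tokens, clusters):
    """Replace single-token pronominal mentions with the text of their
    cluster's representative (first) mention."""
    replacements = {}
    for cluster in clusters:
        rep_start, rep_end = cluster[0]
        rep_text = " ".join(tokens[rep_start:rep_end + 1])
        for start, end in cluster[1:]:
            # Only substitute unambiguous single-token pronouns.
            if start == end and tokens[start].lower() in PRONOUNS:
                replacements[start] = rep_text
    return " ".join(replacements.get(i, tok) for i, tok in enumerate(tokens))

tokens = "Marie Curie won two Nobel Prizes . She was born in Warsaw .".split()
clusters = [[(0, 1), (7, 7)]]  # {"Marie Curie", "She"}
print(decontextualize(tokens, clusters))
# Marie Curie won two Nobel Prizes . Marie Curie was born in Warsaw .
```

After substitution, the second sentence ("Marie Curie was born in Warsaw.") is interpretable in isolation, which is what allows sentence-level answer selection models to use it without the surrounding context.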

Item type: Conference publication
Published: 2022
Author(s): Chai, Haixia ; Moosavi, Nafise Sadat ; Gurevych, Iryna ; Strube, Michael
Entry type: Bibliography
Title: Evaluating Coreference Resolvers on Community-based Question Answering: From Rule-based to State of the Art
Language: English
Publication date: 19 October 2022
Publisher: ACL
Book title: Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
Series: International Conference on Computational Linguistics: Proceedings of the Conference and Workshops
Series volume: 29
Event title: 29th International Conference on Computational Linguistics (COLING 2022)
Event location: Gyeongju, Republic of Korea
Event dates: 12.10.2022-17.10.2022
URL / URN: https://aclanthology.org/2022.crac-1.7
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 20 Oct 2022 07:20
Last modified: 09 Feb 2023 10:31
PPN: 504726161