
Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation

Glockner, Max ; Hou, Yufang ; Gurevych, Iryna (2022)
Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation.
2022 Conference on Empirical Methods in Natural Language Processing. Abu Dhabi, United Arab Emirates (7-11 December 2022)
Conference or Workshop Item, Bibliography

Abstract

Misinformation emerges in times of uncertainty, when credible information is limited. This is challenging for NLP-based fact-checking, which relies on counter-evidence that may not yet be available. Despite increasing interest in automatic fact-checking, it is still unclear whether automated approaches can realistically refute harmful real-world misinformation. Here, we compare and contrast NLP fact-checking with how professional fact-checkers combat misinformation in the absence of counter-evidence. Our analysis shows that, by design, for the majority of claims existing NLP task definitions for fact-checking cannot refute misinformation the way professional fact-checkers do. We then define two requirements that the evidence in datasets must fulfill for realistic fact-checking: it must be (1) sufficient to refute the claim and (2) not leaked from existing fact-checking articles. We survey existing fact-checking datasets and find that none of them satisfies both criteria. Finally, we perform experiments demonstrating that models trained on a large-scale fact-checking dataset rely on leaked evidence, which makes them unsuitable for real-world scenarios. Taken together, we show that current NLP fact-checking cannot realistically combat real-world misinformation because it depends on unrealistic assumptions about counter-evidence in the data.

Item Type: Conference or Workshop Item
Published: 2022
Creators: Glockner, Max ; Hou, Yufang ; Gurevych, Iryna
Type of entry: Bibliography
Title: Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation
Language: English
Date: December 2022
Publisher: Association for Computational Linguistics (ACL)
Book Title: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Event Title: 2022 Conference on Empirical Methods in Natural Language Processing
Event Location: Abu Dhabi, United Arab Emirates
Event Dates: 7-11 December 2022
URL / URN: https://aclanthology.org/2022.emnlp-main.397

Uncontrolled Keywords: UKP_p_texprax, UKP_p_seditrah_factcheck
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date Deposited: 27 Feb 2023 15:22
Last Modified: 13 Jun 2023 16:30
PPN: 507922670