
“Image, Tell me your story!” Predicting the original meta-context of visual misinformation

Tonglet, Jonathan ; Moens, Marie-Francine ; Gurevych, Iryna (2024)
“Image, Tell me your story!” Predicting the original meta-context of visual misinformation.
29th Conference on Empirical Methods in Natural Language Processing. Miami, USA (12.11.2024 - 16.11.2024)
Conference publication, Bibliography

Abstract

To assist human fact-checkers, researchers have developed automated approaches for visual misinformation detection. These methods assign veracity scores by identifying inconsistencies between the image and its caption, or by detecting forgeries in the image. However, they neglect a crucial point of the human fact-checking process: identifying the original meta-context of the image. By explaining what is actually true about the image, fact-checkers can better detect misinformation, focus their efforts on check-worthy visual content, engage in counter-messaging before misinformation spreads widely, and make their explanation more convincing. Here, we fill this gap by introducing the task of automated image contextualization. We create 5Pils, a dataset of 1,676 fact-checked images with question-answer pairs about their original meta-context. Annotations are based on the 5 Pillars fact-checking framework. We implement a first baseline that grounds the image in its original meta-context using the content of the image and textual evidence retrieved from the open web. Our experiments show promising results while highlighting several open challenges in retrieval and reasoning.
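To make the annotation target concrete, the sketch below shows what a 5 Pillars-style question-answer record for a single fact-checked image could look like. This is not the authors' code or the actual 5Pils schema; all field names, pillar labels, and example values are illustrative assumptions.

```python
# Illustrative sketch only -- field names, pillar labels, and example values
# are assumptions for illustration, not the published 5Pils annotation schema.
from dataclasses import dataclass, field


@dataclass
class ImageMetaContext:
    """Question-answer pairs grounding one fact-checked image in its
    original meta-context, organized by the 5 Pillars framework."""
    image_id: str
    qa_pairs: dict[str, str] = field(default_factory=dict)  # pillar -> answer


record = ImageMetaContext(
    image_id="example_0001",  # hypothetical identifier
    qa_pairs={
        "provenance": "The photo is authentic; no manipulation detected.",
        "source": "Taken by a news-agency photographer.",
        "date": "March 2019.",
        "location": "A coastal city affected by flooding.",
        "motivation": "Documenting storm damage for news reporting.",
    },
)

# An image contextualization system would be scored on how well it recovers
# answers like these from the image content and retrieved web evidence.
print(record.qa_pairs["location"])
```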

Item type: Conference publication
Published: 2024
Author(s): Tonglet, Jonathan ; Moens, Marie-Francine ; Gurevych, Iryna
Type of entry: Bibliography
Title: “Image, Tell me your story!” Predicting the original meta-context of visual misinformation
Language: English
Publication date: 17 November 2024
Place of publication: Miami, Florida
Publisher: Association for Computational Linguistics
Book title: EMNLP 2024: The 2024 Conference on Empirical Methods in Natural Language Processing: Proceedings of the Conference
Event title: 29th Conference on Empirical Methods in Natural Language Processing
Event location: Miami, USA
Event dates: 12.11.2024 - 16.11.2024
URL / URN: https://aclanthology.org/2024.emnlp-main.448/

Uncontrolled keywords: UKP_p_seditrah_factcheck, UKP_p_emergencity, emergenCITY
Division(s)/Field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 28 Nov 2024 09:49
Last modified: 28 Nov 2024 09:49