Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods

Jukić, Josip ; Tutek, Martin ; Snajder, Jan (2023)
Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods.
61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada (09.-14.07.2023)
Conference publication, Bibliography

Abstract

A popular approach to unveiling the black box of neural NLP models is to leverage saliency methods, which assign scalar importance scores to each input component. A common practice for evaluating whether an interpretability method is faithful has been to use evaluation-by-agreement – if multiple methods agree on an explanation, its credibility increases. However, recent work has found that saliency methods exhibit weak rank correlations even when applied to the same model instance and advocated for alternative diagnostic methods. In our work, we demonstrate that rank correlation is not a good fit for evaluating agreement and argue that Pearson-r is a better-suited alternative. We further show that regularization techniques that increase faithfulness of attention explanations also increase agreement between saliency methods. By connecting our findings to instance categories based on training dynamics, we show that the agreement of saliency method explanations is very low for easy-to-learn instances. Finally, we connect the improvement in agreement across instance categories to local representation space statistics of instances, paving the way for work on analyzing which intrinsic model properties improve their predisposition to interpretability methods.
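The abstract's core methodological point, that rank correlation and Pearson-r can disagree sharply on the same pair of saliency score vectors, can be illustrated with a minimal sketch. The saliency vectors below are hypothetical toy values (not from the paper): two methods that assign nearly identical importance magnitudes but swap the ranks of near-tied tokens, yielding a low rank correlation despite near-perfect linear agreement.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors
    (double argsort gives ranks; ties are not handled in this toy sketch)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

# Hypothetical saliency scores from two methods over the same 6-token input.
# Magnitudes are almost identical, but near-tied tokens swap ranks.
method_a = np.array([0.02, 0.03, 0.90, 0.88, 0.01, 0.04])
method_b = np.array([0.03, 0.02, 0.88, 0.90, 0.04, 0.01])

print(f"Spearman rho = {spearman(method_a, method_b):.2f}")  # low despite agreement
print(f"Pearson  r   = {pearson(method_a, method_b):.2f}")   # close to 1
```

Here rank correlation penalizes order swaps among tokens whose scores differ only marginally, which is the kind of mismatch the abstract argues makes it a poor fit for measuring agreement.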

Entry type: Conference publication
Published: 2023
Author(s): Jukić, Josip ; Tutek, Martin ; Snajder, Jan
Record type: Bibliography
Title: Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods
Language: English
Publication date: 10 July 2023
Publisher: ACL
Book title: Findings of the Association for Computational Linguistics: ACL 2023
Event title: 61st Annual Meeting of the Association for Computational Linguistics
Event location: Toronto, Canada
Event dates: 9–14 July 2023
URL / URN: https://aclanthology.org/2023.findings-acl.582/
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Deposited: 25 Jul 2023 07:44
Last modified: 26 Jul 2023 09:37
PPN: 509926983