
Towards Understanding and Arguing with Classifiers: Recent Progress

Shao, Xiaoting ; Rienstra, Tjitze ; Thimm, Matthias ; Kersting, Kristian (2024)
Towards Understanding and Arguing with Classifiers: Recent Progress.
In: Datenbank-Spektrum : Zeitschrift für Datenbanktechnologien und Information Retrieval, 2020, 20 (2)
doi: 10.26083/tuprints-00024012
Article, secondary publication, publisher's version

Warning: A newer version of this entry is available.

Abstract

Machine learning and argumentation can potentially greatly benefit from each other. Combining deep classifiers with knowledge expressed in the form of rules and constraints allows one to leverage different forms of abstractions within argumentation mining. Argumentation for machine learning can yield argumentation-based learning methods where the machine and the user argue about the learned model with the common goal of providing results of maximum utility to the user. Unfortunately, both directions are currently rather challenging. For instance, combining deep neural models with logic typically only yields deterministic results, while combining probabilistic models with logic often results in intractable inference. Therefore, we review a novel deep but tractable model for conditional probability distributions that can harness the expressive power of universal function approximators such as neural networks while still maintaining a wide range of tractable inference routines. While this new model has shown appealing performance in classification tasks, humans cannot easily understand the reasons for its decisions. Therefore, we also review our recent efforts on how to "argue" with deep models. On synthetic and real data we illustrate how "arguing" with a deep model about its explanations can actually help to revise the model, if it is right for the wrong reasons.
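The "tractable inference" the abstract refers to can be illustrated with a toy sum-product circuit. This is a minimal sketch under our own assumptions, not the conditional model reviewed in the paper: a hand-wired circuit over two binary variables in which marginal queries cost one feed-forward pass, because each leaf simply evaluates to 1 when its variable is marginalized out.

```python
def bernoulli(p, value):
    """Leaf distribution P(X = 1) = p; value=None marginalizes X out,
    so the leaf integrates to 1."""
    if value is None:
        return 1.0
    return p if value == 1 else 1.0 - p

def circuit(x1, x2):
    """Tiny sum-of-products circuit (illustrative, hand-chosen weights):
    P(X1, X2) = 0.6 * f1(X1) * f2(X2) + 0.4 * g1(X1) * g2(X2)."""
    return (0.6 * bernoulli(0.9, x1) * bernoulli(0.2, x2)
            + 0.4 * bernoulli(0.3, x1) * bernoulli(0.7, x2))

# Joint query P(X1=1, X2=0): one bottom-up evaluation.
p_joint = circuit(1, 0)

# Marginal P(X1=1): pass None for X2 -- still a single evaluation,
# which is the tractability property the abstract alludes to.
p_marginal = circuit(1, None)
```

In a general-purpose probabilistic model, the marginal would require summing over all assignments of the marginalized variables; in a circuit with this structure, the same pass that answers a joint query answers the marginal.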

Type of entry: Article
Published: 2024
Author(s): Shao, Xiaoting ; Rienstra, Tjitze ; Thimm, Matthias ; Kersting, Kristian
Kind of entry: Secondary publication
Title: Towards Understanding and Arguing with Classifiers: Recent Progress
Language: English
Date of publication: 26 April 2024
Place of publication: Darmstadt
Date of first publication: July 2020
Place of first publication: Berlin ; Heidelberg
Publisher: Springer
Journal or series title: Datenbank-Spektrum : Zeitschrift für Datenbanktechnologien und Information Retrieval
Volume: 20
Issue: 2
DOI: 10.26083/tuprints-00024012
URL / URN: https://tuprints.ulb.tu-darmstadt.de/24012
Origin: Secondary publication via DeepGreen
Free keywords: Argumentation-based ML, Explainable AI, Interactive ML, Influence Function, Deep Density Estimation, Probabilistic Circuits
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-240129
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Artificial Intelligence and Machine Learning
Central institutions
Central institutions > Centre for Cognitive Science (CCS)
Date deposited: 26 Apr 2024 12:38
Last modified: 30 Apr 2024 08:47