From human explanations to explainable AI: Insights from constrained optimization

Ibs, Inga ; Ott, Claire ; Jäkel, Frank ; Rothkopf, Constantin A. (2024)
From human explanations to explainable AI: Insights from constrained optimization.
In: Cognitive Systems Research, 88
doi: 10.1016/j.cogsys.2024.101297
Article, Bibliography

Abstract

Many complex decision-making scenarios encountered in the real world, including energy systems and infrastructure planning, can be formulated as constrained optimization problems. Solutions for these problems are often obtained using white-box solvers based on linear program representations. Even though these algorithms are well understood and the optimality of the solution is guaranteed, explanations for the solutions are still necessary to build trust and ensure the implementation of policies. Solution algorithms represent the problem in a high-dimensional abstract space, which does not translate well to intuitive explanations for laypeople. Here, we report three studies in which we pose constrained optimization problems to participants in the form of a computer game. In the game, called Furniture Factory, participants manage a company that produces furniture. In two qualitative studies, we first elicit representations and heuristics with concurrent explanations and validate their use in post-hoc explanations. We analyze the complexity of the explanations given by participants to gain a deeper understanding of how complex cognitively adequate explanations should be. Based on insights from the analysis of the two qualitative studies, we formalize strategies that, in combination, can act as descriptors for participants’ behavior and for optimal solutions. We match the strategies to decisions in a large behavioral dataset (>150 participants) gathered in a third study and compare the complexity of strategy combinations to the complexity featured in participants’ explanations. Based on the analyses from these three studies, we discuss how these insights can inform the automatic generation of cognitively adequate explanations in future AI systems.
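To make the problem class concrete: the Furniture Factory scenario described above is, at its core, a small production-planning linear program of the kind handled by the white-box solvers the abstract refers to. The sketch below shows such a toy LP solved with SciPy's linprog; the products, profits, and resource limits are hypothetical illustrations, not the parameters used in the studies.

    from scipy.optimize import linprog

    # Toy production-planning LP in the spirit of the Furniture Factory game.
    # Maximize   40*desks + 30*chairs          (profit)
    # subject to  2*desks +  1*chairs <= 100   (units of wood available)
    #             1*desks +  2*chairs <=  80   (hours of labor available)
    #             desks, chairs >= 0
    c = [-40.0, -30.0]            # linprog minimizes, so profits are negated
    A_ub = [[2.0, 1.0],           # wood needed per desk, per chair
            [1.0, 2.0]]           # labor needed per desk, per chair
    b_ub = [100.0, 80.0]          # available wood, available labor

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")

    print("optimal plan (desks, chairs):", res.x)  # [40. 20.]
    print("maximum profit:", -res.fun)             # 2200.0

The solver guarantees that the returned plan is optimal, but it offers no human-readable rationale for it; that gap between the solver's abstract representation and an intuitive explanation is what the three studies investigate.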

Item type: Article
Published: 2024
Author(s): Ibs, Inga ; Ott, Claire ; Jäkel, Frank ; Rothkopf, Constantin A.
Entry type: Bibliography
Title: From human explanations to explainable AI: Insights from constrained optimization
Language: English
Year of publication: 2024
Journal or series title: Cognitive Systems Research
Volume: 88
DOI: 10.1016/j.cogsys.2024.101297
Uncontrolled keywords: Explanations, Explainable AI, Constrained optimization, Linear programming, Complex problem solving, Microworlds
Department(s)/Field(s): 03 Department of Human Sciences
Research Fields
Research Fields > Information and Intelligence
Research Fields > Information and Intelligence > Cognitive Science
03 Department of Human Sciences > Institute of Psychology
03 Department of Human Sciences > Institute of Psychology > Models of Higher Cognition
03 Department of Human Sciences > Institute of Psychology > Psychology of Information Processing
Central Facilities
Central Facilities > Centre for Cognitive Science (CCS)
Date deposited: 04 Nov 2024 10:32
Last modified: 04 Nov 2024 10:32