Peters, Felix (2023)
Human-AI Interaction – Investigating the Impact on Individuals and Organizations.
Technische Universität Darmstadt
doi: 10.26083/tuprints-00023070
Dissertation, first publication, publisher's version
Abstract
Artificial intelligence (AI) has become increasingly prevalent in consumer and business applications, affecting individuals and organizations alike. The emergence of AI-enabled systems, i.e., systems harnessing AI capabilities that are powered by machine learning (ML), is primarily driven by three technological trends and innovations: the increased use of cloud computing, which allows large-scale data collection, the development of specialized hardware, and the availability of software tools for developing AI-enabled systems. However, recent research has mainly focused on technological innovations, largely neglecting the interaction between humans and AI-enabled systems. Compared to previous technologies, AI-enabled systems possess unique characteristics that make the design of human-AI interaction (HAI) particularly challenging. Examples of such challenges include the probabilistic nature of AI-enabled systems, which stems from their dependence on statistical patterns identified in data, and their ability to take over predictive tasks previously reserved for humans. Thus, it is widely agreed that existing guidelines for human-computer interaction (HCI) need to be extended to maximize the potential of this groundbreaking technology. This thesis attempts to tackle this research gap by examining both individual-level and organizational-level impacts of increasing HAI.
Regarding the impact of HAI on individuals, two widely discussed issues are how the opacity of complex AI-enabled systems affects user interaction and how the increasing deployment of AI-enabled systems affects performance on specific tasks. Consequently, papers A and B of this cumulative thesis address these issues.
Paper A addresses the lack of user-centric research in the field of explainable AI (XAI), which is concerned with making AI-enabled systems more transparent for end-users. It is investigated how individuals perceive explainability features of AI-enabled systems, i.e., features which aim to enhance transparency. To answer this research question, an online lab experiment with a subsequent survey is conducted in the context of credit scoring. The contributions of this study are two-fold. First, based on the experiment, it can be observed that individuals positively perceive explainability features and have a significant willingness to pay for them. Second, the theoretical model for explaining the purchase decision shows that increased perceived transparency leads to increased user trust and a more positive evaluation of the AI-enabled system.
Paper B aims to identify task and technology characteristics that determine the fit between an individual's tasks and an AI-enabled system, as this is commonly believed to be the main driver for system utilization and individual performance. Based on a qualitative research approach in the form of expert interviews, AI-specific factors for task and technology characteristics, as well as the task-technology fit, are developed. The resulting theoretical model enables empirical research to investigate the relationship between task-technology fit and individual performance and can also be applied by practitioners to evaluate use cases of AI-enabled system deployment.
While the first part of this thesis discusses individual-level impacts of increasing HAI, the second part is concerned with organizational-level impacts. Papers C and D address how the increasing use of AI-enabled systems within organizations affects organizational justice, i.e., the fairness of decision-making processes, and organizational learning, i.e., the accumulation and dissemination of knowledge.
Paper C addresses the issue of organizational justice, as AI-enabled systems are increasingly supporting decision-making tasks that humans previously conducted on their own. In detail, the study examines the effects of deploying an AI-enabled system in the candidate selection phase of the recruiting process. Through an online lab experiment with recruiters from multinational companies, it is shown that the introduction of so-called CV recommender systems, i.e., systems that identify suitable candidates for a given job, positively influences the procedural justice of the recruiting process. More specifically, the objectivity and consistency of the candidate selection process are strengthened, which constitute two essential components of procedural justice.
Paper D examines how the increasing use of AI-enabled systems influences organizational learning processes. The study derives propositions from conducting a series of agent-based simulations. It is found that AI-enabled systems can take over explorative tasks, which enables organizations to counter the longstanding issue of learning myopia, i.e., the human tendency to favor exploitation over exploration. Moreover, it is shown that the ongoing reconfiguration of deployed AI-enabled systems represents an essential activity for organizations aiming to leverage their full potential. Finally, the results suggest that knowledge created by AI-enabled systems can be particularly beneficial for organizations in turbulent environments.
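Paper D's agent-based simulation approach can be illustrated with a minimal sketch. This is not the thesis's actual model (which is not specified here) but a standard exploration-exploitation setup under an assumed "turbulent environment": each option's true payoff drifts over time, so a purely exploitative (myopic) learner's knowledge goes stale, while a learner whose explorative sampling is delegated keeps discovering options that have become valuable. The function name `run_agent` and all parameters are illustrative choices.

```python
import random

def run_agent(epsilon, steps=5000, n_options=10, drift=0.05, seed=1):
    """Simulate one learner on a turbulent task.

    epsilon is the share of steps spent exploring (sampling a random
    option); the remaining steps exploit the option currently believed
    to be best. Option payoffs random-walk each step, modeling a
    turbulent environment in which early knowledge decays.
    """
    rng = random.Random(seed)
    true_payoffs = [rng.gauss(0, 1) for _ in range(n_options)]
    estimates = [0.0] * n_options  # learner's beliefs about payoffs
    counts = [0] * n_options
    choices, rewards = [], []
    for _ in range(steps):
        # environment turbulence: every option's payoff drifts
        for i in range(n_options):
            true_payoffs[i] += rng.gauss(0, drift)
        if rng.random() < epsilon:
            choice = rng.randrange(n_options)          # explore
        else:
            choice = estimates.index(max(estimates))   # exploit belief
        reward = true_payoffs[choice] + rng.gauss(0, 1)
        counts[choice] += 1
        # incremental mean update of the chosen option's estimate
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
        choices.append(choice)
        rewards.append(reward)
    return sum(rewards) / steps, choices

# myopic learner (pure exploitation) vs. a learner whose explorative
# sampling is handled separately, as an AI-enabled system might
myopic_avg, myopic_choices = run_agent(epsilon=0.0)
explorer_avg, explorer_choices = run_agent(epsilon=0.1)
```

In such a setup the myopic learner typically locks onto whichever option looked best early on while its payoff drifts away, whereas the exploring learner samples all options and tends to earn higher average rewards in most runs, which is the intuition behind Paper D's proposition that delegating exploration counters learning myopia, especially in turbulent environments.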
Type of entry: Dissertation
Published: 2023
Author(s): Peters, Felix
Type of record: First publication
Title: Human-AI Interaction – Investigating the Impact on Individuals and Organizations
Language: English
Referees: Buxmann, Prof. Dr. Peter ; Stock-Homburg, Prof. Dr. Ruth
Year of publication: 2023
Place: Darmstadt
Collation: XV, 105 pages
Date of oral examination: 1 December 2022
DOI: 10.26083/tuprints-00023070
URL / URN: https://tuprints.ulb.tu-darmstadt.de/23070
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-230700
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science ; 300 Social sciences > 330 Economics
Department(s)/field(s): 01 Department of Law and Economics > Business administration fields > Information Systems (Wirtschaftsinformatik)
Deposit date: 27 Jan 2023 13:19
Last modified: 31 Jan 2023 13:46