
See No Evil, Hear No Evil: How Users Blindly Overrely on Robots with Automation Bias

Stock-Homburg, Ruth ; Nguyen, Mai Anh (2023)
See No Evil, Hear No Evil: How Users Blindly Overrely on Robots with Automation Bias.
Proceedings of Forty-Second International Conference on Information Systems. Hyderabad (10.12.2023-13.12.2023)
Conference publication, bibliography

Abstract

Recent developments in generative artificial intelligence show how quickly users carelessly defer to intelligent systems, ignoring the systems' vulnerabilities and focusing on their superior capabilities. This is detrimental when system failures go unnoticed. This paper investigates this mindless overreliance on systems, defined as automation bias (AB), in human-robot interaction. We conducted two experimental studies (N1 = 210, N2 = 438) with social robots in a corporate setting to investigate the psychological mechanisms and influencing factors of AB. In particular, users experience perceptual and behavioral AB with the robot, which is enhanced by robot competence depending on task complexity and is even stronger for emotional than for analytical tasks. Surprisingly, robot reliability negatively affected AB. We also found a negative indirect-only mediation of AB on robot satisfaction. Finally, we provide implications for the appropriate use of robots to prevent employees from using them as a self-sufficient system instead of a supporting system.

Item type: Conference publication
Published: 2023
Author(s): Stock-Homburg, Ruth ; Nguyen, Mai Anh
Type of entry: Bibliography
Title: See No Evil, Hear No Evil: How Users Blindly Overrely on Robots with Automation Bias
Language: English
Year of publication: 2023
Place: Hyderabad
Event title: Proceedings of Forty-Second International Conference on Information Systems
Event location: Hyderabad
Event dates: 10.12.2023-13.12.2023
Division(s)/Department(s): 01 Department of Law and Economics
01 Department of Law and Economics > Business Administration
01 Department of Law and Economics > Business Administration > Marketing & Human Resource Management
Date deposited: 11 Feb 2024 18:55
Last modified: 05 Jul 2024 07:43