TU Darmstadt / ULB / TUbiblio

Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction

Trick, Susanne ; Koert, Dorothea ; Peters, Jan ; Rothkopf, Constantin A. (2022)
Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction.
International Conference on Intelligent Robots and Systems (IROS). Macau, China (03.11.2019-08.11.2019)
doi: 10.26083/tuprints-00020552
Conference publication, secondary publication, postprint

Warning: A newer version of this entry is available.

Abstract

Assistive robots can potentially improve the quality of life and personal independence of elderly people by supporting everyday life activities. To guarantee a safe and intuitive interaction between human and robot, human intentions need to be recognized automatically. As humans communicate their intentions multimodally, the use of multiple modalities for intention recognition may not only increase robustness against failures of individual modalities but, above all, reduce the uncertainty about the intention to be recognized. This is desirable because, particularly in direct interaction between robots and potentially vulnerable humans, both minimal uncertainty about the situation and knowledge of this actual uncertainty are necessary. Thus, in contrast to existing methods, this work introduces a new approach to multimodal intention recognition that focuses on uncertainty reduction through classifier fusion. For each of the four considered modalities (speech, gestures, gaze directions, and scene objects), an individual intention classifier is trained, each outputting a probability distribution over all possible intentions. By combining these output distributions using the Bayesian method Independent Opinion Pool [1], the uncertainty about the intention to be recognized can be decreased. The approach is evaluated in a collaborative human-robot interaction task with a 7-DoF robot arm. The results show that fused classifiers, which combine multiple modalities, outperform the respective individual base classifiers with respect to increased accuracy, robustness, and reduced uncertainty.
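The fusion step described in the abstract (combining the classifiers' output distributions via Independent Opinion Pool) can be sketched as a normalized elementwise product of the per-modality distributions, assuming a uniform prior over intentions so that the prior term cancels. The modality names and probability values below are illustrative assumptions, not data from the paper.

```python
def independent_opinion_pool(distributions):
    """Fuse per-modality probability distributions over the same set of
    intentions by taking their elementwise product and renormalizing
    (Independent Opinion Pool under a uniform prior)."""
    product = [1.0] * len(distributions[0])
    for dist in distributions:
        product = [p * q for p, q in zip(product, dist)]
    total = sum(product)
    return [p / total for p in product]

# Hypothetical outputs of two modality classifiers over three intentions.
speech = [0.6, 0.3, 0.1]
gaze = [0.5, 0.4, 0.1]
fused = independent_opinion_pool([speech, gaze])
print(fused)
```

Because modalities that agree reinforce each other multiplicatively, the fused distribution is more peaked (lower entropy) than either input, which is the uncertainty-reduction effect the abstract refers to.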

Entry type: Conference publication
Published: 2022
Author(s): Trick, Susanne ; Koert, Dorothea ; Peters, Jan ; Rothkopf, Constantin A.
Type of entry: Secondary publication
Title: Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction
Language: English
Year of publication: 2022
Place of publication: Darmstadt
Date of first publication: 2022
Publisher: IEEE
Book title: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Collation: 8 pages
Event title: International Conference on Intelligent Robots and Systems (IROS)
Event location: Macau, China
Event dates: 03.11.2019-08.11.2019
DOI: 10.26083/tuprints-00020552
URL / URN: https://tuprints.ulb.tu-darmstadt.de/20552
Origin: Secondary publication service
Status: Postprint
URN: urn:nbn:de:tuda-tuprints-205520
Dewey Decimal Classification (DDC) subject group: 000 Generalities, computer science, information science > 004 Computer science
Department(s)/field(s): 20 Department of Computer Science
20 Department of Computer Science > Intelligent Autonomous Systems
TU projects: EC/H2020|640554|SKILLS4ROBOTS
Deposit date: 18 Nov 2022 14:15
Last modified: 21 Nov 2022 10:49
