Prasad, Vignesh (2024)
Learning Human-Robot Interaction: A Case Study on Human-Robot Handshaking.
Technische Universität Darmstadt
doi: 10.26083/tuprints-00019025
Dissertation, first publication, publisher's version
Abstract
For some years now, the use of humanoid social robots in various situations has been on the rise. These robots are developed to interact with humans and are equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education and healthcare. In such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell and congratulations. Moreover, the act of handshaking, given its extended phase of physical contact, allows one to convey complex emotions via the sense of touch. Giving an appropriate response therefore plays an important role in improving the naturalness of the interaction. Furthermore, a timely response yields a more natural interaction, in which the robot is able to predict the human partner's movements and adapt its motion accordingly. Modelling the dynamics of such interactions is a key aspect of Human-Robot Interaction.
In this context, the main focus of this thesis is to understand how such a physically interactive behaviour affects an interaction with a social robot. The contributions of this thesis are as follows. We first perform a thorough analysis of existing works related to Human-Robot Handshaking, exploring the modelling aspects of realising an effective handshake as well as social aspects such as the acceptance of such behaviours, auxiliary elements such as gaze or approach motions, and human-likeness. We then incorporate these findings into novel frameworks to realise a timely, adaptive and socially acceptable handshake on a humanoid social robot. We then explore how to extend this modularised form of learning towards a general framework for learning coordinated Human-Robot Interaction. We validate the effectiveness of the proposed frameworks through extensive experimental evaluations with human users who interact with a humanoid social robot equipped with our approaches.
As a first step, the existing state of Human-Robot Handshaking research is reviewed and the works are categorised based on their focus areas. Following this, the major findings of these areas are drawn out and their pitfalls are analysed. It is mainly seen that synchronisation is key during the different phases of the interaction. Additional factors like gaze, voice and facial expressions can affect the perception of a robotic handshake, along with internal factors like personality and mood, which can affect the way in which handshaking behaviours are executed by humans. Based on the findings and insights, possible ways forward for future research on such physically interactive behaviours are discussed.
In the case of handshaking and other similar physically interactive behaviours, having a timely response yields a more natural interaction, where the robot is able to predict the human partner's movements and adapt its motion accordingly. Modelling the dynamics of such interactions is a key aspect of Human-Robot Interaction. In this work, a framework is developed for robots to learn such interactions directly from human-human interactions in a modular fashion, by breaking down the interactions into their underlying segments and learning the sequencing between them. We do so using Hidden Markov Models to model the interaction dynamics via the latent embeddings learned by a Variational Autoencoder. We show how the interaction dynamics learned from Human-Human Interactions can help regularise the learning of robot trajectories, and we explore the conditional generation of robot motions from human observations to enable learning suitable and accurate Human-Robot Interactions. We further explore how to adapt the generated motions for a spatially accurate and compliant handshaking behaviour, leading to a higher degree of acceptance by human users.
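The abstract stays at a high level, but the conditioning idea it describes (predicting the robot's motion from the human partner's observed motion via a segment-wise HMM over VAE latent embeddings) can be sketched roughly as follows. This is a minimal illustration under assumed dimensions, with random stand-ins for the trained VAE encoder/decoder and HMM parameters, not the implementation used in the thesis.

```python
import numpy as np

# Hypothetical sizes: human/robot latent dimensions and number of HMM segments.
DZ_H, DZ_R, K = 8, 8, 6
rng = np.random.default_rng(0)

# Stand-ins for a trained HMM over the joint latent space [z_human, z_robot]:
# per-segment means/covariances, transition matrix and initial distribution.
mu = rng.normal(size=(K, DZ_H + DZ_R))
Sigma = np.stack([np.eye(DZ_H + DZ_R) for _ in range(K)])
A = np.full((K, K), 1.0 / K)   # segment transition probabilities
pi = np.full(K, 1.0 / K)       # initial segment probabilities

def gaussian_pdf(x, mean, cov):
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    quad = d @ np.linalg.solve(cov, d)
    return np.exp(-0.5 * (quad + logdet + len(x) * np.log(2 * np.pi)))

def forward_step(alpha, z_h):
    """One HMM forward step using only the human part of each segment Gaussian."""
    lik = np.array([gaussian_pdf(z_h, mu[k, :DZ_H], Sigma[k, :DZ_H, :DZ_H])
                    for k in range(K)])
    alpha = lik * (alpha @ A) if alpha is not None else lik * pi
    return alpha / alpha.sum()

def condition_robot(z_h, alpha):
    """Per-segment Gaussian conditioning of z_robot on z_human, mixed by the state posterior."""
    z_r = np.zeros(DZ_R)
    for k in range(K):
        S_hh = Sigma[k, :DZ_H, :DZ_H]
        S_rh = Sigma[k, DZ_H:, :DZ_H]
        cond_mean = mu[k, DZ_H:] + S_rh @ np.linalg.solve(S_hh, z_h - mu[k, :DZ_H])
        z_r += alpha[k] * cond_mean
    return z_r

# Toy rollout: in a real system, encode_human / decode_robot would be the VAE.
alpha = None
for t in range(10):
    z_h = rng.normal(size=DZ_H)        # placeholder for encode_human(observation_t)
    alpha = forward_step(alpha, z_h)
    z_r = condition_robot(z_h, alpha)  # placeholder input to decode_robot(z_r)
```

The weighted per-segment conditioning mirrors the modular structure described above: each HMM state corresponds to one underlying segment of the interaction, and the forward recursion over the states provides the sequencing between them.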
We further explore how the performance of the reactive motion generation can be improved by bridging the gap in the proposed framework, integrating the conditioning of the HMMs into the VAEs in a more principled manner. To this end, we demonstrate how Mixture Density Networks lend themselves as an extension of the underlying HMM conditioning. Such a structure inherently allows the model to capture the complex and multimodal nature of human behaviour. We demonstrate how the proposed framework can enhance the prediction of the reactive motion generation by learning multiple latent policies which, when combined, enable the generation of more accurate interactions.
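Again, the abstract gives no implementation details; below is a minimal sketch of what a Mixture Density Network head over the latent space might look like (PyTorch, with all dimensions and layer sizes assumed, not taken from the thesis). It shows only the generic MDN mechanics the text alludes to: a network predicting mixture weights, means and scales over the robot latent given the human latent, trained with a mixture negative log-likelihood so that several latent policies can coexist and capture the multimodality of human behaviour.

```python
import torch
import torch.nn as nn

class MDNPolicy(nn.Module):
    """Illustrative Mixture Density Network head: predicts a Gaussian mixture
    over the robot latent given the human latent (all sizes are assumptions)."""
    def __init__(self, dz_h=8, dz_r=8, n_components=6, hidden=64):
        super().__init__()
        self.dz_r, self.K = dz_r, n_components
        self.backbone = nn.Sequential(nn.Linear(dz_h, hidden), nn.ReLU())
        self.logits = nn.Linear(hidden, n_components)          # mixture weights
        self.means = nn.Linear(hidden, n_components * dz_r)    # component means
        self.log_std = nn.Linear(hidden, n_components * dz_r)  # diagonal scales

    def forward(self, z_h):
        h = self.backbone(z_h)
        w = torch.softmax(self.logits(h), dim=-1)
        mu = self.means(h).view(-1, self.K, self.dz_r)
        std = self.log_std(h).view(-1, self.K, self.dz_r).exp()
        return w, mu, std

    def nll(self, z_h, z_r):
        """Negative log-likelihood of robot latents under the predicted mixture."""
        w, mu, std = self(z_h)
        comp = torch.distributions.Normal(mu, std)
        log_prob = comp.log_prob(z_r.unsqueeze(1)).sum(-1)      # (batch, K)
        return -torch.logsumexp(torch.log(w) + log_prob, dim=-1).mean()

# Toy usage with random tensors standing in for VAE latents from real recordings.
model = MDNPolicy()
z_h, z_r = torch.randn(32, 8), torch.randn(32, 8)
loss = model.nll(z_h, z_r)
loss.backward()
```

A mixture head of this kind can be read as a learned counterpart to the per-state Gaussian conditioning in the HMM sketch above: instead of fixed segment Gaussians, the mixture parameters are predicted directly from the observed human latent.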
To summarise, the goals of this thesis are: (i) to further investigate the act of handshaking in the scope of physical Human-Robot Interactions, (ii) to develop a framework that can learn a library of such physically interactive behaviours to widen the social skills of a robot, and (iii) to explore how the accuracy of generating realistic and natural interactive behaviours can be improved.
Item type: Dissertation
Published: 2024
Author(s): Prasad, Vignesh
Type of entry: First publication
Title: Learning Human-Robot Interaction: A Case Study on Human-Robot Handshaking
Language: English
Referees: Peters, Prof. Dr. Jan ; Stock-Homburg, Prof. Dr. Ruth ; Hu, Prof. Dr. Yue
Date of publication: 28 October 2024
Place of publication: Darmstadt
Collation: xxi, 118 pages
Date of oral examination: 1 December 2023
DOI: 10.26083/tuprints-00019025
URL / URN: https://tuprints.ulb.tu-darmstadt.de/19025
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-190254
Additional information: In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of Technical University of Darmstadt's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink. If applicable, University Microfilms and/or ProQuest Library, or the Archives of Canada may supply single copies of the dissertation.
Dewey Decimal Classification (DDC): 000 Generalities, computer science, information science > 004 Computer science; 600 Technology, medicine, applied sciences > 600 Technology; 600 Technology, medicine, applied sciences > 620 Engineering and mechanical engineering
Department(s): 20 Department of Computer Science; 20 Department of Computer Science > Intelligent Autonomous Systems
Date deposited: 28 Oct 2024 13:11
Last modified: 29 Oct 2024 13:39