Fernandes Veiga, Filipe (2018)
Towards Dexterous In-Hand Manipulation through Tactile Sensing.
Technische Universität Darmstadt
Dissertation, first publication
Abstract
Currently, robots display manipulation capabilities that translate into actions such as picking and placing objects or pouring liquid from containers. For actions that require finer in-hand manipulation to reposition objects or to use them as tools, robots are still not proficient enough. These shortcomings become even more apparent when considering the ease with which humans perform such manipulations on a daily basis, and until these limitations are addressed, robots cannot truly aid humans with their daily activities.
The scope of possible interactions and the high dimensionality intrinsic to more dexterous robotic hands make the manipulation problem hard to approach. Traditional control approaches to dexterous in-hand manipulation often work under the assumption that physical interactions, object properties, and the kinematics and dynamics of the robot can all be accurately modeled. Unfortunately, these modeling assumptions do not hold in most real environments, as uncertainty accumulates across the individual models. On the other hand, developing learning approaches that generalize to all necessary manipulations proves difficult, as state spaces composed of the robot's degrees of freedom and the necessary feedback channels often become too high-dimensional and hence hard to explore.
For dexterous in-hand manipulation, one of the most notable differences between human and robotic systems is the role of tactile information. The human system comprises thousands of tactile afferents that provide detailed information about what is occurring at each interaction point during the manipulation action. Traditional robotic manipulation approaches, in contrast, often rely on vision or on force feedback, and thus lack either information collected directly at the contact interaction or the various forms of information provided by tactile feedback.
In this thesis, we explore tactile sensing as a means to bridge the gap in manipulation skills between humans and robots. We do so by assessing how to extract relevant feedback signals from the high-dimensional tactile spaces, by exploring how to distribute the complexity of the manipulation problem onto modular components, and by using these components to enable powerful machine learning approaches without loss of generalization capabilities.
Chapter 2 covers the recovery of relevant feedback signals from the tactile sensory information. Here, the desired signal is the state of the interaction between the robot and the object, in particular knowledge of events such as slippage between the object and the finger surface. Through the use of machine learning, we predict such slip events in a manner that generalizes to unknown objects. The ability to predict tactile slip allows analytically designed control solutions to stabilize objects when using a single finger. This is shown for cases where the opposing contact on the object is provided either by a static plane or by a human finger.
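To make the idea concrete, the following is a minimal sketch of slip prediction framed as supervised classification over short windows of tactile readings. It is not the thesis's actual pipeline: the taxel count, window length, prediction horizon, classifier choice, and synthetic stand-in data are all assumptions for illustration.

```python
# Illustrative sketch of learning-based slip prediction (assumptions marked below).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_TAXELS = 19   # pressure cells on one fingertip sensor (assumed)
WINDOW = 10     # timesteps of tactile history per sample (assumed)

# Synthetic stand-in data: each sample is a flattened window of taxel readings,
# each label marks whether slip occurs shortly after the window ends.
X = rng.normal(size=(2000, WINDOW * N_TAXELS))
y = (X[:, -N_TAXELS:].mean(axis=1) + 0.3 * rng.normal(size=2000) > 0.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# A random forest is just one reasonable choice of classifier for this sketch.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out slip-prediction accuracy: {clf.score(X_te, y_te):.2f}")
```

Because the classifier operates on generic windowed tactile features rather than object-specific models, the same scheme can, in principle, be applied to objects unseen during training.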
In Chapter 3, we explore and extend a neurophysiological research hypothesis to the robotics domain. This hypothesis states that, for stabilizing objects in-hand, digits can be controlled independently of each other, with no form of explicit coordination. Taking full advantage of the predictive slip feedback signals and ensuring smooth control responses by each of the finger controllers, we show that stable grips on unknown objects emerge while controlling each digit independently. We show that coordination is achieved through the perturbations observed via the tactile feedback of each individual finger.
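As an illustration of independent digit control (a sketch under assumed gains, thresholds, and units, not the controllers used in the thesis), each finger below reacts only to its own predicted slip probability, with no explicit communication between fingers.

```python
# Hypothetical per-digit grip stabilization: each finger uses only its own
# slip prediction; any coordination is mediated by the object itself.
from dataclasses import dataclass

@dataclass
class FingerController:
    force: float = 1.0      # currently commanded normal force (assumed units)
    gain: float = 0.5       # reaction strength to predicted slip (assumed)
    decay: float = 0.02     # slow relaxation when no slip is predicted (assumed)
    max_force: float = 8.0

    def step(self, slip_probability: float) -> float:
        """Update the commanded force from this finger's own slip prediction only."""
        if slip_probability > 0.5:
            # smooth increase, proportional to how certain the slip prediction is
            self.force += self.gain * (slip_probability - 0.5)
        else:
            # gently relax grip force to avoid crushing the object
            self.force -= self.decay
        self.force = min(max(self.force, 0.5), self.max_force)
        return self.force

# Three digits controlled independently; each sees only its own tactile feedback.
fingers = [FingerController() for _ in range(3)]
slip_predictions = [0.8, 0.2, 0.6]   # e.g. outputs of per-finger slip classifiers
print([f.step(p) for f, p in zip(fingers, slip_predictions)])
```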
Finally, in Chapter 4, we use the modular nature of the grip stabilization control to enable the learning of manipulation policies in a hierarchical control setting. Reinforcement learning is used to learn a high-level control layer that exploits a lower level composed of modular controllers, which ensure the object remains within the grip while being manipulated. In addition, we show that such a hierarchy facilitates the transfer of high-level policies learned in simulation onto real systems by using the low level as an abstraction of the tactile information.
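A rough sketch of this hierarchical decomposition is given below; the class names, interfaces, and the stand-in policy are hypothetical and only illustrate how a learned high-level layer can act on an abstract state exposed by low-level grip controllers that hide the raw tactile detail.

```python
# Hypothetical hierarchy: only the low-level layer touches tactile data, so the
# high-level policy can be trained on the abstract interface (e.g. in simulation).
import numpy as np

class LowLevelGripLayer:
    """Wraps the modular per-finger grip controllers behind an abstract state."""
    def apply(self, command: np.ndarray) -> np.ndarray:
        # Placeholder dynamics: a real system would run the slip-based finger
        # controllers and return, e.g., the estimated in-hand object pose.
        return np.tanh(command)

class HighLevelPolicy:
    """Stand-in for a policy learned with reinforcement learning."""
    def __init__(self, dim: int = 3, seed: int = 0):
        # Random weights as a placeholder for learned parameters.
        self.weights = np.random.default_rng(seed).normal(scale=0.5, size=(dim, dim))

    def act(self, abstract_state: np.ndarray) -> np.ndarray:
        return self.weights @ abstract_state

# One illustrative rollout: the high level commands manipulation targets,
# the low level keeps the object in the grip and reports the abstract state.
low_level, policy = LowLevelGripLayer(), HighLevelPolicy()
state = np.ones(3)
for _ in range(10):
    state = low_level.apply(policy.act(state))
print(state)
```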
Type of item: | Dissertation
---|---
Published: | 2018
Author(s): | Fernandes Veiga, Filipe
Type of entry: | First publication
Title: | Towards Dexterous In-Hand Manipulation through Tactile Sensing
Language: | English
Referees: | Peters, Prof. Dr. Jan ; Santos, Prof. Veronica
Publication date: | 31 July 2018
Place: | Darmstadt
Date of oral examination: | 22 October 2018
URL / URN: | https://tuprints.ulb.tu-darmstadt.de/9180
URN: | urn:nbn:de:tuda-tuprints-91802
Dewey Decimal Classification (DDC): | 000 Generalities, computer science, information science > 004 Computer science
Department(s): | 20 Department of Computer Science ; 20 Department of Computer Science > Intelligent Autonomous Systems
Date deposited: | 27 Oct 2019 20:55
Last modified: | 27 Oct 2019 20:55