TU Darmstadt / ULB / TUbiblio

Robust Real-Time Gesture Recognition Based on Hidden Conditional Random Fields

Wilbers, Daniel (2013)
Robust Real-Time Gesture Recognition Based on Hidden Conditional Random Fields.
Technische Universität Darmstadt
Bachelor's thesis, Bibliography

Abstract

Using innovative input methods, such as speech commands and hand gestures, is of growing interest for controlling consumer devices. Currently available systems mostly provide a remote control as the main way of controlling the device. Recent research increasingly focuses on multi-modal systems, i.e. systems that can be controlled with various input methods. One contribution to such a system is the aspect examined in this thesis: using gestures as an input method. The aim is therefore to investigate and develop a gesture recognition system that is able to spot and classify a limited set of gestures within a continuous data stream. Accordingly, this thesis covers the two necessary aspects of a continuous gesture recognition system. To address the problem of spotting, i.e. detecting the start and end of a gesture, the use of a special activation posture is proposed, while the class of the gesture is computed by a gesture classification module. For this, instead of the well-known generative graphical model, the Hidden Markov Model (HMM), the gesture recognition module is based on the more recently introduced discriminative graphical model, the Hidden Conditional Random Field (HCRF), which, in contrast to the HMM, does not require the assumption of independent observations and hence generally performs better in human gesture recognition. The features used by the HCRF classification scheme are extracted from 3D skeleton joint data provided by the Microsoft Kinect SDK. This skeleton data contains only the upper-body joints, which results from the typically seated position of the user and contrasts with recently proposed gesture recognition modules. In addition, two different gesture sets are explored and discussed. The evaluation shows that the proposed gesture spotting method achieved 91.38% accuracy, whereas the gesture classification with Hidden Conditional Random Fields performed best at up to 98.33% accuracy.
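The spotting idea described above — a dedicated activation posture marking the start and end of a gesture segment — can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the concrete posture ("right hand above the right shoulder"), the joint names, and the hold threshold are all hypothetical stand-ins, since the abstract does not specify them.

```python
# Hypothetical sketch of activation-posture gesture spotting.
# A frame is a dict mapping joint names to (x, y, z) positions with
# y pointing up, as a Kinect-style skeleton tracker might provide.
# The posture "right hand above right shoulder" is an illustrative
# stand-in, not the posture used in the thesis.

def is_activation_posture(frame):
    """Return True if the right hand is raised above the right shoulder."""
    return frame["hand_right"][1] > frame["shoulder_right"][1]

def spot_gestures(frames, hold=3):
    """Return (start, end) index pairs of candidate gesture segments.

    A segment starts once the activation posture has been held for
    `hold` consecutive frames (to suppress tracker noise) and ends
    when the posture is released; classification of the segment would
    then be handed to the HCRF module.
    """
    segments = []
    run = 0          # consecutive frames in the activation posture
    start = None     # start index of the current candidate segment
    for i, frame in enumerate(frames):
        if is_activation_posture(frame):
            run += 1
            if run == hold and start is None:
                start = i - hold + 1
        else:
            if start is not None:
                segments.append((start, i - 1))
                start = None
            run = 0
    if start is not None:                      # stream ended mid-gesture
        segments.append((start, len(frames) - 1))
    return segments
```

A real system would run this incrementally on the live skeleton stream rather than on a finished list, but the hold-counter logic is the same.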

Type of entry: Bachelor's thesis
Published: 2013
Author(s): Wilbers, Daniel
Kind of entry: Bibliography
Title: Robust Real-Time Gesture Recognition Based on Hidden Conditional Random Fields
Language: English
Year of publication: 2013

Uncontrolled keywords: Business Field: Digital society, Research Area: Confluence of graphics and vision, Gesture recognition, Gesture based interaction, 3D Interaction, Probabilistic models, Computer vision, Realtime interaction
Additional information:

55 p.

Division(s)/department(s): 20 Department of Computer Science
20 Department of Computer Science > Graphisch-Interaktive Systeme
Date deposited: 12 Nov 2018 11:16
Last modified: 12 Nov 2018 11:16