Visual-aided Selection of Reactive Elements in Intelligent Environments : Visuell gestützte Selektion reaktiver Elemente in intelligenten Umgebungen

Majewski, Martin (2012)
Visual-aided Selection of Reactive Elements in Intelligent Environments : Visuell gestützte Selektion reaktiver Elemente in intelligenten Umgebungen.
Technische Universität Darmstadt
Bachelor thesis, Bibliography

Abstract

Since the vision of the vanishing, ubiquitous computer was formulated in the 1990s, Intelligent Environments have become the main topic of many research efforts. Interaction with Intelligent Environments preferably follows the multi-modal interaction paradigm, as in the notable research on natural interaction that allows communication through facial expressions, voice commands and gestures. Gestural interaction in terms of pointing for selection is the main focus of this thesis. Although pointing is regarded as intuitive for the user, it leads to a significant offset between the user's intention and the system's interpretation. This offset makes interaction with reactive elements in Intelligent Environments unintuitive and hardly predictable if no guidance is provided to the user. This thesis presents the challenges of the pointing-for-selection process, including the drawbacks of current guiding systems, and proposes a concept for solving these challenges with a ubiquitous visual guiding system. This system supports marker-free, full-body gestural interaction in Intelligent Environments by providing a visual cue at the location the user is currently pointing at. We expect this system to put users in a situation where they can correct their pointing themselves, without extensive training of either user or machine, resulting in a more accurate and intuitive selection of reactive elements in Intelligent Environments. A prototype system - the E.A.G.L.E. - was built to realize this concept using a robotic laser pointing system. A comparative evaluation with a group of 20 subjects was performed to confirm our expectations regarding the intention-to-interpretation offset and the effects of the self-correction process caused by the visual cue, resulting in a significant gain in accuracy.
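The abstract does not describe how the pointed-at location is determined; a common approach in marker-free pointing interfaces is to cast a ray defined by two tracked body joints (for example shoulder and hand) and intersect it with a known surface in the room model. The following minimal sketch only illustrates that general idea; the function name, the joint choice and the assumption of a known wall plane are ours, not the thesis's method.

```python
# Illustrative sketch, not taken from the thesis: estimate the pointed-at
# location as the intersection of the shoulder->hand ray with a known plane
# (e.g. a wall holding a reactive element). Joint positions are assumed to
# come from a marker-free body tracker; the plane from a room model.
import numpy as np

def pointed_location(shoulder, hand, plane_point, plane_normal):
    """Intersect the shoulder->hand ray with a plane; return the hit point or None."""
    shoulder = np.asarray(shoulder, dtype=float)
    hand = np.asarray(hand, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)

    direction = hand - shoulder
    denom = direction.dot(plane_normal)
    if abs(denom) < 1e-9:      # ray is parallel to the plane: no usable intersection
        return None
    t = (plane_point - shoulder).dot(plane_normal) / denom
    if t <= 0:                 # plane lies behind the user
        return None
    return shoulder + t * direction

# Example: user at the origin pointing towards a wall at x = 3 m.
print(pointed_location(shoulder=[0.0, 0.0, 1.5], hand=[0.5, 0.1, 1.4],
                       plane_point=[3.0, 0.0, 0.0], plane_normal=[1.0, 0.0, 0.0]))
```

In a setup like the one described in the abstract, such an estimated point could then be highlighted by a visual cue (e.g. a steerable laser dot), allowing the user to correct the remaining intention-to-interpretation offset themselves.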

Type of entry: Bachelor thesis
Published: 2012
Author(s): Majewski, Martin
Type of record: Bibliography
Title: Visual-aided Selection of Reactive Elements in Intelligent Environments : Visuell gestützte Selektion reaktiver Elemente in intelligenten Umgebungen
Language: English
Year of publication: 2012
Uncontrolled keywords: Business Field: Digital society; Research Area: Confluence of graphics and vision; Gesture-based interaction; Human-computer interaction (HCI); Feedback; Multimodal systems; Ambient intelligence (AmI)
Additional information: 64 p.
Division(s)/Department(s): 20 Fachbereich Informatik
20 Fachbereich Informatik > Graphisch-Interaktive Systeme
Date deposited: 12 Nov 2018 11:16
Last modified: 12 Nov 2018 11:16