Rieger, Thomas ; Braun, Norbert (2003)
Narrative Use of Sign Language by a Virtual Character for the Hearing Impaired.
In: Computer Graphics Forum, 22 (3)
Article, Bibliography
Abstract
This paper describes the concept and control of a 3D virtual character system with facial expressions and gestures, serving as a conversational user interface with narrative expressiveness for the hearing impaired. The gestures and facial expressions are based on morphing techniques. The system generates sign language and mouth motion in real time from text, at lip-reading quality. The concept of Narrative Extended Speech Acts (NESA) is introduced, based on Interactive Storytelling techniques and the concepts of Narrative Conflict and Suspense Progression. We define a set of annotation tags to be used with NESAs. We use NESAs to classify conversation fragments and to enhance computer-generated sign language. We explain how the sign language gestures are generated and show the possibilities for editing them. Furthermore, we give details on how NESAs are mapped to gestures. We show how the virtual character's behaviour and gestures can be controlled in a human-oriented way and provide an outlook on future work.
| Field | Value |
|---|---|
| Type of entry | Article |
| Published | 2003 |
| Author(s) | Rieger, Thomas ; Braun, Norbert |
| Type of record | Bibliography |
| Title | Narrative Use of Sign Language by a Virtual Character for the Hearing Impaired |
| Language | English |
| Year of publication | 2003 |
| Journal or publication title | Computer Graphics Forum |
| Volume | 22 |
| Issue | 3 |
| Uncontrolled keywords | Virtual characters, Virtual narrators, Narrative intelligence, Avatar behavior, Story engine |
| Divisions | 20 Department of Computer Science; 20 Department of Computer Science > Graphisch-Interaktive Systeme |
| Date deposited | 16 Apr 2018 09:04 |
| Last modified | 16 Apr 2018 09:04 |