Lissermann, Roman (2014)
Novel Devices and Interaction Concepts for Fluid Collaboration.
Technische Universität Darmstadt
Dissertation, first publication
Abstract
This thesis addresses computer-augmented collaborative work. More precisely, it focuses on co-located collaboration, where co-workers come together in the same place, usually a meeting room. We assume that co-workers use both mobile devices (i.e., hand-held devices) and a static device (i.e., an interactive table). These devices provide multiple output modalities, such as visual and sound output. The co-workers are assumed to process digital content (e.g., documents and videos).
According to both common experience and scientific evidence, co-workers often switch between rather individual, self-directed work and tightly shared group work; these working styles are denoted as loose and tight collaboration, respectively. The overarching goal of this thesis is to better support seamless transitions between loose and tight collaboration, denoted as fluid collaboration. In order to support such fluid transitions between the two working styles, we have to address and mitigate conflicting requirements on both output modalities. In tight collaboration, co-workers appreciate proximity and equal access to content; both workspaces and content are shared. In loose collaboration, co-workers desire sufficient space of their own and minimal interference with their content and interactions. Prior work has shown that in conventional settings (e.g., on interactive tables), a transition between tight and loose collaboration leads to limited personal workspace and thereby to workspace interference, clutter, and other constraints. During collaboration, such interference concerns both visual and sound output.
In light of these facts, further research on interactive devices (e.g., interactive tables and mobile devices) is needed to support fluid collaboration with different output modalities. These observations lead to the central research question of this thesis: how can fluid co-located collaboration with visual and sound content be supported? This thesis explores the question in three main research directions: (1) surface-based interaction, (2) spatial interaction, and (3) embodied sound interaction; directions (1) and (2) address visual content, while (3) focuses on auditory content. In each direction, we conceptualized, implemented, and evaluated a set of device concepts together with corresponding interaction concepts.

The first research direction, Surface-Based Interaction, contributes a novel tabletop, called Permulin, that provides (1) a group view establishing common ground during phases of tight collaboration, (2) private full-screen views for each collaborator to scaffold loosely coupled collaboration, and (3) interaction and visualization techniques for sharing content between these views for coordination and mutual awareness. Results from an exploratory study and from a controlled experiment provide evidence for the following advancements: (1) Permulin supports fluid collaboration by allowing users to transition fluidly between loose and tight collaboration. (2) Users perceive and use Permulin as both a cooperative and an individual device. Among other findings, this is reflected in participants occupying significantly larger interaction areas on Permulin than on a conventional tabletop system. (3) Permulin provides unique awareness properties: participants were highly aware of each other and of their interactions during tightly coupled collaboration, while being able to unobtrusively perform individual work during loosely coupled collaboration.

In the second research direction, Spatial Interaction, we simulate future paper-like display devices and investigate how well-known collaboration and interaction techniques for paper documents can be transferred to video navigation on such devices. We contribute a device concept and interaction techniques that allow multiple users to collaboratively process collections of videos on multiple paper-like displays. Users can navigate video collections, create an overview of multiple videos, and structure and organize video content. The proposed approach, coined CoPaperVideo, leverages the physical arrangement of the devices. Results of two user studies indicate that our spatial interaction concepts allow users to flexibly organize and structure multiple videos in physical space and to transition easily and seamlessly between individual and group work. In addition, the spatial interaction concepts leverage 3D space for interaction and mitigate space limitations.
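The abstract leaves open how CoPaperVideo senses and interprets physical arrangement. As a minimal illustration of the idea, the following Python sketch assumes optically tracked display positions and clusters displays that lie close together into one video group; the names (Display, group_adjacent) and the 15 cm adjacency threshold are hypothetical, not taken from the thesis.

```python
from dataclasses import dataclass
import math

# Illustrative sketch only: CoPaperVideo leverages the physical arrangement
# of spatially aware paper-like displays, but the abstract does not specify
# the mechanism. Here, displays whose tracked positions lie close together
# are clustered into one group of videos (assumed model).

@dataclass
class Display:
    video_id: str
    position: tuple  # (x, y, z) tracked pose of the display, in metres

def group_adjacent(displays: list, threshold: float = 0.15) -> list:
    """Cluster displays whose pairwise distance falls below `threshold`.

    Union-find over all pairs, so transitively adjacent displays
    (e.g., a row of side-by-side sheets) end up in one group.
    """
    parent = list(range(len(displays)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(displays)):
        for j in range(i + 1, len(displays)):
            if math.dist(displays[i].position, displays[j].position) < threshold:
                parent[find(i)] = find(j)  # merge the two clusters

    groups: dict = {}
    for i, d in enumerate(displays):
        groups.setdefault(find(i), []).append(d)
    return list(groups.values())
```

Under such a model, placing two displays side by side would merge their videos into one overview, while pulling a display away would split it off again for individual work.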
The first two research directions contribute novel devices and interaction concepts for visual content. Visual interfaces are particularly suitable for collaboration because they afford direct manipulation of visual content. However, while current devices support both visual and sound output, suitable devices and interaction concepts for collaborative direct manipulation of sound content are still lacking. Hence, the third research direction, Embodied Sound Interaction, explores novel devices and interaction concepts for direct manipulation of sound during fluid collaboration. First, we contribute interfaces that enable users to control sound individually by means of body-based interaction. The concept focuses on the body part where sound is perceived: the user’s own ear. Second, direct manipulation of sound is supported through spatial control of sound sources. Virtual sound sources are situated in 3D space and physically associated with spatially aware paper-like displays that embed videos. By physically moving these displays, each user can control, and focus on, multiple sound sources individually or collaboratively. The evaluation supports our hypothesis that our embodied sound interaction concepts provide effective sound support during fluid collaboration.
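The abstract attaches one virtual sound source to each spatially aware display but does not specify the audio rendering. As a minimal sketch, the function below renders one source for one listener using inverse-distance attenuation and equal-power stereo panning, a common textbook model; render_source and all its parameters are illustrative assumptions, not the thesis's implementation.

```python
import math

# Illustrative sketch: one virtual sound source per spatially aware display.
# Moving a display moves its source; each listener hears every source with
# distance-based gain and equal-power stereo panning (assumed model).

def render_source(listener_pos, listener_yaw, source_pos, rolloff=1.0):
    """Return (left_gain, right_gain) for one source heard by one listener.

    listener_pos, source_pos: (x, y, z) in metres; listener_yaw in radians,
    where yaw 0 means the listener faces the +z axis.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[2] - listener_pos[2]
    dist = max(math.hypot(dx, dz), 1e-6)          # horizontal distance
    gain = 1.0 / (1.0 + rolloff * dist)           # inverse-distance attenuation
    bearing = math.atan2(dx, dz) - listener_yaw   # angle relative to gaze
    pan = math.sin(bearing)                       # -1 hard left ... +1 hard right
    left = gain * math.sqrt((1.0 - pan) / 2.0)    # equal-power panning:
    right = gain * math.sqrt((1.0 + pan) / 2.0)   # left**2 + right**2 == gain**2
    return left, right
```

In such a model, moving a display re-positions its sound source for everyone at once, and "focusing" on a source could simply mean boosting its gain while ducking the others.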
| Type of entry: | Dissertation |
|---|---|
| Published: | 2014 |
| Author(s): | Lissermann, Roman |
| Kind of entry: | First publication |
| Title: | Novel Devices and Interaction Concepts for Fluid Collaboration |
| Language: | English |
| Referees: | Mühlhäuser, Prof. Dr. Max ; Nanayakkara, Prof. Dr. Suranga |
| Publication date: | 29 September 2014 |
| Place: | Darmstadt |
| Date of oral examination: | 4 September 2014 |
| URL / URN: | http://tuprints.ulb.tu-darmstadt.de/4167 |
| Uncontrolled keywords: | Human-Computer Interaction, Interaction Design, Collaboration, Mixed-focus Collaboration, Individual Work, Group Work, User Interface, User Experience, Input Devices, Input Strategies, Interaction Styles, Optical Tracking, Paper-like Displays, Tangible User Interface, Electronic Paper, Flexible Display, Thin-film Display, Multiple Displays, Video Browsing, Video Navigation, Tabletop, Interactive Surfaces, Multi-view, Collaborative Coupling Styles, Multi Touch, Personal Input, Ear-based Interaction, Wearable Devices, Ear-worn, Mobile Interaction, Eyes-free, Device Augmentation, On-body Interaction, Spatial Sound, 3D Sound |
| URN: | urn:nbn:de:tuda-tuprints-41673 |
| Dewey Decimal Classification (DDC): | 000 Generalities, computer science, information science > 004 Computer science |
| Division(s): | 20 Department of Computer Science |
| Date deposited: | 12 Oct 2014 19:55 |
| Last modified: | 12 Oct 2014 19:55 |