Hoppe, David (2019)
Eye movements in dynamic environments.
Technische Universität Darmstadt
Dissertation, first publication
Abstract
The capabilities of the visual system and the biological mechanisms controlling its active nature are still unequaled by modern technology. Despite the spatial and temporal complexity of our environment, we succeed in tasks that demand extracting relevant information from complex, ambiguous, and noisy sensory data. Dynamically distributing visual attention across multiple targets is an important task. In many situations, for example driving a vehicle, success requires switching focus between several targets (e.g., looking ahead, mirrors, control panels). This is further complicated by the fact that most information gathered during active gaze is highly dynamic (e.g., other vehicles on the street, changes of street direction). Hence, while looking at one of the targets, the uncertainty regarding the others increases. Crucially, we manage to distribute our attention successfully despite omnipresent stochastic changes in our surroundings. The mechanisms by which the brain schedules our visual system to access the information we need exactly when we need it are far from understood.

In a dynamic world, humans not only have to decide where to look but also when to direct their gaze to potentially informative locations in the visual scene. Our foveated visual apparatus can gather high-resolution information only within a limited area of the visual field. As a consequence, in a changing environment, we constantly and inevitably lose information about the locations not currently brought into focus. Little is known about how the timing of eye movements is related to environmental regularities and how gaze strategies are learned. This is due to three main reasons. First, to relate the scheduling of eye movements to stochastic environmental dynamics, we need access to those statistics; however, they are usually unknown. Second, to apply the powerful framework of statistical learning theory, we require knowledge of the subject's current goals; during everyday tasks, the goal structure can be complex, multi-dimensional, and only partially accessible. Third, the computational problem is, in general, intractable: it usually involves learning sequences of eye movements, rather than a single action, from delayed rewards under temporal and spatial uncertainty that is further amplified by dynamic changes in the environment.

In the present thesis, we propose an experimental paradigm specifically designed to target these problems. First, we use simple stimuli with reduced spatial complexity and controlled stochastic behavior. Second, we give subjects explicit task instructions. Finally, the temporal and spatial statistics are designed in a way that significantly simplifies computation and makes it possible to infer several human properties from the action sequences while still using normative models of behavior. We present results from four studies that show how this approach can be used to gain insights into the temporal structure of human gaze selection. In a controlled setting in which the crucial quantities are known, we show how environmental dynamics are learned and used to control several components of the visual apparatus by properly scheduling the time course of actions.
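To make the scheduling problem concrete, here is a minimal toy sketch (our illustration, not a model from the thesis) in which the uncertainty about each of two dynamic targets grows while it is not fixated and shrinks when it is foveated; a scheduler that always fixates the currently most uncertain target already produces gaze timing that reflects the environmental dynamics. All dynamics and noise parameters are invented for illustration.

```python
# Toy sketch: uncertainty about each dynamic target grows while it is not
# fixated; an observation of the fixated target reduces it again.
import numpy as np

Q = np.array([0.30, 0.05])   # hypothetical diffusion rates: target 0 changes faster
R = 0.10                     # hypothetical observation noise of the fovea

var = np.ones(2)             # current posterior variance per target
gaze = []

for t in range(20):
    var += Q                             # uncertainty grows everywhere over time
    k = int(np.argmax(var))              # look where uncertainty is largest
    gaze.append(k)
    var[k] = var[k] * R / (var[k] + R)   # Kalman-style variance reduction at the fovea

print("fixation sequence:", gaze)
# The fast-changing target (0) is revisited more often: gaze timing
# reflects the environmental dynamics, the core theme of the thesis.
```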
First, we investigated how endogenous eye blinks are controlled in the presence of nonstationary environmental demands. Eye blinks are linked to dopamine and have therefore been used as a behavioral marker for many internal cognitive processes; they also introduce gaps in the stream of visual information. Empirical results had suggested that blinking behavior is 1) affected by the current activity and 2) highly variable between participants. We present a computational approach that quantifies the relationship between blinking behavior and environmental demands. In our psychophysical experiment, we show that blinking is the result of a trade-off between task demands and the internal urge to blink. Crucially, we can predict the temporal dynamics of blinking (i.e., the distribution of interblink intervals) for individual blinking patterns.

Second, we present behavioral data establishing that humans learn to adjust the timing of their eye movements efficiently: more time is spent at locations where meaningful events are short and therefore easily missed. Our computational model further shows how several properties of the visual system determine the timing of gaze. We present a Bayesian learner that fully explains how eye movement patterns change as the event statistics are learned (a toy version is sketched below, after the study summaries). Thus, humans use temporal regularities learned from observations to adjust the scheduling of eye movements in a nearly optimal way. This is a first computational account towards understanding how eye movements are scheduled in natural behavior.

After establishing the connection between temporal eye movement dynamics, reward in the form of task performance, and physiological costs for saccades and endogenous eye blinks, we applied our paradigm to study the variability of temporal eye movement sequences within and across subjects. The experimental design facilitates analyzing the temporal structure of eye movements with full knowledge of the statistics of the environment. Hence, we can quantify the internal beliefs about task-relevant properties and study how they, in combination with physiological costs, contribute to the variability in gaze sequences. Crucially, we developed a visual monitoring task in which a subject is confronted with the same stimulus dynamics multiple times while learning effects are kept to a minimum. Hence, we can compute not only the variability between subjects but also the variability over trials of the same subject. We present behavioral data and results from our computational model showing how the variability of eye movement sequences is related to task properties. Having access to the subjects' reward structure, we are able to show how expected rewards influence the variance in visual behavior.

Finally, we studied the computational properties underlying the control of eye movement sequences in a visual search task. In particular, we investigated whether eye movements are planned. Research in psychology has revealed that sequences of multiple eye movements can be prepared jointly as a scanpath. Here we examine whether humans are capable of finding the optimal scanpath even if doing so requires incorporating more than just the next eye movement into the decision. For a visual search task, we derive an ideal observer as well as an ideal planner based on the framework of partially observable Markov decision processes (POMDPs). The former always takes the action associated with the maximum immediate reward, while the latter maximizes the total sum of rewards over the whole action sequence.
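As promised above, here is a toy sketch of the kind of Bayesian learner described for the second study. The setup and all numbers are our own assumptions for illustration, not the thesis's actual task: event durations are exponentially distributed with unknown per-location rates, a conjugate Gamma prior is updated from observed durations, and gaze time is then shared in proportion to how easily events at each location are missed.

```python
# Toy sketch (assumed setup): learn event durations at two monitored
# locations, then give more gaze time to the location with shorter events.
import numpy as np

rng = np.random.default_rng(1)
true_mean_duration = np.array([0.5, 2.0])   # location 0 has short events (made up)

a = np.ones(2)   # Gamma shape per location (prior over each event *rate*)
b = np.ones(2)   # Gamma rate parameter per location

for _ in range(50):                                # observe one event per location
    durations = rng.exponential(true_mean_duration)
    a += 1.0                                       # conjugate update for the
    b += durations                                 # exponential likelihood

mean_duration = b / (a - 1.0)    # posterior mean event duration (needs a > 1)
weights = 1.0 / mean_duration    # easily missed events earn more gaze time
share = weights / weights.sum()

print("learned mean durations:", mean_duration.round(2))   # ~[0.5, 2.0]
print("gaze share per location:", share.round(2))          # ~[0.8, 0.2]
```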
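The observer/planner distinction of the fourth study can likewise be illustrated with a toy search task; the five-cell display, acuity window, and target prior below are hypothetical, not the thesis's stimuli. The greedy ideal observer maximizes the immediate probability of finding the target, while the ideal planner optimizes the whole two-fixation sequence, and the two choose different first fixations.

```python
# Toy sketch: greedy ideal observer vs. ideal planner in a tiny search task.
from itertools import product

PRIOR = [0.14, 0.22, 0.24, 0.20, 0.20]   # illustrative target prior per cell
N, HORIZON = len(PRIOR), 2

def window(f):
    """Cells resolved by fixating cell f (fovea covers f and its neighbors)."""
    return {c for c in (f - 1, f, f + 1) if 0 <= c < N}

def mass(seq):
    """Probability of finding the target with this fixation sequence."""
    covered = set().union(*(window(f) for f in seq))
    return sum(PRIOR[c] for c in covered)

# Ideal observer: pick each fixation to maximize the immediate gain.
greedy, covered = [], set()
for _ in range(HORIZON):
    f = max(range(N), key=lambda f: sum(PRIOR[c] for c in window(f) - covered))
    greedy.append(f)
    covered |= window(f)

# Ideal planner: brute-force search over all whole fixation sequences.
planner = max(product(range(N), repeat=HORIZON), key=mass)

print("observer:", greedy, "finds target with p =", round(mass(greedy), 2))
print("planner: ", list(planner), "finds target with p =", round(mass(planner), 2))
```

In this toy configuration the observer starts at the highest-yield cell (2) but covers less total probability mass than the planner, which starts at cell 1; this is the kind of first-fixation difference examined in the study.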
We show that, depending on the shape of the search region, the ideal planner and the ideal observer lead to different scanpaths. Following this paradigm, we found evidence that humans are indeed capable of planning scanpaths: the ideal planner explained our subjects' behavior better than the ideal observer. In particular, the location of the first fixation differed depending on the shape and the time available for the search, a characteristic well predicted by the ideal planner but not by the ideal observer. Overall, our results are the first evidence that our visual system is capable of taking into account future consequences beyond the immediate reward when choosing the next fixation target.

In summary, this thesis proposes an experimental paradigm that enables us to study the temporal structure of eye movements in dynamic environments. While approaching this computationally is generally intractable, we reduce the complexity of the stimuli along dimensions that do not contribute to the temporal effects. As a consequence, we can collect eye movement data in tasks with a rich temporal structure while being able to compute the internal beliefs of our subjects in a way that is not possible for natural stimuli. We present four studies that show how this paradigm can lead to new insights into several properties of the visual system.

Our findings have several implications for future work. First, we established several factors that play a crucial role in the generation of gaze behavior and have to be accounted for when describing the temporal dynamics of eye movements. Second, future models of eye movements should take into account that delayed rewards can affect behavior. Third, the relationship between behavioral variability and properties of the reward structure is not limited to eye movements; instead, it is a general prediction of the computational framework. Therefore, future work can use this approach to study the variability of various other actions. Our computational models also have applications in state-of-the-art technology. For example, blink rates are already utilized in vigilance systems for drivers; our computational model describes the temporal statistics of blinking behavior beyond simple blink rates and also accounts for interindividual differences in eye physiology. Using algorithms that can deal with natural images, e.g., deep neural networks, the environmental statistics can be extracted, and our models can then be used to predict eye movements in daily situations such as driving a vehicle.
Item type: | Dissertation
---|---
Published: | 2019
Author(s): | Hoppe, David
Type of entry: | First publication
Title: | Eye movements in dynamic environments
Language: | English
Referees: | Rothkopf, Prof. Constantin A. ; Lengyel, Prof. Mate
Year of publication: | 2019
Place of publication: | Darmstadt
Date of oral examination: | 2 May 2019
URL / URN: | https://tuprints.ulb.tu-darmstadt.de/8817
URN: | urn:nbn:de:tuda-tuprints-88171
Dewey Decimal Classification (DDC): | 100 Philosophy and psychology > 150 Psychology
Department(s): | 03 Department of Human Sciences > Institute of Psychology > Psychology of Information Processing
Date deposited: | 30 Jun 2019 19:55
Last modified: | 30 Jun 2019 19:55