Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation

Ryali, Chaitanya ; Reddy, Gautam ; Yu, Angela J (2018)
Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation.
Advances in Neural Information Processing Systems 31. Palais des Congrès de Montréal (02.12. - 08.12.2018)
Conference publication, Bibliography

Abstract

Understanding how humans and animals learn about statistical regularities in stable and volatile environments, and utilize these regularities to make predictions and decisions, is an important problem in neuroscience and psychology. Using a Bayesian modeling framework, specifically the Dynamic Belief Model (DBM), it has previously been shown that humans tend to make the default assumption that environmental statistics undergo abrupt, unsignaled changes, even when environmental statistics are actually stable. Because exact Bayesian inference in this setting, an example of switching state space models, is computationally intensive, a number of approximately Bayesian and heuristic algorithms have been proposed to account for learning/prediction in the brain. Here, we examine a neurally plausible algorithm, a special case of leaky integration dynamics we denote as EXP (for exponential filtering), that is significantly simpler than all previously suggested algorithms except for the delta-learning rule, and which far outperforms the delta rule in approximating Bayesian prediction performance. We derive the theoretical relationship between DBM and EXP, and show that EXP gains computational efficiency by foregoing the representation of inferential uncertainty (as does the delta rule), but that it nevertheless achieves near-Bayesian performance due to its ability to incorporate a "persistent prior" influence unique to DBM and absent from the other algorithms. Furthermore, we show that EXP is comparable to DBM but better than all other models in reproducing human behavior in a visual search task, suggesting that human learning and prediction also incorporate an element of persistent prior. More broadly, our work demonstrates that when observations are information-poor, detecting changes or modulating the learning rate is both difficult and (thus) unnecessary for making Bayes-optimal predictions.
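The abstract's contrast between the delta rule and EXP can be made concrete with a short sketch. The Python below is illustrative only, not the authors' code: the parameter names (eta, lam, p0) and the exact form of the persistent-prior constant are assumptions, chosen so that predictions remain valid probabilities. The key difference is that EXP relaxes toward a fixed prior p0 rather than toward the running average alone.

import numpy as np

def delta_rule(x, epsilon=0.1, p0=0.5):
    """Delta rule: p <- p + epsilon * (x - p); leaky integration
    with no persistent-prior term."""
    p, preds = p0, []
    for xt in x:
        preds.append(p)          # predict before observing x_t
        p += epsilon * (xt - p)  # update toward the observation
    return np.array(preds)

def exp_filter(x, eta=0.1, lam=0.85, p0=0.5):
    """EXP (sketch): exponential filtering of past observations plus a
    constant bias toward the prior p0 ('persistent prior'):
        p_{t+1} = eta * x_t + lam * p_t + (1 - lam - eta) * p0
    The constant term is an assumed parameterization, not the paper's
    exact one; it pulls every prediction back toward p0."""
    p, preds = p0, []
    for xt in x:
        preds.append(p)
        p = eta * xt + lam * p + (1.0 - lam - eta) * p0
    return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.binomial(1, 0.7, size=200)  # stable environment, true rate 0.7
    print("delta rule, last prediction:", delta_rule(x)[-1])
    print("EXP filter, last prediction:", exp_filter(x)[-1])

Running the sketch shows EXP's predictions settling slightly below the true rate of 0.7, reflecting the persistent pull toward the prior that the abstract identifies as the feature shared with DBM and absent from the delta rule.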

Type of entry: Conference publication
Published: 2018
Author(s): Ryali, Chaitanya ; Reddy, Gautam ; Yu, Angela J
Type of record: Bibliography
Title: Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation
Language: English
Year of publication: 2018
Place: Montréal, Canada
Publisher: Curran Associates, Inc.
Book title: Advances in Neural Information Processing Systems
Series volume: 31
Event title: Advances in Neural Information Processing Systems 31
Event venue: Palais des Congrès de Montréal
Event dates: 02.12. - 08.12.2018
URL / URN: https://proceedings.neurips.cc/paper_files/paper/2018/hash/7...
Department(s): 03 Department of Human Sciences
03 Department of Human Sciences > Institute of Psychology
Date deposited: 10 Jan 2024 18:17
Last modified: 10 Jan 2024 18:17