
Deep learning-based pupil model predicts time and spectral dependent light responses

Zandi, Babak ; Khanh, Tran Quoc (2022)
Deep learning-based pupil model predicts time and spectral dependent light responses.
In: Scientific Reports, 2022, 11
doi: 10.26083/tuprints-00021202
Article, secondary publication, publisher's version

Warning: A newer version of this entry is available.

Abstract

Although research has produced significant findings on the neurophysiological processes behind the pupillary light reflex, the temporal prediction of the pupil diameter triggered by polychromatic or chromatic stimulus spectra is still not possible. State-of-the-art pupil models are limited to estimating a static diameter at the equilibrium state for spectra along the Planckian locus. Neither the temporal receptor weighting nor the spectrally dependent adaptation behaviour of the afferent pupil control path is captured by such functions. Here we propose a deep learning-driven concept of a pupil model, which reconstructs the pupil's time course either from photometric and colourimetric or from receptor-based stimulus quantities. By merging feed-forward neural networks with a biomechanical differential equation, we predict the temporal pupil light response with a mean absolute error below 0.1 mm for polychromatic (2007 ± 1 K, 4983 ± 3 K, 10,138 ± 22 K) and chromatic spectra (450 nm, 530 nm, 610 nm, 660 nm) at 100.01 ± 0.25 cd/m². This non-parametric and self-learning concept could open the door to a generalized description of pupil behaviour.
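The core idea of the abstract (feed-forward networks that map stimulus quantities to the parameters of a pupil-dynamics differential equation, which is then integrated to obtain the diameter time course) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input features, layer sizes, parameter ranges and the generic first-order relaxation ODE used here in place of the biomechanical model are all assumptions, and the network is untrained.

# Minimal sketch (not the published model): a small feed-forward network maps
# stimulus descriptors to parameters of a simple pupil-dynamics ODE, which is
# then integrated to yield a pupil-diameter time course.
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    """Random, untrained weights for a small feed-forward network (illustrative)."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Plain ReLU MLP; the final layer is linear."""
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)
    return h

# Hypothetical stimulus features: luminance [cd/m^2], CIE x, CIE y
# (receptor-based quantities would be an alternative input, as noted in the abstract).
features = np.array([100.0, 0.33, 0.34])

# The network predicts two ODE parameters: equilibrium diameter D_eq [mm] and time constant tau [s].
net = mlp_init([3, 16, 16, 2])
d_eq_raw, tau_raw = mlp_forward(net, features)
d_eq = 2.0 + 5.0 / (1.0 + np.exp(-d_eq_raw))   # squash into a plausible 2-7 mm range
tau = 0.5 + 2.0 / (1.0 + np.exp(-tau_raw))     # squash into a plausible 0.5-2.5 s range

# Generic first-order relaxation ODE as a stand-in for the biomechanical model:
# dD/dt = (D_eq - D) / tau, integrated with forward Euler.
dt, t_end = 0.01, 10.0
d = 6.5  # assumed dark-adapted starting diameter [mm]
trace = []
for _ in range(int(t_end / dt)):
    d += dt * (d_eq - d) / tau
    trace.append(d)

print(f"predicted equilibrium diameter: {d_eq:.2f} mm, time constant: {tau:.2f} s")
print(f"diameter after {t_end:.0f} s: {trace[-1]:.2f} mm")

In the published concept the network is trained on measured pupil traces so that the reconstructed time course matches the data; the squashing ranges and the Euler integration above are merely placeholders for that training and for the biomechanical equation.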

Item type: Article
Published: 2022
Author(s): Zandi, Babak ; Khanh, Tran Quoc
Type of entry: Secondary publication
Title: Deep learning-based pupil model predicts time and spectral dependent light responses
Language: English
Year of publication: 2022
Place of publication: Darmstadt
Publication date of the primary publication: 2022
Publisher: Springer Nature
Journal, newspaper or series title: Scientific Reports
Volume: 11
Collation: 16 pages
DOI: 10.26083/tuprints-00021202
URL / URN: https://tuprints.ulb.tu-darmstadt.de/21202
Origin: Secondary publication from funded Golden Open Access

Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-212024
Dewey Decimal Classification (DDC): 600 Technology, medicine, applied sciences > 600 Technology
600 Technology, medicine, applied sciences > 620 Engineering and mechanical engineering
Department(s)/field(s): 18 Department of Electrical Engineering and Information Technology
18 Department of Electrical Engineering and Information Technology > Adaptive Lighting Systems and Visual Processing
Date deposited: 04 May 2022 13:49
Last modified: 27 Oct 2023 10:13
