SyPer: Synthetic periocular data for quantized light-weight recognition in the NIR and visible domains

Kolf, Jan Niklas ; Elliesen, Jurek ; Boutros, Fadi ; Proença, Hugo ; Damer, Naser (2023)
SyPer: Synthetic periocular data for quantized light-weight recognition in the NIR and visible domains.
In: Image and Vision Computing, 135
doi: 10.1016/j.imavis.2023.104692
Article, Bibliography

Abstract

Deep-learning-based periocular recognition systems typically use overparameterized deep neural networks, which come with high computational costs and memory requirements. This is especially problematic for mobile and embedded devices in shared-resource environments. To enable model quantization for lightweight periocular recognition in a privacy-aware manner, we propose and release SyPer, a synthetic dataset and generation model of periocular images. To make this possible, we propose performing the knowledge transfer in the quantization process at the embedding level, and thus without identity-labeled data. This not only allows the use of synthetic data for quantization, but also allows the quantization to be performed on different domains, additionally boosting performance in new domains. In a variety of experiments on a diverse set of model backbones, we demonstrate the ability to build compact and accurate models through embedding-level knowledge transfer using synthetic data. We also successfully demonstrate embedding-level knowledge transfer for quantized models in the near-infrared domain, achieving accurate and efficient periocular recognition on near-infrared images. The SyPer dataset, together with the evaluation protocol, the training code, and model checkpoints, is publicly available at https://github.com/jankolf/SyPer.
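The core idea of embedding-level knowledge transfer during quantization can be sketched in a few lines. The snippet below is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: it assumes a full-precision teacher and a quantized (or quantization-aware) student that both map images to embedding vectors, plus an unlabeled loader of synthetic periocular images. All names (teacher, student, synthetic_loader) and the MSE objective are placeholder assumptions.

import torch
import torch.nn as nn

def distill_embeddings(teacher: nn.Module,
                       student: nn.Module,
                       synthetic_loader,
                       epochs: int = 1,
                       lr: float = 1e-4,
                       device: str = "cpu") -> nn.Module:
    # Hypothetical sketch: align the quantized student's embeddings with
    # the full-precision teacher's embeddings on unlabeled synthetic images.
    teacher.to(device).eval()
    student.to(device).train()
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for _ in range(epochs):
        for images in synthetic_loader:  # batches of images, no identity labels
            images = images.to(device)
            with torch.no_grad():
                target = teacher(images)   # full-precision embeddings
            pred = student(images)         # quantized-model embeddings
            loss = criterion(pred, target) # embedding-level transfer loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student

Because the loss is computed between embeddings rather than identity logits, the training data needs no identity labels, which is what makes purely synthetic images usable for the quantization step.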

Item type: Article
Published: 2023
Author(s): Kolf, Jan Niklas ; Elliesen, Jurek ; Boutros, Fadi ; Proença, Hugo ; Damer, Naser
Type of entry: Bibliography
Title: SyPer: Synthetic periocular data for quantized light-weight recognition in the NIR and visible domains
Language: English
Year of publication: July 2023
Publisher: Elsevier
Journal or publication title: Image and Vision Computing
Journal volume: 135
DOI: 10.1016/j.imavis.2023.104692
Uncontrolled keywords: Biometrics, Face recognition, Quantization, Image generation
Additional information: Art. No. 104692

Division(s): 20 Department of Computer Science
20 Department of Computer Science > Graphisch-Interaktive Systeme
Date deposited: 06 Jun 2023 12:48
Last modified: 06 Jun 2023 12:48