
D-ID-Net: Two-Stage Domain and Identity Learning for Identity-Preserving Image Generation From Semantic Segmentation

Damer, Naser ; Boutros, Fadi ; Kirchbuchner, Florian ; Kuijper, Arjan (2019)
D-ID-Net: Two-Stage Domain and Identity Learning for Identity-Preserving Image Generation From Semantic Segmentation.
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). Seoul, Korea (South) (27.10.2019-28.10.2019)
doi: 10.1109/ICCVW.2019.00454
Conference publication, Bibliography

Abstract

Training functionality-demanding AR/VR systems requires accurate and robust gaze estimation and tracking solutions. Achieving such performance requires the availability of diverse eye image data that might only be acquired by means of image generation. Previous works addressing the generation of such images neither targeted realistic and identity-specific images nor addressed the practically relevant case of generation from semantic labels. Therefore, this work proposes a solution to generate realistic and identity-specific images that correspond to semantic labels, given samples of a specific identity. Our proposed solution consists of two stages. In the first stage, a network is trained to transform the semantic label into a corresponding eye image of a generic identity. The second stage is an identity-specific network that induces identity details on the generic eye image. The results of our D-ID-Net solution show a high degree of identity preservation and similarity to the ground-truth images, with an RMSE of 7.235.
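The two-stage data flow described in the abstract can be sketched abstractly. The stage functions below are hypothetical placeholders for the paper's trained networks (the actual stages are learned CNNs, not these toy mappings); the sketch only illustrates the composition of the two stages and the RMSE metric used to report similarity to the ground truth.

```python
import numpy as np

def stage1_domain_network(semantic_label):
    """Hypothetical stand-in for the stage-1 network: maps a semantic
    segmentation map to a generic-identity eye image (here a trivial
    label-index-to-gray-level mapping)."""
    return semantic_label.astype(np.float32) / semantic_label.max() * 255.0

def stage2_identity_network(generic_image, identity_offset):
    """Hypothetical stand-in for the stage-2 network: induces
    identity-specific details on the generic image (here a simple
    additive offset, clipped to the valid intensity range)."""
    return np.clip(generic_image + identity_offset, 0.0, 255.0)

def rmse(prediction, ground_truth):
    """Root-mean-square error, the similarity metric reported in the paper."""
    return float(np.sqrt(np.mean((prediction - ground_truth) ** 2)))

# Toy example: a 4x4 "segmentation map" with three semantic classes.
label = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 2],
                  [1, 1, 2, 2],
                  [1, 2, 2, 2]])
generic = stage1_domain_network(label)                       # stage 1
personalized = stage2_identity_network(generic, 3.0)          # stage 2
print(rmse(personalized, generic))
```

The point of the two-stage split is that stage 1 solves the domain translation (labels to images) once for a generic identity, while the lighter stage 2 only has to learn identity-specific appearance details.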

Type of entry: Conference publication
Published: 2019
Author(s): Damer, Naser ; Boutros, Fadi ; Kirchbuchner, Florian ; Kuijper, Arjan
Kind of entry: Bibliography
Title: D-ID-Net: Two-Stage Domain and Identity Learning for Identity-Preserving Image Generation From Semantic Segmentation
Language: English
Year of publication: 2019
Event title: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
Event location: Seoul, Korea (South)
Event dates: 27.10.2019-28.10.2019
DOI: 10.1109/ICCVW.2019.00454
URL / URN: https://ieeexplore.ieee.org/document/9021978
Uncontrolled keywords: Biometrics; Head-mounted displays; Image generation
Department(s)/division(s): 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
20 Department of Computer Science > Mathematical and Applied Visual Computing
Date deposited: 17 Apr 2020 10:13
Last modified: 17 Apr 2020 10:13