
D-ID-Net: Two-Stage Domain and Identity Learning for Identity-Preserving Image Generation From Semantic Segmentation

Damer, Naser and Boutros, Fadi and Kirchbuchner, Florian and Kuijper, Arjan (2019):
D-ID-Net: Two-Stage Domain and Identity Learning for Identity-Preserving Image Generation From Semantic Segmentation.
pp. 3677-3682, 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea (South), 27.-28. Oct., 2019, DOI: 10.1109/ICCVW.2019.00454,
[Conference or Workshop Item]

Abstract

Training functionality-demanding AR/VR systems requires accurate and robust gaze estimation and tracking solutions. Achieving such performance requires diverse eye image data that might only be acquired by means of image generation. Previous works addressing the generation of such images did not target realistic and identity-specific images, nor did they address the practically relevant case of generation from semantic labels. Therefore, this work proposes a solution to generate realistic and identity-specific images that correspond to semantic labels, given samples of a specific identity. Our proposed solution consists of two stages. In the first stage, a network is trained to transform the semantic label into a corresponding eye image of a generic identity. The second stage is an identity-specific network that induces identity details on the generic eye image. The results of our D-ID-Net solution show a high degree of identity preservation and similarity to the ground-truth images, with an RMSE of 7.235.
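The two-stage pipeline from the abstract can be sketched as a composition of a domain network (semantic label to generic eye image) and an identity network (generic image plus identity information to identity-specific image), evaluated with RMSE. This is a minimal illustrative sketch only: the function names, placeholder arithmetic, and identity code are hypothetical stand-ins, not the trained CNNs from the paper.

```python
import numpy as np

def stage1_domain_net(semantic_label: np.ndarray) -> np.ndarray:
    """Stage 1 (hypothetical stub): map a semantic segmentation
    label map to a generic-identity eye image."""
    # A trained generator network would go here; we just scale class
    # indices into the grayscale intensity range as a placeholder.
    return semantic_label.astype(np.float32) * 80.0

def stage2_identity_net(generic_image: np.ndarray,
                        identity_code: np.ndarray) -> np.ndarray:
    """Stage 2 (hypothetical stub): induce identity-specific details
    on the generic eye image."""
    # A trained identity-conditioned refinement network would go here.
    return np.clip(generic_image + identity_code, 0.0, 255.0)

def rmse(pred: np.ndarray, target: np.ndarray) -> float:
    """Root-mean-square error, the metric reported in the abstract."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Toy usage: a 4x4 "label map" with three semantic classes and a
# zero identity code (both purely illustrative).
label = np.random.randint(0, 3, size=(4, 4))
identity = np.zeros((4, 4), dtype=np.float32)
generated = stage2_identity_net(stage1_domain_net(label), identity)
```

The key design point reflected here is the decoupling: stage 1 solves the domain translation (labels to realistic images) once, while stage 2 specializes the output per identity, so only the second network needs identity-specific samples.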

Item Type: Conference or Workshop Item
Published: 2019
Creators: Damer, Naser and Boutros, Fadi and Kirchbuchner, Florian and Kuijper, Arjan
Title: D-ID-Net: Two-Stage Domain and Identity Learning for Identity-Preserving Image Generation From Semantic Segmentation
Language: English
Uncontrolled Keywords: Biometrics; Head mounted displays; Image generation
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
20 Department of Computer Science > Mathematical and Applied Visual Computing
Event Title: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
Event Location: Seoul, Korea (South)
Event Dates: 27.-28. Oct., 2019
Date Deposited: 17 Apr 2020 10:13
DOI: 10.1109/ICCVW.2019.00454
Official URL: https://ieeexplore.ieee.org/document/9021978