Explainability of the Implications of Supervised and Unsupervised Face Image Quality Estimations Through Activation Map Variation Analyses in Face Recognition Models

Fu, Biying ; Damer, Naser (2022)
Explainability of the Implications of Supervised and Unsupervised Face Image Quality Estimations Through Activation Map Variation Analyses in Face Recognition Models.
IEEE/CVF Winter Conference on Applications of Computer Vision. Virtual conference (04.–08.01.2022)
doi: 10.1109/WACVW54805.2022.00041
Conference publication, Bibliography

Abstract

It is challenging to derive explainability for unsupervised or statistics-based face image quality assessment (FIQA) methods. In this work, we propose a novel set of explainability tools to derive reasoning for different FIQA decisions and their face recognition (FR) performance implications. We avoid limiting the deployment of our tools to certain FIQA methods by basing our analyses on the behavior of FR models when processing samples with different FIQA decisions. This leads to explainability tools that can be applied to any FIQA method with any CNN-based FR solution, using activation mapping to exhibit the network's activation derived from the face embedding. To avoid the low discrimination between the general spatial activation mappings of low- and high-quality images in FR models, we build our explainability tools in a higher derivative space by analyzing the variation of the FR activation maps of image sets with different quality decisions. We demonstrate our tools and analyze the findings on four FIQA methods, presenting inter- and intra-FIQA method analyses. Our proposed tools and the analyses based on them point out, among other conclusions, that high-quality images typically cause consistently low activation in the areas outside of the central face region, while low-quality images, despite generally low activation, show high variations of activation in such areas. Our explainability tools also extend to analyzing single images, where we show that low-quality images tend to have an FR model spatial activation that strongly differs from what is expected of a high-quality image. This difference also tends to appear more in areas outside of the central face region and corresponds to issues such as extreme poses and facial occlusions. The implementation of the proposed tools is available at https://github.com/fbiying87/Explainable_FIQA_WITH_AMVA.
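
To make the idea of activation map variation analysis concrete, below is a minimal, hypothetical PyTorch sketch of the kind of analysis the abstract describes: deriving a spatial activation map per image from a convolutional layer of an FR model, then measuring the pixel-wise variation of those maps over an image set sharing one FIQA quality decision. The function names, the choice of hooked layer, and the channel-wise-mean activation mapping are illustrative assumptions, not the authors' implementation (which is in the linked repository).

```python
# Hypothetical sketch of activation map variation analysis; not the
# authors' implementation. Assumes a CNN-based FR model and a chosen
# convolutional layer whose output is an (N, C, H, W) feature map.
import torch


def spatial_activation_maps(model: torch.nn.Module,
                            layer: torch.nn.Module,
                            images: torch.Tensor) -> torch.Tensor:
    """One HxW activation map per image: the channel-wise mean of the
    chosen layer's output (an assumed, simple activation mapping)."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["fmap"] = output.detach()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(images)  # run the FR model; only the hooked feature map is needed
    handle.remove()

    maps = captured["fmap"].mean(dim=1)  # (N, H, W)
    # Min-max normalize each map so maps are comparable across images.
    flat = maps.flatten(start_dim=1)
    lo = flat.min(dim=1, keepdim=True).values
    hi = flat.max(dim=1, keepdim=True).values
    return ((flat - lo) / (hi - lo + 1e-8)).view_as(maps)


def activation_map_variation(model: torch.nn.Module,
                             layer: torch.nn.Module,
                             image_set: torch.Tensor) -> torch.Tensor:
    """Pixel-wise standard deviation of the activation maps over a set
    of images sharing one FIQA quality decision (e.g. all 'low')."""
    return spatial_activation_maps(model, layer, image_set).std(dim=0)
```

Following the abstract's findings, comparing the variation map of a set judged low quality against that of a set judged high quality would be expected to show markedly higher variation outside the central face region for the low-quality set.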

Item type: Conference publication
Published: 2022
Author(s): Fu, Biying ; Damer, Naser
Type of entry: Bibliography
Title: Explainability of the Implications of Supervised and Unsupervised Face Image Quality Estimations Through Activation Map Variation Analyses in Face Recognition Models
Language: English
Publication date: 15 February 2022
Publisher: IEEE
Book title: Proceedings: 2022 IEEE Winter Conference on Applications of Computer Vision Workshops
Event title: IEEE/CVF Winter Conference on Applications of Computer Vision
Event location: Virtual conference
Event dates: 04.–08.01.2022
DOI: 10.1109/WACVW54805.2022.00041
URL / URN: https://doi.org/10.1109/WACVW54805.2022.00041
Uncontrolled keywords: Biometrics, Face recognition, Quality estimation, Deep learning, Explainability
Division(s): 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
Date deposited: 03 Mar 2022 09:20
Last modified: 03 Mar 2022 09:20