
Fairness in face presentation attack detection

Fang, Meiling ; Yang, Wufei ; Kuijper, Arjan ; Štruc, Vitomir ; Damer, Naser (2024)
Fairness in face presentation attack detection.
In: Pattern Recognition
doi: 10.1016/j.patcog.2023.110002
Article, Bibliography

Abstract

Face recognition (FR) algorithms have been proven to exhibit discriminatory behavior against certain demographic and non-demographic groups, raising ethical and legal concerns about their deployment in real-world scenarios. Despite the growing number of fairness studies in FR, the fairness of face presentation attack detection (PAD) has been overlooked, mainly due to the lack of appropriately annotated data. To avoid and mitigate the potential negative impact of such behavior, it is essential to assess the fairness of face PAD and to develop fair PAD models. To enable fairness analysis in face PAD, we present a Combined Attribute Annotated PAD Dataset (CAAD-PAD) offering seven human-annotated attribute labels. We then comprehensively analyze the fairness of PAD and its relation to the nature of the training data and to the Operational Decision Threshold Assignment (ODTA) across a set of face PAD solutions. Additionally, we propose a novel metric, the Accuracy Balanced Fairness (ABF), that jointly represents both PAD fairness and absolute PAD performance. The experimental results show that female faces and faces with occluding features (e.g., eyeglasses, beards) are relatively less protected than male and non-occluded faces by all PAD solutions. To alleviate this observed unfairness, we propose a plug-and-play data augmentation method, FairSWAP, which disrupts identity/semantic information and encourages models to mine attack clues. Extensive experimental results indicate that FairSWAP leads to better-performing and fairer face PAD in 10 out of 12 investigated cases.
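The abstract describes FairSWAP only at a high level: a plug-and-play data augmentation that disrupts identity/semantic information so that PAD models focus on attack clues rather than on who appears in the image. The following is a minimal illustrative sketch of that general idea in Python/NumPy, assuming a simple horizontal band swap between training samples; the function name region_swap_augment, its parameters, and the band-swapping scheme are hypothetical and do not reproduce the paper's actual FairSWAP implementation.

```python
import numpy as np

def region_swap_augment(batch, num_bands=4, swap_prob=0.5, rng=None):
    """Illustrative region-swap augmentation (not the paper's exact FairSWAP).

    Splits each face image into horizontal bands and, with probability
    swap_prob, replaces one randomly chosen band with the corresponding band
    of another image in the batch. Following the abstract, the intent is to
    disrupt identity/semantic cues so that a PAD model mines attack clues.

    batch: float array of shape (N, H, W, C).
    """
    rng = np.random.default_rng() if rng is None else rng
    out = batch.copy()
    n, h = batch.shape[0], batch.shape[1]
    band_h = h // num_bands

    for i in range(n):
        if rng.random() >= swap_prob:
            continue                       # keep this sample unchanged
        j = int(rng.integers(n))           # partner sample (may equal i)
        b = int(rng.integers(num_bands))   # which horizontal band to swap
        top, bottom = b * band_h, (b + 1) * band_h
        out[i, top:bottom] = batch[j, top:bottom]
    return out

if __name__ == "__main__":
    # Random stand-in "face images": batch of 8 RGB images, 224x224.
    images = np.random.rand(8, 224, 224, 3).astype(np.float32)
    augmented = region_swap_augment(images, num_bands=4, swap_prob=0.5)
    print(augmented.shape)  # (8, 224, 224, 3)
```

Being purely an input-level transformation, such an augmentation can be dropped into an existing PAD training pipeline without changing the model, which is consistent with the plug-and-play characterization in the abstract.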

Item type: Article
Published: 2024
Author(s): Fang, Meiling ; Yang, Wufei ; Kuijper, Arjan ; Štruc, Vitomir ; Damer, Naser
Type of entry: Bibliography
Title: Fairness in face presentation attack detection
Language: English
Year of publication: 2024
Journal or publication title: Pattern Recognition
Volume: 147
DOI: 10.1016/j.patcog.2023.110002
URL / URN: https://doi.org/10.1016/j.patcog.2023.110002

Uncontrolled keywords: Biometrics, Face recognition, Spoofing attacks, Fairness, Information security
Additional information:

Article 110002

Department(s)/Field(s): 20 Department of Computer Science
20 Department of Computer Science > Interactive Graphics Systems
20 Department of Computer Science > Mathematical and Applied Visual Computing
Date deposited: 26 Jun 2024 09:50
Last modified: 06 Aug 2024 07:50
PPN: 520351282