Florian Kirchbuchner

Smart Living & Biometric Technologies
Fraunhofer Institute for Computer Graphics Research IGD

https://researchid.co/floriankirchbuchner
92 Scopus Publications
2425 Scholar Citations
30 Scholar h-index
63 Scholar i10-index

SCOPUS PUBLICATIONS

  • VeinXam: A Low-Cost Deep Veins Function Assessment
    Vincent Abt, Julian von Wilmsdorff, Silvia Faquiri, and Florian Kirchbuchner

    ACM
    Venous insufficiency, to which the deep veins of the lower human extremities are particularly susceptible, can lead to serious diseases such as deep vein thrombosis (DVT), with subsequent risks of severe complications, e.g. pulmonary embolism or post-thrombotic syndrome (PTS) [6]. The current standard procedure to diagnose venous insufficiency is performed exclusively in medical offices and hospitals, in the form of in-patient treatments with special medical equipment. This hurdle for the patient, combined with the often diffuse symptoms of venous insufficiency [7], may lead to a late discovery of diseases such as DVT and increases the risk of secondary diseases as well as treatment costs [3]. To address these issues, we propose a novel method for continuous monitoring of venous function by adapting Light Reflection Rheography (LRR), using low-cost wearable sensor technology and a smartphone app, aiming to deliver critical early-stage information about pathological changes in the blood flow of the lower limbs.
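
    The key LRR readout is the venous refill time after a standardized exercise. Below is a minimal, hypothetical post-processing sketch (sampling rate, threshold, and signal orientation are assumptions, not the VeinXam implementation) that estimates how long the reflected-light signal needs to return close to its pre-exercise baseline.

        import numpy as np

        def venous_refill_time(signal, fs, exercise_end_idx, baseline_window_s=5.0, frac=0.1):
            """Estimate the venous refill time in seconds from a reflected-light signal.

            Sketch only: the refill time is taken as the time after the end of the
            exercise until the signal is back within `frac` of the exercise-induced
            deflection around the pre-exercise baseline.
            """
            baseline = np.mean(signal[: int(baseline_window_s * fs)])   # resting level
            post = np.asarray(signal[exercise_end_idx:], dtype=float)   # recovery phase
            deflection = abs(post[0] - baseline)                        # shift caused by the exercise
            recovered = np.nonzero(np.abs(post - baseline) <= frac * deflection)[0]
            return recovered[0] / fs if recovered.size else float("inf")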

  • Pixel-Level Face Image Quality Assessment for Explainable Face Recognition
    Philipp Terhörst, Marco Huber, Naser Damer, Florian Kirchbuchner, Kiran Raja, and Arjan Kuijper

    Institute of Electrical and Electronics Engineers (IEEE)
    An essential factor in achieving high performance in face recognition systems is the quality of their samples. Since these systems are involved in daily life, there is a strong need to make face recognition processes understandable for humans. In this work, we introduce the concept of pixel-level face image quality, which determines the utility of individual pixels in a face image for recognition. We propose a training-free approach to assess the pixel-level qualities of a face image given an arbitrary face recognition network. To achieve this, a model-specific quality value of the input image is estimated and used to build a sample-specific quality regression model. Based on this model, quality-based gradients are back-propagated and converted into pixel-level quality estimates. In the experiments, we qualitatively and quantitatively investigated the meaningfulness of the proposed pixel-level qualities based on real and artificial disturbances and by comparing the explanation maps on faces that are non-compliant with the ICAO standards. In all scenarios, the results demonstrate that the proposed solution produces meaningful pixel-level qualities, enhancing the interpretability of the overall face image quality. The code is publicly available.
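
    A minimal sketch of the gradient-based idea above (the toy backbone and the scalar quality proxy are placeholders, not the paper's models or its quality regression step): back-propagate a scalar quality estimate to the input image and read the per-pixel gradient magnitudes as a pixel-level quality map.

        import torch
        import torch.nn as nn

        # Toy stand-in for an arbitrary pre-trained face recognition backbone.
        backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 128))

        def pixel_quality_map(image):
            """image: (1, 3, H, W) float tensor; returns an (H, W) quality map."""
            image = image.clone().requires_grad_(True)
            embedding = backbone(image)
            # Hypothetical scalar quality proxy (e.g. embedding magnitude); the paper
            # instead builds a sample-specific quality regression model.
            quality = embedding.norm(dim=1).sum()
            quality.backward()
            return image.grad.abs().mean(dim=1).squeeze(0)

        quality_map = pixel_quality_map(torch.rand(1, 3, 112, 112))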

  • Ubiquitous multi-occupant detection in smart environments
    Daniel Fährmann, Fadi Boutros, Philipp Kubon, Florian Kirchbuchner, Arjan Kuijper, and Naser Damer

    Springer Science and Business Media LLC
    Recent advancements in ubiquitous computing have emphasized the need for privacy-preserving occupancy detection in smart environments to enhance security. This work presents a novel occupancy detection solution utilizing privacy-aware sensing technologies. The solution analyzes time-series data not only to detect occupancy as a binary problem, but also to determine whether one or multiple individuals are present in an indoor environment. On three real-world datasets, our models outperformed various state-of-the-art algorithms, achieving F1-scores of up to 94.91% in single-occupancy detection and a macro F1-score of 91.55% in multi-occupancy detection. This makes our approach a promising solution for improving security in smart environments.

  • Uncertainty-aware Comparison Scores for Face Recognition
    Marco Huber, Philipp Terhörst, Florian Kirchbuchner, Arjan Kuijper, and Naser Damer

    IEEE
    Estimating and understanding uncertainty in face recognition systems is receiving increasing attention as face recognition systems spread worldwide and process privacy- and security-related data. In this work, we investigate how such uncertainties can be further utilized to increase the accuracy of, and therefore the trust in, automatic face recognition systems. We propose to use the uncertainties of extracted face features to compute a new uncertainty-aware comparison score (UACS). This score takes the estimated uncertainty into account during the calculation of the comparison score, leading to a reduction in verification errors. To achieve this, we model the comparison score and its uncertainty as a probability distribution and measure its distance to the distribution of an ideal genuine comparison. In extended experiments with three face recognition models and on six benchmarks, we investigated the impact of our approach and demonstrated its benefits in enhancing the verification performance and the separability of genuine and impostor comparison scores.
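
    A minimal sketch of the distributional idea (the Gaussian model, the ideal-genuine parameters, and the closed-form 2-Wasserstein distance are illustrative assumptions, not the paper's exact formulation): represent a comparison score and its uncertainty as a 1D Gaussian and rank pairs by their distance to an assumed ideal genuine distribution.

        import numpy as np

        def uacs(score_mean, score_std, ideal_mean=1.0, ideal_std=0.02):
            """Uncertainty-aware comparison score sketch: the negative 2-Wasserstein
            distance between N(score_mean, score_std^2) and an assumed ideal genuine
            distribution N(ideal_mean, ideal_std^2); higher means more genuine-like."""
            return -np.sqrt((score_mean - ideal_mean) ** 2 + (score_std - ideal_std) ** 2)

        # Two pairs with the same raw score: the low-uncertainty pair ranks higher.
        print(uacs(0.62, 0.01), uacs(0.62, 0.20))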

  • QMagFace: Simple and Accurate Quality-Aware Face Recognition
    Philipp Terhörst, Malte Ihlefeld, Marco Huber, Naser Damer, Florian Kirchbuchner, Kiran Raja, and Arjan Kuijper

    IEEE
    In this work, we propose QMagFace, a simple and effective face recognition solution that combines a quality-aware comparison score with a recognition model based on a magnitude-aware angular margin loss. The proposed approach includes model-specific face image qualities in the comparison process to enhance the recognition performance under unconstrained circumstances. Exploiting the linearity between the qualities and their comparison scores induced by the utilized loss, our quality-aware comparison function is simple and highly generalizable. The experiments conducted on several face recognition databases and benchmarks demonstrate that the introduced quality-awareness leads to consistent improvements in the recognition performance. Moreover, the proposed QMagFace approach performs especially well under challenging circumstances, such as cross-pose, cross-age, or cross-quality comparisons. Consequently, it leads to state-of-the-art performance on several face recognition benchmarks, such as 98.50% on AgeDB, 83.95% on XQLFW, and 98.74% on CFP-FP. The code for QMagFace is publicly available.
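
    A minimal sketch in the spirit of the linearity argument above (the linear weighting and its parameters are illustrative assumptions, not the exact QMagFace comparison function): adjust the cosine score by a linear function of the lower of the two sample qualities.

        import numpy as np

        def quality_aware_score(emb_a, emb_b, q_a, q_b, alpha=0.1, beta=0.0):
            """Hypothetical quality-aware comparison: cosine similarity plus a linear
            term driven by the weaker of the two model-specific sample qualities."""
            cos = float(np.dot(emb_a, emb_b) /
                        (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
            return cos + alpha * min(q_a, q_b) + beta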

  • Masked face recognition: Human versus machine
    Naser Damer, Fadi Boutros, Marius Süßmilch, Meiling Fang, Florian Kirchbuchner, and Arjan Kuijper

    Institution of Engineering and Technology (IET)

  • Lightweight Long Short-Term Memory Variational Auto-Encoder for Multivariate Time Series Anomaly Detection in Industrial Control Systems
    Daniel Fährmann, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper

    MDPI AG
    Heterogeneous cyberattacks against industrial control systems (ICSs) have had a strong impact on the physical world in recent decades. Connecting devices to the internet opens up new attack surfaces for attackers. Intrusions into ICSs, such as the manipulation of industrial sensor or actuator data, can cause anomalous ICS behavior. This poses a threat to infrastructure that is critical for the operation of a modern city. Nowadays, the best techniques for detecting anomalies in ICSs are based on machine learning and, more recently, deep learning. Cybersecurity in ICSs is still an emerging field, and industrial datasets that can be used to develop anomaly detection techniques are rare. In this paper, we propose an unsupervised deep learning methodology for anomaly detection in ICSs, specifically a lightweight long short-term memory variational auto-encoder (LW-LSTM-VAE) architecture. We successfully demonstrate our solution on two ICS applications, namely water purification and water distribution plants. Our proposed method proves to be efficient in detecting anomalies in these applications and improves upon reconstruction-based anomaly detection methods presented in previous work. For example, we successfully detected 82.16% of the anomalies in the scenario of the widely used Secure Water Treatment (SWaT) benchmark. The deep learning architecture we propose has the added advantage of being extremely lightweight.
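
    A minimal reconstruction-based sketch in the spirit of the approach (layer sizes, wiring, and the scoring rule are assumptions, not the paper's LW-LSTM-VAE): encode a multivariate sensor window with an LSTM, sample a small latent code, decode, and use the reconstruction error as the anomaly score.

        import torch
        import torch.nn as nn

        class TinyLSTMVAE(nn.Module):
            """Hypothetical lightweight LSTM-VAE for multivariate sensor windows."""
            def __init__(self, n_features, hidden=32, latent=8):
                super().__init__()
                self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
                self.to_mu = nn.Linear(hidden, latent)
                self.to_logvar = nn.Linear(hidden, latent)
                self.from_latent = nn.Linear(latent, hidden)
                self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
                self.out = nn.Linear(hidden, n_features)

            def forward(self, x):                          # x: (batch, time, n_features)
                _, (h, _) = self.encoder(x)
                mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
                dec_in = self.from_latent(z).unsqueeze(1).repeat(1, x.size(1), 1)
                dec_out, _ = self.decoder(dec_in)
                return self.out(dec_out), mu, logvar

        def anomaly_score(model, x):
            recon, _, _ = model(x)
            return ((x - recon) ** 2).mean(dim=(1, 2))     # flag windows above a threshold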

  • Self-restrained triplet loss for accurate masked face recognition
    Fadi Boutros, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper

    Elsevier BV
    Using the face as a biometric identity trait is motivated by the contactless nature of the capture process and the high accuracy of the recognition algorithms. During the COVID-19 pandemic, wearing a face mask has been imposed in public places to keep the pandemic under control. However, face occlusion due to wearing a mask presents an emerging challenge for face recognition systems. In this paper, we present a solution to improve masked face recognition performance. Specifically, we propose the Embedding Unmasking Model (EUM), operated on top of existing face recognition models. We also propose a novel loss function, the Self-restrained Triplet (SRT) loss, which enables the EUM to produce embeddings similar to those of unmasked faces of the same identities. The evaluation results on three face recognition models, two real masked datasets, and two synthetically generated masked face datasets prove that our proposed approach significantly improves the performance in most experimental settings.

  • Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models
    Fadi Boutros, Naser Damer, Kiran Raja, Florian Kirchbuchner, and Arjan Kuijper

    MDPI AG
    This work addresses the challenge of building an accurate and generalizable periocular recognition model with a small number of learnable parameters. Deeper (larger) models are typically more capable of learning complex information. For this reason, knowledge distillation (KD) was previously proposed to carry this knowledge from a large model (teacher) into a small model (student). Conventional KD optimizes the student output to be similar to the teacher output (commonly the classification output). In biometrics, comparison (verification) and storage operations are conducted on biometric templates, extracted from pre-classification layers. In this work, we propose a novel template-driven KD approach that optimizes the distillation process so that the student model learns to produce templates similar to those produced by the teacher model. We demonstrate our approach on intra- and cross-device periocular verification. Our results demonstrate the superiority of our proposed approach over a network trained without KD and networks trained with conventional (vanilla) KD. For example, the targeted small model achieved an equal error rate (EER) of 22.2% on cross-device verification without KD. The same model achieved an EER of 21.9% with conventional KD, and only 14.7% EER when using our proposed template-driven KD.
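
    A minimal sketch of a template-level distillation objective (the MSE term and its weighting are assumptions; the paper defines its own template-driven KD loss): train the student on the usual classification loss plus a term that pulls its templates toward those of a frozen teacher.

        import torch
        import torch.nn.functional as F

        def template_kd_loss(student_emb, teacher_emb, logits, labels, lam=1.0):
            """Classification loss plus a template-matching term between the student's
            embeddings and the (detached) teacher's embeddings."""
            cls = F.cross_entropy(logits, labels)
            kd = F.mse_loss(student_emb, teacher_emb.detach())
            return cls + lam * kd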

  • Real masks and spoof faces: On the masked face presentation attack detection
    Meiling Fang, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper

    Elsevier BV
    Face masks have become one of the main methods for reducing the transmission of COVID-19. This makes face recognition (FR) a challenging task because masks hide several discriminative features of faces. Moreover, face presentation attack detection (PAD) is crucial to ensure the security of FR systems. In contrast to the growing number of masked FR studies, the impact of face masked attacks on PAD has not been explored. Therefore, we present novel attacks with real face masks placed on presentations and attacks with subjects wearing masks to reflect the current real-world situation. Furthermore, this study investigates the effect of masked attacks on PAD performance by using seven state-of-the-art PAD algorithms under different experimental settings. We also evaluate the vulnerability of FR systems to masked attacks. The experiments show that real masked attacks pose a serious threat to the operation and security of FR systems.

  • Lightweight Periocular Recognition through Low-bit Quantization
    Jan Niklas Kolf, Fadi Boutros, Florian Kirchbuchner, and Naser Damer

    IEEE
    Deep learning-based systems for periocular recognition benefit from the high recognition performance of neural networks, which, however, comes with high computational costs and memory footprints. This can lead to deployability problems, especially on mobile devices and in embedded systems. A few previous works strived towards building lighter models; however, they still depend on floating-point representations, with the associated higher computational cost and memory footprint. In this paper, we propose to adapt model quantization for periocular recognition. Within the proposed scheme, this reduces the memory footprint of the periocular recognition network by up to fivefold while maintaining high recognition performance. We present a comprehensive analysis over three backbones and diverse experimental protocols to stress the consistency of our conclusions, along with a comparison with a wide set of baselines that proves the optimal trade-off between performance and model size achieved by our proposed solution. The code and pre-trained models have been made available at https://github.com/jankolf/ijcb-periocular-quantization.
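
    A minimal sketch of uniform, symmetric, per-tensor low-bit weight quantization (the exact scheme, bit width, and packing used in the paper may differ): map float weights to b-bit integers with a single scale factor.

        import numpy as np

        def quantize_uniform(weights, bits=5):
            """Symmetric per-tensor uniform quantization to `bits` bits (sketch).
            Values are stored in int8 here; real deployments would pack them densely."""
            qmax = 2 ** (bits - 1) - 1
            scale = np.max(np.abs(weights)) / qmax
            q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
            return q, scale                                # dequantize with q * scale

        w = np.random.randn(512, 512).astype(np.float32)
        q, scale = quantize_uniform(w, bits=5)
        print("theoretical size reduction vs. float32: %.1fx" % (32 / 5))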

  • Stating Comparison Score Uncertainty and Verification Decision Confidence Towards Transparent Face Recognition


  • On the (Limited) Generalization of MasterFace Attacks and Its Relation to the Capacity of Face Representations
    Philipp Terhörst, Florian Bierbaum, Marco Huber, Naser Damer, Florian Kirchbuchner, Kiran Raja, and Arjan Kuijper

    IEEE
    A MasterFace is a face image that can successfully match against a large portion of the population. Since their generation does not require access to the information of the enrolled subjects, MasterFace attacks represent a potential security risk for widely-used face recognition systems. Previous works proposed methods for generating such images and demonstrated that these attacks can strongly compromise face recognition. However, previous works followed evaluation settings consisting of older recognition models, limited cross-dataset and cross-model evaluations, and the use of small-scale testing data. This makes it hard to state the generalizability of these attacks. In this work, we comprehensively analyse the generalizability of MasterFace attacks in empirical and theoretical investigations. The empirical investigations include the use of six state-of-the-art face recognition models, cross-dataset and cross-model evaluation protocols, and testing datasets of significantly higher size and variance. The results indicate a low generalizability when MasterFaces are trained on a face recognition model different from the one used for testing. In these cases, the attack performance is similar to zero-effort impostor attacks. In the theoretical investigations, we define and estimate the face capacity and the maximum MasterFace coverage under the assumption that identities in the face space are well separated. The current trend of increasing fairness and generalizability in face recognition indicates that the vulnerability of future systems might further decrease. Future works might analyse the utility of MasterFaces for understanding and enhancing the robustness of face recognition models.

  • Verification of Sitter Identity Across Historical Portrait Paintings by Confidence-aware Face Recognition
    Marco Huber, Philipp Terhörst, Anh Thi Luu, Florian Kirchbuchner, and Naser Damer

    IEEE
    Verifying the identity of a person (sitter) portrayed in a historical painting is often a challenging but critical task in art-historical research. In many cases, this information has been lost due to time or other circumstances, and today there are only speculations of art historians about which person it could be. Art historians often rely on subjective factors for this purpose and then infer information about the depicted person's life, status, and era from the identity. Automated face recognition, on the other hand, has achieved a high level of accuracy, especially on photographs, and considers objective factors to determine the identity or verify a suspected identity. The limited amount of data, as well as the domain-specific challenges, make the use of automated face recognition methods in the domain of historical paintings difficult. We propose a specialized, likelihood-based fusion method to enable deep learning-based face recognition on historical portrait paintings. We additionally propose a method to accurately determine the confidence of the made decision to assist art historians in their research. For this purpose, we used a model trained on common photographs and adapted it to the domain of historical paintings through transfer learning. Using an underlying challenge dataset, we compute the likelihood for the assumed identity against reference images of that identity and fuse them to utilize as much information as possible. From the results of the likelihood fusion, we then derive a decision confidence that states the certainty of the model's decision. The experiments were carried out in a leave-one-out evaluation scenario on our created database, the largest authentic database of historical portrait paintings to date, consisting of over 760 portrait paintings of 210 different sitters by over 250 different artists. The experiments demonstrated that (a) the proposed approach outperforms pure face recognition solutions, (b) the fusion approach effectively combines the sitter information towards a higher verification accuracy, and (c) the proposed confidence estimation approach is highly successful in capturing the estimated accuracy of the decision. The meta-information of the used historical face images can be found at https://github.com/marcohuber/HistoricalFaces.

  • Low-resolution Iris Recognition via Knowledge Transfer
    Fadi Boutros, Olga Kaehm, Meiling Fang, Florian Kirchbuchner, Naser Damer, and Arjan Kuijper

    IEEE
    This work introduces a novel approach for extremely low-resolution iris recognition based on deep knowledge transfer. The work starts by adapting the penalty margin loss to the iris recognition problem, including novel analyses of the appropriate penalty margin for iris recognition. Additionally, it presents analyses toward finding the optimal deeply learned representation dimension for the identity information embedded in the iris capture. Most importantly, this work proposes a training framework that aims at producing iris deep representations from extremely low-resolution captures that are similar to those of high-resolution ones. This is realized by the controllable knowledge transfer of an iris recognition model trained on high-resolution images into a model that is specifically trained for extremely low-resolution irises. The presented approach reduces the verification errors by more than threefold in comparison to a traditionally trained model for low-resolution iris recognition.

  • On Evaluating Pixel-Level Face Image Quality Assessment


  • Acquisition of EFS and Capacitive Measurement Data on Low-Power and Connected IoT Devices
    Julian von Wilmsdorff, Malte Lenhart, Florian Kirchbuchner, and Arjan Kuijper

    Springer International Publishing

  • ElasticFace: Elastic Margin Loss for Deep Face Recognition
    Fadi Boutros, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper


    Learning discriminative face features plays a major role in building high-performing face recognition models. Recent state-of-the-art face recognition solutions incorporate a fixed penalty margin into the commonly used classification loss function, softmax loss, in the normalized hypersphere to increase the discriminative power of face recognition models by minimizing the intra-class variation and maximizing the inter-class variation. Marginal penalty softmax losses, such as ArcFace and CosFace, assume that the geodesic distance between and within the different identities can be equally learned using a fixed penalty margin. However, such a learning objective is not realistic for real data with inconsistent inter- and intra-class variation, which might limit the discriminative power and generalizability of the face recognition model. In this paper, we relax the fixed penalty margin constraint by proposing the elastic penalty margin loss (ElasticFace), which allows flexibility in the push for class separability. The main idea is to utilize random margin values drawn from a normal distribution in each training iteration. This aims at giving the decision boundary chances to extract and retract, allowing space for flexible class separability learning. We demonstrate the superiority of our ElasticFace loss over ArcFace and CosFace losses, using the same geometric transformation, on a large set of mainstream benchmarks. From a wider perspective, ElasticFace has advanced the state-of-the-art face recognition performance on seven out of nine mainstream benchmarks. All training codes, pre-trained models, and training logs will be publicly released.
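
    A minimal sketch of the elastic margin idea in an ArcFace-style formulation (the scale s, margin mean m, and sigma are arbitrary here; this is not the released ElasticFace code): draw a per-sample margin from a normal distribution in every iteration and add it to the target-class angle.

        import torch
        import torch.nn.functional as F

        def elastic_arc_logits(embeddings, class_centres, labels, s=64.0, m=0.5, sigma=0.05):
            """ArcFace-style logits with a per-sample margin drawn from N(m, sigma)."""
            cos = F.normalize(embeddings) @ F.normalize(class_centres).t()
            theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
            margin = m + sigma * torch.randn(labels.size(0), device=embeddings.device)
            one_hot = F.one_hot(labels, num_classes=class_centres.size(0)).float()
            theta = theta + one_hot * margin.unsqueeze(1)   # margin only on the target class
            return s * torch.cos(theta)                     # feed into cross-entropy with `labels`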

  • Double Deep Q-Learning With Prioritized Experience Replay for Anomaly Detection in Smart Environments
    Daniel Fährmann, Nils Jorek, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper

    Institute of Electrical and Electronics Engineers (IEEE)
    Anomaly detection in smart environments is important when dealing with rare events, which can be safety-critical to individuals or infrastructure. Safety-critical means, in this case, that these events can threaten the safety of individuals (e.g. a person falling to the ground) or the security of infrastructure (e.g. unauthorized access to protected facilities). However, recognizing abnormal events in smart environments is challenging because of the complex and volatile nature of the data recorded by monitoring sensors. Methodologies proposed in the literature are frequently domain-specific and subject to biased assumptions about the underlying data. In this work, we propose the adaptation of a deep reinforcement learning algorithm, namely double deep Q-learning (DDQN), for anomaly detection in smart environments. Our proposed anomaly detector directly learns a decision-making function that can classify rare events based on multivariate sequential time-series data. With an emphasis on improving the performance in rare-event classification tasks, we extended the algorithm with a prioritized experience replay (PER) strategy and showed that the PER extension yields an increase in detection performance. The adaptation of this improved version of the DDQN reinforcement learning algorithm for anomaly detection in smart environments is the major contribution of this work. Empirical studies on publicly available real-world datasets demonstrate the effectiveness of our proposed solution; specifically, we use one dataset for fall detection and one for occupancy detection to evaluate the solution proposed in this work. Our solution yields detection performance comparable to previous work and has the additional advantages of being adaptable to different environments and capable of online learning.
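
    A minimal sketch of the double-Q target and the PER priority update (network interfaces and hyperparameters are placeholders): the online network selects the next action, the target network evaluates it, and the absolute TD error becomes the sample's replay priority.

        import torch

        def ddqn_targets_and_priorities(online_q, target_q, batch,
                                        gamma=0.99, alpha=0.6, eps=1e-6):
            """batch: (states, actions, rewards, next_states, dones) tensors."""
            states, actions, rewards, next_states, dones = batch
            with torch.no_grad():
                # Action selection with the online network ...
                next_actions = online_q(next_states).argmax(dim=1, keepdim=True)
                # ... evaluated by the target network (the "double" part of DDQN).
                next_values = target_q(next_states).gather(1, next_actions).squeeze(1)
                targets = rewards + gamma * (1.0 - dones) * next_values
            current = online_q(states).gather(1, actions.unsqueeze(1)).squeeze(1)
            td_error = targets - current
            priorities = (td_error.abs() + eps) ** alpha    # PER replay priorities
            return targets, td_error, priorities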

  • PocketNet: Extreme Lightweight Face Recognition Network using Neural Architecture Search and Multi-Step Knowledge Distillation
    Fadi Boutros, Patrick Siebke, Marcel Klemt, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper

    Institute of Electrical and Electronics Engineers (IEEE)
    Deep neural networks have rapidly become the mainstream method for face recognition (FR). However, the extremely large number of parameters in such models limits their deployment on embedded and low-end devices. In this work, we present an extremely lightweight and accurate FR solution, namely PocketNet. We utilize neural architecture search (NAS) to develop a new family of lightweight face-specific architectures. We additionally propose a novel training paradigm based on knowledge distillation (KD), the multi-step KD, where the knowledge is distilled from the teacher model to the student model at different stages of training maturity. We conduct a detailed ablation study proving both the sanity of using NAS for the specific task of FR rather than general object classification, and the benefits of our proposed multi-step KD. We present an extensive experimental evaluation and comparisons with the state-of-the-art (SOTA) compact FR models on nine different benchmarks, including large-scale evaluation benchmarks such as IJB-C and MegaFace. PocketNets have consistently advanced the SOTA FR performance on nine mainstream benchmarks when considering the same level of model compactness. With 0.92M parameters, our smallest network, PocketNetS-128, achieved very competitive results compared to recent SOTA compact models that contain up to 4M parameters. Training codes and pre-trained models are public.

  • Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection
    Meiling Fang, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper

    IEEE
    With the increased deployment of face recognition systems in our daily lives, face presentation attack detection (PAD) is attracting much attention and playing a key role in securing face recognition systems. Despite the great performance achieved by hand-crafted and deep-learning-based methods in intra-dataset evaluations, the performance drops when dealing with unseen scenarios. In this work, we propose a dual-stream convolutional neural network (CNN) framework. One stream adapts four learnable frequency filters to learn features in the frequency domain, which are less influenced by variations in sensors and illumination. The other stream leverages the RGB images to complement the features of the frequency domain. Moreover, we propose a hierarchical attention module integration to join the information from the two streams at different stages, considering the nature of deep features in different layers of the CNN. The proposed method is evaluated in intra-dataset and cross-dataset setups, and the results demonstrate that our proposed approach enhances the generalizability in most experimental setups in comparison to the state-of-the-art, including methods designed explicitly for domain adaptation/shift problems. We prove the design of our proposed PAD solution in a stepwise ablation study that involves our proposed learnable frequency decomposition, our hierarchical attention module design, and the used loss function. Training codes and pre-trained models are publicly released.
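
    A minimal sketch of a frequency-domain stream (the number of bands, the radial masks, and the per-band learnable scaling are assumptions, not the paper's filter design): split the 2D spectrum into radial bands, rescale each band with a learnable weight, and return to the spatial domain before the CNN.

        import torch
        import torch.nn as nn

        class LearnableFrequencyBands(nn.Module):
            """Hypothetical module: n_bands radial frequency bands, each scaled by a
            learnable weight, applied via FFT and inverse FFT."""
            def __init__(self, n_bands=4):
                super().__init__()
                self.n_bands = n_bands
                self.weights = nn.Parameter(torch.ones(n_bands))

            def forward(self, x):                                   # x: (B, C, H, W)
                _, _, H, W = x.shape
                spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
                yy, xx = torch.meshgrid(torch.arange(H, device=x.device, dtype=torch.float32),
                                        torch.arange(W, device=x.device, dtype=torch.float32),
                                        indexing="ij")
                radius = torch.sqrt((yy - H // 2) ** 2 + (xx - W // 2) ** 2)
                radius = radius / radius.max()                      # normalised radial frequency
                out = torch.zeros_like(spec)
                for i in range(self.n_bands):
                    lo, hi = i / self.n_bands, (i + 1) / self.n_bands
                    upper = radius <= hi if i == self.n_bands - 1 else radius < hi
                    mask = ((radius >= lo) & upper).float()
                    out = out + self.weights[i] * spec * mask       # learnable per-band scaling
                return torch.fft.ifft2(torch.fft.ifftshift(out, dim=(-2, -1))).real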

  • The overlapping effect and fusion protocols of data augmentation techniques in iris PAD
    Meiling Fang, Naser Damer, Fadi Boutros, Florian Kirchbuchner, and Arjan Kuijper

    Springer Science and Business Media LLC
    Iris Presentation Attack Detection (PAD) algorithms address the vulnerability of iris recognition systems to presentation attacks. With the great success of deep learning methods in various computer vision fields, neural network-based iris PAD algorithms have emerged. However, most PAD networks suffer from overfitting due to insufficient iris data variability. Therefore, we explore the impact of various data augmentation techniques on the performance and generalizability of iris PAD. We apply several data augmentation methods to generate variability, such as shift, rotation, and brightness changes, and provide in-depth analyses of the overlapping effect of these methods on performance. In addition to these widely used augmentation techniques, we also propose an augmentation selection protocol based on the assumption that various augmentation techniques contribute differently to PAD performance. Moreover, two fusion methods are performed for further comparison: the strategy-level and the score-level combination. We conduct experiments on two fine-tuned models and one network trained from scratch, evaluating on the datasets of the Iris-LivDet-2017 competition designed for generalizability evaluation. Our experimental results show that augmentation methods improve iris PAD performance in many cases. Our least-overlap-based augmentation selection protocol achieves lower error rates for two networks. Besides, the shift augmentation strategy also exceeds state-of-the-art (SoTA) algorithms on the Clarkson and IIITD-WVU datasets.
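
    A minimal sketch of a score-level combination (the mean rule is an assumption; the paper compares strategy-level and score-level fusion): average the PAD scores produced by models trained with different augmentation strategies.

        import numpy as np

        def score_level_fusion(scores_per_strategy):
            """scores_per_strategy: (n_strategies, n_samples) PAD scores in [0, 1].
            Returns one fused attack score per sample using the mean rule."""
            return np.mean(np.asarray(scores_per_strategy), axis=0)

        fused = score_level_fusion([[0.9, 0.2, 0.6],   # e.g. model trained with shift augmentation
                                    [0.8, 0.1, 0.7]])  # e.g. model trained with rotation augmentation
        print(fused)                                   # [0.85 0.15 0.65]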

  • On Soft-Biometric Information Stored in Biometric Face Embeddings
    Philipp Terhörst, Daniel Fährmann, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper

    Institute of Electrical and Electronics Engineers (IEEE)
    The success of modern face recognition systems is based on the advances of deeply-learned features. These embeddings aim to encode the identity of an individual such that they can be used for recognition. However, recent works have shown that more information beyond the user's identity is stored in these embeddings, such as demographics, image characteristics, and social traits. This raises privacy and bias concerns in face recognition. We investigate the predictability of 73 different soft-biometric attributes on three popular face embeddings with different learning principles. The experiments were conducted on two publicly available databases. For the evaluation, we trained a massive attribute classifier such that it can accurately state the confidence of its predictions. This enables us to derive more sophisticated statements about attribute predictability. The results demonstrate that the majority of the investigated attributes are encoded in face embeddings. For instance, a strong encoding was found for demographics, hair colors, hairstyles, beards, and accessories. Although face recognition embeddings are trained to be robust against non-permanent factors, we found that specifically these attributes are easily predictable from face embeddings. We hope our findings will guide future works to develop more privacy-preserving and bias-mitigating face recognition technologies.

  • Extended evaluation of the effect of real and simulated masks on face recognition performance
    Naser Damer, Fadi Boutros, Marius Süßmilch, Florian Kirchbuchner, and Arjan Kuijper

    Institution of Engineering and Technology (IET)
    Face recognition is an essential technology in our daily lives as a contactless and convenient method of accurate identity verification. Processes such as secure login to electronic devices or identity verification at automatic border control gates are increasingly dependent on such technologies. The recent COVID-19 pandemic has increased the focus on hygienic and contactless identity verification methods. The pandemic has led to the wide use of face masks, essential to keep the pandemic under control. The effect of mask-wearing on face recognition in a collaborative environment is currently a sensitive yet understudied issue. Recent reports have tackled this by using face images with synthetic mask-like face occlusions without exclusively assessing how representative they are of real face masks. These issues are addressed by presenting a specifically collected database containing three sessions, each with three different capture instructions, to simulate real use cases. The data are augmented to include previously used synthetic mask occlusions. Further studied is the effect of masked face probes on the behaviour of four face recognition systems (three academic and one commercial). This study evaluates both masked-to-non-masked and masked-to-masked face comparisons. In addition, real masks in the database are compared with simulated masks to determine their comparative effects on face recognition performance.

  • Midecon: Unsupervised and accurate fingerprint and minutia quality assessment based on minutia detection confidence
    Philipp Terhörst, Andre Boller, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper

    IEEE
    An essential factor in achieving high accuracies in fingerprint recognition systems is the quality of their samples. Previous works mainly proposed supervised solutions based on image properties that neglect the minutiae extraction process, even though most fingerprint recognition techniques are based on detected minutiae. Consequently, a fingerprint image might be assigned a high quality even if the utilized minutia extractor produces unreliable information. In this work, we propose a novel concept of assessing minutia and fingerprint quality based on minutia detection confidence (MiDeCon). MiDeCon can be applied to an arbitrary deep learning-based minutia extractor and does not require quality labels for learning. We propose using the detection reliability of the extracted minutia as its quality indicator. By combining the highest minutia qualities, MiDeCon also accurately determines the quality of a full fingerprint. Experiments are conducted on the publicly available databases of the FVC 2006 and compared against several baselines, such as NIST's widely-used fingerprint image quality software NFIQ1 and NFIQ2. The results demonstrate a significantly stronger quality assessment performance of the proposed MiDeCon qualities than related works on both the minutia and the fingerprint level. The implementation is publicly available.
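
    A minimal sketch of the fingerprint-level aggregation (the top-k mean is an assumption about how the highest minutia qualities are combined): take the detection confidences of the extracted minutiae as their qualities and average the k highest.

        import numpy as np

        def fingerprint_quality(minutia_confidences, k=25):
            """Aggregate per-minutia detection confidences into a single fingerprint
            quality by averaging the k most confident minutiae (sketch)."""
            conf = np.sort(np.asarray(minutia_confidences, dtype=float))[::-1]
            return float(np.mean(conf[:k])) if conf.size else 0.0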

RECENT SCHOLAR PUBLICATIONS

  • Ubiquitous multi-occupant detection in smart environments
    D Fährmann, F Boutros, P Kubon, F Kirchbuchner, A Kuijper, N Damer
    Neural Computing and Applications 36 (6), 2941-2960 2024

  • VeinXam: A Low-Cost Deep Veins Function Assessment
    V Abt, J von Wilmsdorff, S Faquiri, F Kirchbuchner
    Adjunct Proceedings of the 2023 ACM International Joint Conference on 2023

  • Uncertainty-aware Comparison Scores for Face Recognition
    M Huber, P Terhörst, F Kirchbuchner, A Kuijper, N Damer
    2023 11th International Workshop on Biometrics and Forensics (IWBF), 1-6 2023

  • Pixel-level face image quality assessment for explainable face recognition
    P Terhörst, M Huber, N Damer, F Kirchbuchner, K Raja, A Kuijper
    IEEE Transactions on Biometrics, Behavior, and Identity Science 2023

  • Qmagface: Simple and accurate quality-aware face recognition
    P Terhörst, M Ihlefeld, M Huber, N Damer, F Kirchbuchner, K Raja, ...
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer 2023

  • Stating comparison score uncertainty and verification decision confidence towards transparent face recognition
    M Huber, P Terhörst, F Kirchbuchner, N Damer, A Kuijper
    arXiv preprint arXiv:2210.10354 2022

  • Lightweight periocular recognition through low-bit quantization
    JN Kolf, F Boutros, F Kirchbuchner, N Damer
    2022 IEEE International Joint Conference on Biometrics (IJCB), 1-12 2022

  • On the (limited) generalization of masterface attacks and its relation to the capacity of face representations
    P Terhörst, F Bierbaum, M Huber, N Damer, F Kirchbuchner, K Raja, ...
    2022 IEEE International Joint Conference on Biometrics (IJCB), 1-9 2022

  • Low-resolution iris recognition via knowledge transfer
    F Boutros, O Kaehm, M Fang, F Kirchbuchner, N Damer, A Kuijper
    2022 International Conference of the Biometrics Special Interest Group 2022

  • Masked face recognition: Human versus machine
    N Damer, F Boutros, M Süßmilch, M Fang, F Kirchbuchner, A Kuijper
    IET Biometrics 11 (5), 512-528 2022

  • On evaluating pixel-level face image quality assessment
    M Huber, P Terhörst, F Kirchbuchner, N Damer, A Kuijper
    2022 30th European Signal Processing Conference (EUSIPCO), 1052-1056 2022

  • Verification of sitter identity across historical portrait paintings by confidence-aware face recognition
    M Huber, P Terhörst, AT Luu, F Kirchbuchner, N Damer
    2022 26th International Conference on Pattern Recognition (ICPR), 938-944 2022

  • Double deep q-learning with prioritized experience replay for anomaly detection in smart environments
    D Fährmann, N Jorek, N Damer, F Kirchbuchner, A Kuijper
    IEEE Access 10, 60836-60848 2022

  • Pocketnet: Extreme lightweight face recognition network using neural architecture search and multistep knowledge distillation
    F Boutros, P Siebke, M Klemt, N Damer, F Kirchbuchner, A Kuijper
    IEEE Access 10, 46823-46833 2022

  • Lightweight long short-term memory variational auto-encoder for multivariate time series anomaly detection in industrial control systems
    D Fährmann, N Damer, F Kirchbuchner, A Kuijper
    Sensors 22 (8), 2886 2022

  • Self-restrained triplet loss for accurate masked face recognition
    F Boutros, N Damer, F Kirchbuchner, A Kuijper
    Pattern Recognition 124, 108473 2022

  • Template-driven knowledge distillation for compact and accurate periocular biometrics deep-learning models
    F Boutros, N Damer, K Raja, F Kirchbuchner, A Kuijper
    Sensors 22 (5), 1921 2022

  • Real masks and spoof faces: On the masked face presentation attack detection
    M Fang, N Damer, F Kirchbuchner, A Kuijper
    Pattern recognition 123, 108398 2022

  • Neural Networks for Indoor Localization based on Electric Field Sensing.
    F Kirchbuchner, M Andres, J von Wilmsdorff, A Kuijper
    DeLTA, 25-33 2022

  • The overlapping effect and fusion protocols of data augmentation techniques in iris PAD
    M Fang, N Damer, F Boutros, F Kirchbuchner, A Kuijper
    Machine Vision and Applications 33, 1-21 2022

MOST CITED SCHOLAR PUBLICATIONS

  • Elasticface: Elastic margin loss for deep face recognition
    F Boutros, N Damer, F Kirchbuchner, A Kuijper
    Proceedings of the IEEE/CVF conference on computer vision and pattern 2022
    Citations: 175

  • SER-FIQ: Unsupervised estimation of face image quality based on stochastic embedding robustness
    P Terhörst, JN Kolf, N Damer, F Kirchbuchner, A Kuijper
    Proceedings of the IEEE/CVF conference on computer vision and pattern 2020
    Citations: 161

  • The effect of wearing a mask on face recognition performance: an exploratory study
    N Damer, JH Grebe, C Chen, F Boutros, F Kirchbuchner, A Kuijper
    2020 International Conference of the Biometrics Special Interest Group 2020
    Citations: 119

  • Sensing technology for human activity recognition: A comprehensive survey
    B Fu, N Damer, F Kirchbuchner, A Kuijper
    IEEE Access 8, 83791-83820 2020
    Citations: 117

  • Self-restrained triplet loss for accurate masked face recognition
    F Boutros, N Damer, F Kirchbuchner, A Kuijper
    Pattern Recognition 124, 108473 2022
    Citations: 104

  • A comprehensive study on face recognition biases beyond demographics
    P Terhörst, JN Kolf, M Huber, F Kirchbuchner, N Damer, AM Moreno, ...
    IEEE Transactions on Technology and Society 3 (1), 16-30 2021
    Citations: 102

  • Ambient intelligence from senior citizens’ perspectives: Understanding privacy concerns, technology acceptance, and expectations
    F Kirchbuchner, T Grosse-Puppendahl, MR Hastall, M Distler, A Kuijper
    Ambient Intelligence: 12th European Conference, AmI 2015, Athens, Greece 2015
    Citations: 61

  • MFR 2021: Masked face recognition competition
    F Boutros, N Damer, JN Kolf, K Raja, F Kirchbuchner, R Ramachandra, ...
    2021 IEEE International joint conference on biometrics (IJCB), 1-10 2021
    Citations: 60

  • Real masks and spoof faces: On the masked face presentation attack detection
    M Fang, N Damer, F Kirchbuchner, A Kuijper
    Pattern recognition 123, 108398 2022
    Citations: 51

  • Mixfacenets: Extremely efficient face recognition networks
    F Boutros, N Damer, M Fang, F Kirchbuchner, A Kuijper
    2021 IEEE International Joint Conference on Biometrics (IJCB), 1-8 2021
    Citations: 51

  • Pocketnet: Extreme lightweight face recognition network using neural architecture search and multistep knowledge distillation
    F Boutros, P Siebke, M Klemt, N Damer, F Kirchbuchner, A Kuijper
    IEEE Access 10, 46823-46833 2022
    Citations: 48

  • Iris and periocular biometrics for head mounted displays: Segmentation, recognition, and synthetic data generation
    F Boutros, N Damer, K Raja, R Ramachandra, F Kirchbuchner, A Kuijper
    Image and Vision Computing 104, 104007 2020
    Citations: 46

  • Post-comparison mitigation of demographic bias in face recognition using fair score normalization
    P Terhörst, JN Kolf, N Damer, F Kirchbuchner, A Kuijper
    Pattern Recognition Letters 140, 332-338 2020
    Citations: 44

  • Face quality estimation and its correlation to demographic and non-demographic bias in face recognition
    P Terhörst, JN Kolf, N Damer, F Kirchbuchner, A Kuijper
    2020 IEEE International Joint Conference on Biometrics (IJCB), 1-11 2020
    Citations: 43

  • A multi-detector solution towards an accurate and generalized detection of face morphing attacks
    N Damer, S Zienert, Y Wainakh, AM Saladie, F Kirchbuchner, A Kuijper
    2019 22nd International Conference on Information Fusion (FUSION), 1-8 2019
    Citations: 38

  • Extended evaluation of the effect of real and simulated masks on face recognition performance
    N Damer, F Boutros, M Süßmilch, F Kirchbuchner, A Kuijper
    IET Biometrics 10 (5), 548-561 2021
    Citations: 37

  • Learnable multi-level frequency decomposition and hierarchical attention mechanism for generalized face presentation attack detection
    M Fang, N Damer, F Kirchbuchner, A Kuijper
    Proceedings of the IEEE/CVF winter conference on applications of computer 2022
    Citations: 36

  • Pw-mad: Pixel-wise supervision for generalized face morphing attack detection
    N Damer, N Spiller, M Fang, F Boutros, F Kirchbuchner, A Kuijper
    Advances in Visual Computing: 16th International Symposium, ISVC 2021 2021
    Citations: 36

  • Performing indoor localization with electric potential sensing
    B Fu, F Kirchbuchner, J von Wilmsdorff, T Grosse-Puppendahl, A Braun, ...
    Journal of Ambient Intelligence and Humanized Computing 10, 731-746 2019
    Citations: 36

  • Beyond identity: What information is stored in biometric face templates?
    P Terhörst, D Fährmann, N Damer, F Kirchbuchner, A Kuijper
    2020 IEEE international joint conference on biometrics (IJCB), 1-10 2020
    Citations: 35