Projects in NGBS

Biometric sample quality assessment

Not all biometric samples are equally well suited for the automated recognition of individuals. The usefulness of a biometric sample for telling mated and non-mated samples apart (its “utility”) can be expressed by a quality score. The quality score can be used, e.g., for deciding whether the re-acquisition of a sample is necessary, for weighting partial results in multi-biometric systems, or for selecting the best sample from a series of captured biometric samples. If a biometric sample is of low quality, the factors that led to its degradation should be returned in a quality vector. Based on this quality vector, corrective actions can be taken so that the biometric sample meets the requirements.
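The three uses of a quality score mentioned above can be sketched in a few lines. This is only an illustrative sketch, not the project's actual algorithms; the quality scale, threshold, and function names are hypothetical.

```python
RECAPTURE_THRESHOLD = 40  # hypothetical minimum acceptable quality (scale 0-100)

def needs_reacquisition(quality: float) -> bool:
    """Decide whether a sample should be re-captured."""
    return quality < RECAPTURE_THRESHOLD

def fuse_scores(scores, qualities):
    """Quality-weighted fusion of partial comparison scores
    in a multi-biometric system."""
    total_q = sum(qualities)
    if total_q == 0:
        return 0.0
    return sum(s * q for s, q in zip(scores, qualities)) / total_q

def select_best(samples, qualities):
    """Pick the highest-quality sample from a capture series."""
    return max(zip(samples, qualities), key=lambda pair: pair[1])[0]
```

In practice the quality score would come from a dedicated assessment algorithm; here it is simply assumed to be given.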

This project develops quality assessment algorithms for face images and other biometric modalities. Face images are stored in electronic passports and ID documents and are the most important means for binding an ePassport to its holder. Many security applications, e.g. automated border control systems and the European Entry/Exit System, can benefit from methods for biometric sample quality assessment.


Biometric Tampering Detection

In the digital domain, any kind of image manipulation, often referred to as "photoshopping", can be performed. This also applies to biometric image data, in particular facial images. Manipulation techniques such as retouching or morphing can seriously impact the performance and security of biometric recognition systems.

The project is divided into two parts with complementary focus. The first part explores approaches that make facial recognition systems more robust against the aforementioned image alterations. The second part designs detection mechanisms to reliably detect face manipulations. In contrast to classical image forensics, it must be assumed here that images were heavily post-processed, in particular by printing and scanning facial images in the course of issuing an identity document. The aim is to determine whether a facial image has been retouched, both by automated single-image analysis and by differential analysis of a potentially manipulated facial image against a trustworthy unaltered facial image, e.g. a live capture. Finally, different techniques for integrating a prototypical detection module into a facial recognition system, and their effect on the recognition performance of the overall system, will be investigated.
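The differential analysis described above can be sketched as a distance check in a feature (embedding) space: a potentially manipulated reference image that lies unusually far from a trusted live capture of the same person is flagged as suspicious. This is a simplified conceptual sketch, not the project's detection method; the embedding extractor is assumed to exist elsewhere, and the threshold is a placeholder.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def differential_check(ref_embedding, live_embedding, threshold=1.0):
    """Flag the reference image as potentially manipulated if it is
    unusually far from the trusted live image in embedding space."""
    return euclidean(ref_embedding, live_embedding) > threshold
```

In a real system, the threshold would be calibrated on mated image pairs so that unaltered references of the same person rarely exceed it.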


Embedded Biometrics

The "Embedded Biometrics" project aims at enabling a wider integration of biometric technologies in embedded application scenarios and thus enhancing the quality of service provided to end-users. This is addressed by targeting two main research challenges. The first is designing accurate and small biometric artificial intelligence models that reduce hardware cost and energy consumption. The second is building adaptive, generalizable, and application-aware recognition that allows robust operational capabilities. This should make biometric technology applicable in many mobile and embedded solutions where the quality of service is essential, such as the automotive domain, smart living environments, and smart wearables. The project goals focus on enhancing efficiency while maintaining accuracy, as well as building application-aware, robust recognition solutions. The project is also concerned with sub-processes of biometric decision making, including those for preventing attacks such as presentation attacks.
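One common technique for shrinking a model's footprint on embedded hardware is post-training quantization, e.g. storing float32 weights as int8 (a 4x memory reduction per weight). The following is only a conceptual sketch of symmetric int8 quantization, not the project's approach; real deployments would use a framework's quantization tooling.

```python
def quantize_int8(weights):
    """Map float weights to int8 values with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]
```

The trade-off the project targets is exactly the one visible here: the quantized model is smaller and cheaper to run, at the price of a bounded approximation error per weight.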


Fair Biometric Systems

Biometric technologies have become a crucial component of various applications operated by governmental and commercial organisations. Example use cases include access control, border control, law enforcement and forensic investigations, voter registration for elections, as well as national identity management systems.

In recent years, reports of demographically biased (i.e. unfair towards certain demographic groups) biometric systems have emerged, fueling a debate about ethics and limitations of such technologies. Demographic bias in automated decision-making can be a serious issue in real applications, potentially causing issues ranging from simple inconveniences, through disadvantages, to lasting serious harm. In a broader context, algorithmic fairness is a pertinent topic in an emerging discourse on ethical design of artificial intelligence (AI) systems.

The goal of this project is to suggest and evaluate strategies that facilitate demographically fair biometric systems.


Secure Identity Management

Biometric solutions offer a wide range of convenience and security advantages in different applications. However, various weaknesses of biometric systems have been identified in the past. The project "Secure Identity Management" focuses on the most relevant vulnerabilities and the corresponding attacks on biometric systems in order to improve their security. A systematic approach is used to identify and define these vulnerabilities, as well as to create countermeasures. The main focus of this project is to find application-oriented solutions. The investigated vulnerabilities fall into two main categories. The first is the vulnerability of biometric systems to deployment conditions, where the investigated solutions aim at making biometric technologies more robust in real deployment environments. The second is the vulnerability of biometric systems to different types of attacks, which target the security, privacy, convenience, and functionality of biometric systems.


Trustworthy Biometrics

In order to use biometric recognition for identity management, high biometric performance and robustness must be achieved. In other words, biometric recognition systems must prove to be trustworthy. Although the accuracy and convenience of biometric recognition systems have driven the replacement of traditional password- or token-based authentication methods, it is important to shift attention away from a purely recognition-accuracy and convenience mindset. In particular, the concerns of policy makers and the public regarding the reliability of biometric recognition systems, especially enrolment devices, need to be addressed. To achieve trustworthy biometrics, the TrustBio project addresses aspects of privacy, integrity and explainability of biometric systems.


Past Research Projects

Bias and Fairness in Biometric Systems

01.01.2020 - 31.12.2021

Biometric technologies have become a crucial component of various applications operated by governmental and commercial organizations around the world. Example use cases include access control, border control, law enforcement and forensic investigations, voter registration for elections, as well as national identity management systems.

In recent years, reports of demographically biased (i.e. unfair towards certain demographic groups) biometric systems have emerged, fueling a debate between various stakeholders on the use, ethics, and limitations of such technologies. Demographic bias in automated decision-making can be a serious issue in real applications, potentially causing issues ranging from simple inconveniences, through disadvantages, to lasting serious harm. In a broader context, algorithmic fairness is a pertinent topic in an emerging discourse on ethical design of artificial intelligence (AI) systems.

The goals of this project are twofold: first, to empirically identify and quantify existing and potential demographic biases in biometric systems; and second, to suggest and evaluate strategies that facilitate demographically fair biometric systems.
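A minimal way to quantify demographic bias, in the spirit of the first goal above, is to compute an error rate per demographic group and compare the worst- and best-performing groups. This is an illustrative sketch with hypothetical names and toy data, not the project's evaluation methodology.

```python
def false_non_match_rate(decisions):
    """Fraction of mated comparisons wrongly rejected.
    `decisions` is a list of booleans: True = accepted."""
    return decisions.count(False) / len(decisions)

def max_min_ratio(per_group_rates):
    """A ratio of 1.0 indicates equal error rates across groups;
    larger values indicate stronger demographic differentials."""
    worst, best = max(per_group_rates.values()), min(per_group_rates.values())
    return worst / best if best > 0 else float("inf")

# Toy mated-comparison outcomes per (hypothetical) demographic group:
groups = {
    "group_a": [True, True, True, False],    # FNMR 0.25
    "group_b": [True, False, False, False],  # FNMR 0.75
}
rates = {g: false_non_match_rate(d) for g, d in groups.items()}
```

A ratio well above 1.0 would indicate that the system performs markedly worse for one group, which is precisely the kind of differential the project aims to identify and mitigate.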


Operational Challenges in Face Recognition

01.01.2019 - 31.12.2023

Face recognition is a long-standing field of research, and a variety of methods have been proposed over the last three decades. In the recent past, the biometric performance of facial recognition systems has improved significantly. This is largely due to general technological developments, in particular the popularization and further development of artificial neural networks in machine learning. However, the use of face recognition also raises various operational challenges.

This project comprises three objectives that are essential for the operation of future facial recognition systems. The first is the development of robust quality metrics for facial images. The second is the development of an efficient identification system that significantly reduces response times in a 1:N comparison (for very large N); in contrast to existing concepts, the aim is to ensure that recognition performance is not reduced. The third is the design of concepts that enable privacy-friendly storage of biometric references. It is essential that the security of these methods can be proven. To achieve this, mainly established cryptographic methods are considered, which allow a comparison of biometric templates in the encrypted domain. This is of particular importance to ensure permanent protection of biometric reference data.
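The idea behind reducing 1:N response times can be sketched with a simple two-stage scheme: a cheap, coarse binary code prunes the gallery before the full template comparison is run on the surviving candidates. This is only a conceptual sketch of one common indexing strategy, not the project's actual method; template layout and the pre-filter radius are assumptions.

```python
import math

def binary_code(template):
    """Coarse pre-selection code: sign of each feature dimension."""
    return tuple(v >= 0 for v in template)

def hamming(a, b):
    """Number of differing positions between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

def identify(probe, gallery, prefilter_radius=1):
    """Compare the probe only against gallery entries whose coarse
    codes are within a small Hamming radius, then rank the survivors
    by full Euclidean distance."""
    probe_code = binary_code(probe)
    candidates = [
        (subject_id, template) for subject_id, template in gallery.items()
        if hamming(probe_code, binary_code(template)) <= prefilter_radius
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda item: math.dist(probe, item[1]))[0]

# Toy gallery of (hypothetical) enrolled templates:
gallery = {"id_alice": [1.0, 1.0, -1.0], "id_bob": [-1.0, -1.0, 1.0]}
match = identify([0.9, 1.1, -0.8], gallery)
```

The pre-filter trades a small risk of pruning the true match for a large reduction in full comparisons; the project's second objective is precisely to achieve the speed-up without that loss in recognition performance.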