Q&A: Improving biometric systems using AI-based spoofing

Abdarahmane Wone, Software Engineer at Fime

As adoption of biometric authentication increases, so does the need to ensure that biometric systems are resistant to attacks. Presentation attacks, in which an attacker presents a fake biometric sample (a “spoof”) to a verification or identification system, can compromise biometric authentication. Fime is exploring how to transform genuine biometric images into synthetic spoofs and use them to evaluate how robustly biometric systems detect presentation attacks.

Stéphanie Pietri (SP), Communications Director at Fime, speaks to Abdarahmane Wone (AW), Software Engineer, about Fime’s new research paper to discuss the potential impact that digitally synthesized fingerprint spoofs can have on anti-spoofing systems.

SP: What is an anti-spoof test?

AW: Presentation attacks, in which an attacker attempts to trick a biometric system with a fake sample, are one of the key security challenges facing biometric systems. It is critical that the presentation attack detection (PAD) technology in a biometric system is thoroughly tested, as it is what ensures the security of the system. PAD testing is usually done by fabricating presentation attack instruments (PAIs) and performing active spoof attempts to determine whether a biometric system will authenticate a credential that is not genuine. This requires significant skill and time investment from testing labs.

SP: What did Fime do?

AW: To learn more about biometric systems’ ability to resist presentation attacks, Fime conducted research to determine whether digitally synthesized images are as effective as physical spoofs. AI and deep learning were used to transform genuine fingerprint images into spoof-like images similar to those made from the spoof materials commonly used in anti-spoofing tests, simulating the standard testing process.

We used a multi-domain style transfer model trained on data from LivDet, the international fingerprint liveness detection competition. Data from five spoof materials were used: Ecoflex, gelatin, latex, Modasil, and wood glue. The dataset was composed of a training set and a testing set, each containing 2,000 images (1,000 genuine images and 200 images of each spoof material per set). We extracted multiple randomly cropped 224×224 patches from each image and fed them into the model, then assessed the resulting synthetic images with the NIST Fingerprint Image Quality (NFIQ) algorithm.
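The random-cropping step described above can be sketched as follows. This is an illustrative reconstruction only, not Fime’s actual pipeline; the `random_patches` helper and the dummy image are assumptions for the example.

```python
import numpy as np

def random_patches(image, patch_size=224, n_patches=4, rng=None):
    """Randomly crop square patches from a 2-D grayscale fingerprint image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    if h < patch_size or w < patch_size:
        raise ValueError("image is smaller than the patch size")
    patches = []
    for _ in range(n_patches):
        # Pick a random top-left corner that keeps the patch inside the image.
        top = rng.integers(0, h - patch_size + 1)
        left = rng.integers(0, w - patch_size + 1)
        patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches

# Example with a dummy 300x400 grayscale "fingerprint" image.
img = np.zeros((300, 400), dtype=np.uint8)
patches = random_patches(img, n_patches=3)
print([p.shape for p in patches])  # → [(224, 224), (224, 224), (224, 224)]
```

Cropping several patches per image both augments the training data and keeps the input size fixed for the style transfer model.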

This method speeds up the testing process and covers a larger number of spoof materials than would be possible to physically fabricate in a given time.

SP: What was the impact of the digitally synthesized spoofs on the system?

AW: To assess the validity of the digitally synthesized fingerprint spoofs, we used the NIST Fingerprint Image Quality (NFIQ) algorithm, which assigns an image an overall quality score on a scale of 0 to 100 based on how useful its features are for fingerprint recognition. We used this score to determine whether the quality of the physical presentation attack instruments was similar to that of the synthetic presentation attack images.
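A comparison like the one described above could be sketched as follows: compute an NFIQ score per image for each set, then measure how much the two score histograms overlap. The `histogram_overlap` helper and the randomly generated scores are illustrative assumptions; real NFIQ scores would come from the NFIQ tool itself.

```python
import numpy as np

def histogram_overlap(scores_a, scores_b, bins=20, score_range=(0, 100)):
    """Overlap coefficient of two normalized score histograms (1.0 = identical)."""
    ha, _ = np.histogram(scores_a, bins=bins, range=score_range)
    hb, _ = np.histogram(scores_b, bins=bins, range=score_range)
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    # Sum the shared mass in each bin.
    return float(np.minimum(ha, hb).sum())

# Hypothetical NFIQ scores (0-100) for physical spoofs vs. synthetic spoofs.
rng = np.random.default_rng(0)
physical = np.clip(rng.normal(55, 10, 1000), 0, 100)
synthetic = np.clip(rng.normal(54, 11, 1000), 0, 100)
print(histogram_overlap(physical, synthetic))
```

An overlap close to 1.0 would indicate that, by this quality measure, the synthetic spoofs are hard to distinguish from physically fabricated ones.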

For each material, we found that the NFIQ score distributions of the real spoof images and the corresponding synthetic images were similar.

SP: What does this mean for the future of biometrics?

AW: Fime has developed a method that can be used to evaluate biometric systems’ ability to resist fingerprint spoofs. This can help vendors develop their fingerprint recognition products, in particular by training algorithms to resist presentation attacks. Payment schemes can also use the research to implement new testing methodologies for these products. These findings will ultimately help laboratories save cost and time, helping secure products launch more efficiently.
