Researchers have made fake ‘Master’ fingerprints to unlock smartphones

AI can generate fake fingerprints that work as master keys for smartphones that use biometric sensors. According to the researchers who developed the technique, the attack could be launched against individuals with “some chance of success”.

Biometric IDs seem about as close to a perfect identification system as you can get. These IDs are based on the unique physical characteristics of individuals, such as fingerprints, irises, or even the pattern of veins in a hand. In recent years, however, security researchers have shown that it is possible to fool many, if not most, forms of biometric identification.

In most cases, spoofing a biometric ID requires creating a fake face or finger vein pattern that corresponds to an existing individual. However, in a paper posted to arXiv earlier this month, researchers from New York University and Michigan State University explained how they developed a machine learning algorithm that generates fake fingerprints capable of matching a “large number” of real fingerprints stored in databases.

These artificially generated fingerprints, known as DeepMasterPrints, are akin to a master key for a building. To create a master fingerprint, the researchers fed an artificial neural network – a type of computing architecture loosely modeled on the human brain that “learns” from input data – the real fingerprints of more than 6,000 people. Although the researchers were not the first to consider making master fingerprints, they were the first to use a machine learning algorithm to make working master prints.

A “generator” neural network then analyzed these fingerprint images so that it could produce its own. These synthetic fingerprints were fed to a “discriminator” neural network that judged whether they were real or fake. If a print was judged fake, the generator made a small adjustment to the image and tried again. This process was repeated thousands of times until the generator was able to reliably fool the discriminator – an arrangement known as a generative adversarial network, or GAN.
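For readers who want to see the moving parts, the generator-versus-discriminator loop described above can be sketched in a few lines of PyTorch. This is a minimal, generic GAN – the layer sizes, image resolution, and stand-in training data are placeholders, not the researchers’ actual DeepMasterPrints setup:

```python
# Minimal GAN sketch (PyTorch): a generator learns to produce fake
# "fingerprint" images that a discriminator cannot tell apart from real
# ones. All dimensions and data here are illustrative placeholders.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random input vector fed to the generator
IMG_DIM = 16 * 16  # flattened image size (placeholder resolution)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),  # fake image with values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),     # probability that the image is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_images = torch.rand(512, IMG_DIM) * 2 - 1  # stand-in for real prints

for step in range(1000):
    real_batch = real_images[torch.randint(0, 512, (32,))]
    fake_batch = generator(torch.randn(32, LATENT_DIM))

    # Train the discriminator: real images are labeled 1, fakes 0.
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_batch.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: nudge it so the discriminator scores its fakes
    # as real – the "small adjustment and try again" step described above.
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The key idea is the alternation: the discriminator is trained to separate real from fake, and the generator is trained to undo that separation.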

The master prints generated by the researchers specifically target the type of fingerprint sensor found in most modern smartphones. These capacitive fingerprint scanners usually take only partial readings of a fingerprint when a finger is placed on the sensor. This is convenient, because it would be impractical to require users to place their finger on the sensor in exactly the same way every time. But the convenience of partial fingerprint readings comes at the expense of security – a tradeoff a well-trained AI can exploit.
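To see why partial readings widen the attack surface, consider a toy version of the matching logic: a phone enrolls several partial views of one finger and accepts a scan if it matches any of them, so a spoof only has to resemble one stored patch. The feature vectors, similarity scoring, and threshold below are hypothetical stand-ins, not a real minutiae matcher:

```python
# Toy partial-template matching: accept a scan if it is close enough to
# ANY of the enrolled partial views. Features and threshold are made up.

def similarity(a, b):
    """Toy similarity: 1 minus the mean absolute difference of features."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def matches_any(scan, enrolled_patches, threshold=0.9):
    """One match against any stored patch is enough to unlock."""
    return any(similarity(scan, patch) >= threshold
               for patch in enrolled_patches)

# Three partial views enrolled from the same finger (toy feature vectors).
enrolled = [[0.2, 0.8, 0.5, 0.1],
            [0.3, 0.7, 0.6, 0.2],
            [0.1, 0.9, 0.4, 0.1]]
spoof = [0.3, 0.7, 0.6, 0.25]        # resembles only the second patch
print(matches_any(spoof, enrolled))  # True: one partial match suffices
```

Every extra stored patch gives an attacker another target, which is exactly the weakness the researchers’ master prints exploit.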

The researchers used two types of fingerprint data to train their neural networks. One dataset consisted of “rolled” fingerprints: images scanned from prints that had been inked onto paper. The other was generated from the capacitive sensors used to capture a fingerprint digitally. Overall, the system was significantly better at spoofing the capacitive fingerprints than the rolled prints at each of the three security levels.

Each security level is defined by the false match rate (FMR): the probability that the sensor incorrectly identifies a fingerprint as a match. The highest security level allows a false match only 0.01 percent of the time, the middle level has an FMR of 0.1 percent, and the lowest has an FMR of one percent.
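Put another way, the FMR is simply the fraction of impostor attempts that score at or above the sensor’s acceptance threshold. A toy calculation with made-up match scores shows how raising the threshold trades convenience for security:

```python
# FMR as described above: the share of non-matching (impostor) attempts
# that the sensor wrongly accepts. All scores and thresholds are made up.

def false_match_rate(impostor_scores, threshold):
    """Fraction of impostor attempts scored at or above the threshold."""
    accepted = sum(1 for s in impostor_scores if s >= threshold)
    return accepted / len(impostor_scores)

impostor_scores = [0.12, 0.34, 0.07, 0.91, 0.45,
                   0.28, 0.66, 0.03, 0.74, 0.51]

for threshold in (0.5, 0.7, 0.9):
    fmr = false_match_rate(impostor_scores, threshold)
    print(f"threshold={threshold:.1f} -> FMR={fmr:.1%}")
```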

At the lowest of the three security levels, the researchers’ master prints were able to fool the sensor up to 76 percent of the time for the digital prints. Impressive as that is, the researchers note that it is “unlikely” a fingerprint sensor would operate at such a low security level. At the middle security level, where a sensor falsely matches 0.1 percent of the time – which the researchers described as a “realistic security option” – they were able to spoof the digital fingerprints 22 percent of the time. As the researchers point out, this is a “much larger number of (impostor) matches” than the FMR alone would suggest.

At the highest security level, the researchers note, the master prints were “not very good” at deceiving the sensor, fooling it only 1.2 percent of the time.

Although this research does not spell the end of fingerprint ID systems, the researchers say it should prompt the designers of these systems to reconsider the tradeoff between convenience and security.
