
Machine Learning Can Create Fake ‘Master Key’ Fingerprints

Just like any lock can be picked, any biometric scanner can be fooled. Researchers have shown for years that the popular fingerprint sensors used to guard smartphones can sometimes be tricked using a lifted print or a person's digitized fingerprint data. But new findings from computer scientists at New York University's Tandon School of Engineering could raise the stakes significantly. The group has developed machine learning methods for generating fake fingerprints, called DeepMasterPrints, that not only dupe smartphone sensors but can successfully masquerade as prints from numerous different people. Think of it as a skeleton key for fingerprint-protected devices.

The work builds on research into the concept of a “master print” that combines common fingerprint traits. In initial tests last year, NYU researchers explored master prints by manually identifying various features and characteristics that could combine to make a fingerprint that authenticates multiple people. The new work vastly expands the possibilities, though, by developing machine learning models that can churn out master prints.

“Even if a biometric system has a very low false acceptance rate for real fingerprints, they now have to be fine-tuned to take into account synthetic fingerprints, too,” says Philip Bontrager, a PhD candidate at NYU who worked on the research. “Most systems haven’t been hardened against an artificial fingerprint attack, so it’s something on the algorithmic side that people designing sensors have to be aware of now.”

The research capitalizes on the shortcuts that mobile devices take when scanning a user’s fingerprint. The sensors are small enough that they can only “see” part of your finger at any given time. As such, they make some assumptions based on a snippet, which also means that fake fingerprints likely need to satisfy fewer variables to trick them.
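To see why that matters, here is a deliberately toy sketch in Python. Nothing in it comes from the NYU work; every name and number is invented for illustration. It just shows how a template built from a small sensor window contains only a fraction of the finger's features, leaving a spoof far fewer constraints to satisfy.

```python
# Toy sketch, not a real matcher: it only illustrates why a small sensor
# window leaves a spoof with fewer constraints to satisfy. Every name and
# number here is invented for the illustration.
import random

random.seed(0)

# A full fingerprint as ~40 minutiae points on a 400x400 grid.
full_print = {(random.randrange(400), random.randrange(400)) for _ in range(40)}

def crop(minutiae, x0, y0, size=200):
    """Keep only the minutiae a small sensor window actually 'sees'."""
    return {(x, y) for (x, y) in minutiae
            if x0 <= x < x0 + size and y0 <= y < y0 + size}

def toy_match(probe, template, min_hits=12):
    """Accept if enough minutiae coincide. Real matchers tolerate rotation
    and translation; exact overlap keeps the toy readable."""
    return len(probe & template) >= min(min_hits, len(template))

window = crop(full_print, 100, 100)
print(f"full print: {len(full_print)} minutiae; the window sees {len(window)}")

# A spoof only needs the few features inside one enrolled window.
spoof = set(window)
print("spoof accepted:", toy_match(spoof, window))
```

Smartphones typically enroll several overlapping snippets per finger, which multiplies the number of small targets an attacker can aim at.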

The researchers trained neural networks on images of real fingerprints, so the system could begin to output a variety of realistic snippets. Then they used a technique called “evolutionary optimization” to assess what would succeed as a master print—with every characteristic as familiar and convincing as possible—and guide the output of the neural networks.
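In rough outline, that loop looks something like the sketch below. Everything here is a stand-in: `generator` for the trained network, `matcher_accepts` for a commercial matcher operated at a fixed threshold, and a bare-bones mutate-and-select routine for the evolutionary optimizer. It is a sketch of the idea under those assumptions, not the team's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 100

def generator(z):
    # Hypothetical stand-in: a real system would run the trained neural
    # network here and return a fingerprint image.
    return z

def matcher_accepts(image, template):
    # Hypothetical stand-in for a commercial matcher at a fixed acceptance
    # threshold; real matchers compare minutiae, not dot products.
    return float(np.dot(image, template)) > 8.0

# Pretend gallery of enrolled identities (random vectors for illustration).
templates = [rng.standard_normal(LATENT_DIM) for _ in range(100)]

def fitness(z):
    """How many enrolled identities the generated print matches."""
    image = generator(z)
    return sum(matcher_accepts(image, t) for t in templates)

# A bare-bones mutate-and-select evolution over latent vectors. The published
# work used a more capable optimizer, but any black-box search over the
# generator's latent space illustrates the idea.
population = [rng.standard_normal(LATENT_DIM) for _ in range(20)]
for _ in range(50):
    children = [z + 0.3 * rng.standard_normal(LATENT_DIM) for z in population]
    population = sorted(population + children, key=fitness, reverse=True)[:20]

print(f"best print matches {fitness(population[0])} of {len(templates)} identities")
```

The count that drives the fitness function here is the same quantity the team then measured against real matchers, as described next.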

The researchers then tested their synthetic fingerprints against the popular VeriFinger matcher, which is used in a number of consumer and government fingerprint-authentication schemes worldwide, as well as two other commercial matching platforms, to see how many identities each synthetic print could match.


Fingerprint matchers can be set with different levels of security in mind. A top-secret weapons facility would want the lowest possible chance of a false positive. A regular consumer smartphone would want to keep obvious frauds out, but not be so sensitive that it frequently rejects the actual owner. Against a moderately stringent setting, the research team's master prints matched anywhere from roughly two or three percent of the records on the different commercial platforms up to about 20 percent, depending on which prints they tested.
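Concretely, the "security setting" is a threshold on the matcher's similarity score, and moving it trades false accepts against false rejects. The toy numbers below are invented Gaussian score distributions, not measurements from any real matcher:

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented score distributions, purely for illustration.
genuine = rng.normal(70, 10, 10_000)    # right finger presented
impostor = rng.normal(30, 10, 10_000)   # wrong finger presented

for threshold in (40, 50, 60):          # looser (consumer) to stricter (facility)
    far = np.mean(impostor >= threshold)   # false accept rate
    frr = np.mean(genuine < threshold)     # false reject rate
    print(f"threshold {threshold}: FAR {far:.2%}, FRR {frr:.2%}")
```

Raising the threshold makes master prints far less effective, but it also rejects legitimate owners more often, which is why consumer devices sit at looser settings.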

Overall, the master prints got 30 times more matches than the average real fingerprint—even at the highest security settings, where the master prints didn’t perform particularly well. Think of a master print attack, then, like a password dictionary attack, in which hackers don’t need to get it right in one shot, but instead systematically try common combinations to break into an account.
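The arithmetic behind that analogy is simple to sketch. If a single master print matched, say, four percent of accounts, and each print in a small dictionary matched an independent slice (an assumption made purely to keep the math simple; real master prints surely overlap in who they match), coverage would compound like this:

```python
# Illustrative per-print match rate; the article reports roughly two to
# twenty percent depending on the print and platform.
p = 0.04

# Assume (unrealistically) that each print in the dictionary matches an
# independent slice of accounts, so coverage compounds geometrically.
for k in (1, 3, 5):
    coverage = 1 - (1 - p) ** k
    print(f"{k} master print(s) tried: ~{coverage:.1%} of accounts unlocked")
```

Given that phones usually allow a handful of unlock attempts before locking out, a small dictionary of master prints fits comfortably within the attack budget.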

The researchers note that they did not make physical replicas of their machine-learning-generated master prints, such as capacitive printouts, which means they did not attempt to unlock real smartphones. Anil Jain, a biometrics researcher at Michigan State University who did not participate in the project, sees that as a real shortcoming: without that step, it is hard to extrapolate the research to an actual use case. But he says the strength of the work is in the machine learning techniques it developed. "The proposed method works much better than the earlier work," Jain says.

The NYU researchers plan to continue refining their methods. They hope to raise awareness in the biometrics industry about the importance of defending against synthetic readings. They suggest that developers should start testing their devices against synthetic prints as well as real ones to make sure the proprietary systems can spot phonies. And the group notes that it has only begun to scratch the surface of understanding how exactly master prints succeed in tricking scanners. It’s possible that sensors could increase their fidelity or depth of analysis in order to defeat master prints.

“Even as these synthetic measures get better and better, if you’re paying attention to it you should be able to design systems that are at higher and higher resolution and aren’t easily attacked,” Bontrager says. “But it will affect cost and design.”

 

from: https://www.wired.com/story/deepmasterprints-fake-fingerprints-machine-learning/