Captcha Goes Real Time to Fool Machine Learning
The next thing in biometric authentication could be a new version of something already familiar to web users — Captcha, the tried, true and too often maddening method of convincing a site you’re not a bot by typing in distorted letters and numbers on the screen.
Researchers at the Georgia Institute of Technology, working with funding from the U.S. military, have developed what they call Real-Time Captcha, a technique that combines several authentication steps into a single interaction: the user looks into a smartphone’s camera, reads a randomly selected question on the screen and answers it aloud, while the program matches the user’s face and voice against previously recorded samples. In essence, the program asks a question and then watches and listens as the user answers it.
The big advantage of the system, researchers say, is speed. A human can get through the process faster than an artificial intelligence or machine learning program can decode the question and then find or fabricate a matching image or voice recording. In tests with 30 subjects, human users handled the challenge in a second or less, while even the fastest machines took between 6 and 10 seconds, making it fairly easy to determine whether an authentication attempt is coming from a human or a bot.
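That timing gap is what does the work. As a rough, hypothetical sketch of the idea (not the researchers’ implementation; the threshold value is an assumption drawn from the reported test numbers), a server-side check could look something like this:

```python
import time

# Assumed threshold, loosely based on the reported results: humans answered
# in about a second or less, while the fastest machines needed 6-10 seconds.
HUMAN_MAX_SECONDS = 2.5  # generous upper bound for a live person

def issue_challenge():
    """Record when a randomly selected challenge is shown to the user."""
    challenge = "What is 3 + 4?"  # placeholder; real challenges vary per request
    return challenge, time.monotonic()

def plausibly_human(issued_at, answered_at):
    """Reject any response slow enough to suggest automated processing."""
    return (answered_at - issued_at) <= HUMAN_MAX_SECONDS

# Usage sketch: present the challenge, capture the spoken answer on camera,
# then check the elapsed time alongside face and voice matching.
challenge, issued_at = issue_challenge()
answered_at = time.monotonic()  # set when the response actually arrives
if not plausibly_human(issued_at, answered_at):
    print("Response took too long; treat the attempt as a possible bot.")
```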
“If the attacker knows that authentication is based on recognizing a face, they can use an algorithm to synthesize a fake image to impersonate the real user,” said Wenke Lee, a computer science professor and co-director of the Georgia Tech Institute for Information Security and Privacy. “But by presenting a randomly-selected challenge embedded in a Captcha image, we can prevent the attacker from knowing what to expect. The security of our system comes from a challenge that is easy for a human, but difficult for a machine.”
Captcha — a word originating as an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart — grew out of hackers’ efforts to slip through internet filters by substituting symbols for letters, and came into wide use in the early 2000s as a way to confound automated bots trying to pose as people when submitting a web form to, say, buy tickets, post a comment or sign up for a service.
But it has not always worked perfectly, to put it kindly. Sometimes, it doesn’t work at all.
As detailed at The Balance, some Captcha programs differentiate between upper and lower case, but some don’t. Some fonts can lead to confusion among an uppercase “I” (sounds like “eye”), a lowercase “l” (“el”) and the number 1. Zero and the letter O can get mixed up, as can numbers such as 2 and 5, or 6 and 8, when squiggled to extremes. Some programs also deter hackers by having the code expire fairly quickly, so after a few minutes nothing you try will work, and the only option is to give up.
And as machines got better at beating Captcha, some programs only got more complicated. In 2014, Google abandoned its distorted-text Captcha and replaced it with a button that says, “I am not a robot.” When it works, it works well, but many people still don’t like Captcha.
Nevertheless, hackers and their bots are still out there, and smartphone face-recognition systems can be hacked, so more than one layer of authentication is necessary. And Real-Time Captcha, based on what the Georgia Tech researchers have shown, could offer something those systems don’t. Responses are verbal and visual, supplementing image and audio authentication that could otherwise be defeated with recordings stolen from devices or with doctored images, video or voice. It also imposes a short time limit that machines can’t match.
“The attackers now know what to expect with authentication that asks them to smile or blink, so they can produce a blinking model or smiling face in real time relatively easily,” said Erkam Uzun, a Georgia Tech graduate research assistant and lead author of the team’s paper, which was presented at the recent Network and Distributed System Security Symposium (NDSS) 2018 in San Diego. “We are making the challenge harder by sending users unpredictable requests and limiting the response time to rule out machine interaction.”
The challenges could involve unscrambling letters or solving easy math problems, with the user giving the answer before a machine could even recognize the question.
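In code terms, a challenge generator along those lines might look like the following sketch (the word list and question formats are hypothetical; the paper does not publish this code):

```python
import random

def make_challenge():
    """Return a (prompt, expected_answer) pair: either a scrambled short word
    or an easy arithmetic question, chosen at random."""
    if random.random() < 0.5:
        word = random.choice(["APPLE", "RIVER", "CHAIR"])  # hypothetical word list
        scrambled = "".join(random.sample(word, len(word)))
        return f"Unscramble the letters: {scrambled}", word
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {a} + {b}?", str(a + b)

prompt, expected = make_challenge()
print(prompt)  # embedded in the Captcha image; the user answers aloud on camera
```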
“Using face recognition alone for authentication is probably not strong enough,” Lee said. “We want to combine that with Captcha, a proven technology. If you combine the two, that will make face recognition technology much stronger.”
The Georgia Tech research is backed by the Office of Naval Research and the Defense Advanced Research Projects Agency, both of which have an interest in developing mobile device authentication techniques for the military.