Captcha Goes Real Time to Fool Machine Learning

The next thing in biometric authentication could be a new version of something already familiar to web users — Captcha, the tried, true and too often maddening method of convincing a site you’re not a bot by typing in distorted letters and numbers on the screen.
Researchers at the Georgia Institute of Technology, working with funding from the U.S. military, have developed what they call Real-Time Captcha, a technique that folds several authentication steps into a single interaction: the user looks into a smartphone’s camera, spots a randomly chosen question on the screen and answers it aloud, while the program matches the user’s face and voice against previously recorded samples. In essence, the program asks a question and then watches and listens as the user answers it.
The big advantage of the system, researchers say, is speed. A human user can get through the process faster than an artificial intelligence or machine-learning program can decipher the question and, at the same time, find or manipulate an image or voice recording to fake a response. In tests with 30 subjects, human users handled the challenge in a second or less, while even the fastest machines took between 6 and 10 seconds, making it fairly easy to determine whether the authentication attempt is coming from a human or a bot.
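As a rough sketch of how a service might act on that timing gap, the following check combines a response deadline with the challenge answer and biometric similarity scores. The function name, deadline, threshold and score inputs here are illustrative assumptions, not details taken from the Georgia Tech paper:

```python
# Illustrative cutoff only: the reported gap is roughly one second for humans
# versus 6-10 seconds for the fastest machines, so any deadline between those
# bands separates the two.
RESPONSE_DEADLINE_SECONDS = 2.0  # assumed value, not from the paper


def verify_response(issued_at: float,
                    answered_at: float,
                    spoken_answer: str,
                    expected_answer: str,
                    face_score: float,
                    voice_score: float,
                    match_threshold: float = 0.8) -> bool:
    """Accept only if the answer arrived within the deadline, matched the
    randomly chosen challenge, and the face/voice similarity scores (produced
    by whatever recognizer the deployment uses) clear a threshold."""
    if answered_at - issued_at > RESPONSE_DEADLINE_SECONDS:
        return False  # too slow to be a live person answering on the spot
    if spoken_answer.strip().lower() != expected_answer.strip().lower():
        return False  # wrong answer to the random question
    return face_score >= match_threshold and voice_score >= match_threshold
```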
“If the attacker knows that authentication is based on recognizing a face, they can use an algorithm to synthesize a fake image to impersonate the real user,” said Wenke Lee, a computer science professor and co-director of the Georgia Tech Institute for Information Security and Privacy. “But by presenting a randomly-selected challenge embedded in a Captcha image, we can prevent the attacker from knowing what to expect. The security of our system comes from a challenge that is easy for a human, but difficult for a machine.”
Captcha — a word originating as an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart — grew out of hackers’ efforts to slip through internet filters by substituting symbols for letters, and came into wide use in the early 2000s as a way to confound automated bots trying to pose as people when submitting a web form to, say, buy tickets, post a comment or sign up for a service.
But it has not always worked perfectly, to put it kindly. Sometimes, it doesn’t work at all.
As detailed at The Balance, some Captcha programs differentiate between upper and lower case, but some don’t. Some fonts invite confusion among an uppercase “I” (“eye”), a lowercase “l” (“el”) and the number 1. Zero and the letter O can get mixed up, as can numbers such as 2 and 5, or 6 and 8, when squiggled to extremes. Some programs also deter hackers by having the code expire fairly quickly, so after a few minutes, nothing you try will work, and the only option is to give up.
And as machines got better at beating Captcha, some programs only got more complicated. In 2014, Google moved away from distorted text in its reCaptcha service, replacing it with a checkbox that says, “I’m not a robot.” When it works, it works well, but many people really don’t like Captcha.
Nevertheless, hackers and their bots are still out there, and smartphone face-recognition systems can be hacked, so more than one level of authentication is necessary. Based on what the Georgia Tech researchers have shown, Real-Time Captcha could supply that extra level. The verbal and visual responses to its challenges come on top of face and voice matching, which by itself could be defeated with recordings stolen from devices or faked with doctored images, video or voice recordings. And the short response window is one that machines, so far, cannot meet.
“The attackers now know what to expect with authentication that asks them to smile or blink, so they can produce a blinking model or smiling face in real time relatively easily,” said Erkam Uzun, a Georgia Tech graduate research assistant and lead author of the team’s paper, which was presented at the recent Network and Distributed System Security Symposium 2018 in San Diego. “We are making the challenge harder by sending users unpredictable requests and limiting the response time to rule out machine interaction.”
The challenges could involve unscrambling a jumbled set of letters or solving an easy math problem, with the user giving the answer before a machine could even recognize the question.
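For illustration only, here is one way such a challenge generator might look. The word list, prompt wording and even split between puzzle types are assumptions for the sketch, not details from the paper:

```python
import random

# Hypothetical challenge generator in the spirit described above: either a
# scrambled common word or a small arithmetic problem, chosen at random so an
# attacker cannot precompute the answer.
WORDS = ["planet", "garden", "window", "bottle", "rocket"]


def make_challenge() -> tuple[str, str]:
    """Return (prompt, expected_answer). In Real-Time Captcha the prompt would
    be rendered as a distorted Captcha-style image, not sent as plain text."""
    if random.random() < 0.5:
        word = random.choice(WORDS)
        scrambled = "".join(random.sample(word, len(word)))
        return f"Unscramble this word: {scrambled}", word
    a, b = random.randint(2, 9), random.randint(2, 9)
    return f"What is {a} + {b}?", str(a + b)


prompt, answer = make_challenge()
print(prompt)  # e.g. "What is 4 + 7?"
```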
“Using face recognition alone for authentication is probably not strong enough,” Lee said. “We want to combine that with Captcha, a proven technology. If you combine the two, that will make face recognition technology much stronger.”
The Georgia Tech research is backed by the Office of Naval Research and the Defense Advanced Research Projects Agency, both of which have an interest in developing mobile device authentication techniques for the military.