DARPA Launches New Programs to Detect Falsified Media
The organization is using advanced tools to combat deepfakes and other manipulated media.

The Defense Advanced Research Projects Agency (DARPA) is developing innovative technologies and prototypes within its Media Forensics and Semantic Forensics programs to identify and combat falsified media, such as deepfakes.
Deepfakes use deep learning artificial intelligence (AI) to replace the likeness of one person with another in digital media. As deepfakes and synthetic media proliferate, adversaries are leveraging the technology to create fake news and misleading, counterfeit videos.
“Technical solutions are important in this space, but they don’t solve the entirety of the problem,” said Dr. Matt Turek, program manager and acting deputy director for DARPA’s Information Innovation Office, during AFCEA’s FedID Forum. “One of the analogies I use is ‘spam filtering.’ These sorts of tools might help us recognize malicious media and falsified media, but we also need policy approaches… and we need to build a more robust society that can look with appropriate skepticism at media.”
DARPA’s Media Forensics (MediFor) program builds algorithms that detect manipulated images or videos and produce a quantitative measure of integrity, enabling filtering and prioritization of media at scale. The agency is focusing on three types of integrity: digital, physical and semantic.
MediFor uses detection algorithms, which analyze media content to determine if manipulation has occurred, and fusion algorithms, which combine information across multiple detectors to create a unified score for each media asset.
One of the MediFor prototypes targets cloud-based deployment to demonstrate the ability to process and triage media at large scale.
“Each of the analytics produces a quantitative integrity score; that allows us to prioritize media by assets that have low scores. We created fusion algorithms that combined information across all the detection algorithms to come up with a summary integrity score,” Turek said.
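The workflow Turek describes can be sketched in a few lines: each detector emits an integrity score, a fusion step combines them into a summary score, and assets are queued for review starting with the lowest scores. This is a minimal illustration only; the detector names, equal-weight averaging, and score convention (0 = likely manipulated, 1 = likely authentic) are assumptions, not DARPA's actual MediFor analytics.

```python
# Illustrative sketch of score fusion and triage; detector names,
# weights and score conventions are assumed, not DARPA's.

def fuse_scores(detector_scores, weights=None):
    """Combine per-detector integrity scores (0 = likely manipulated,
    1 = likely authentic) into one summary score via a weighted mean."""
    names = list(detector_scores)
    if weights is None:
        weights = {n: 1.0 for n in names}  # equal weighting by default
    total = sum(weights[n] for n in names)
    return sum(detector_scores[n] * weights[n] for n in names) / total

def triage(assets):
    """Order media assets so the lowest-integrity (most suspect) come first."""
    return sorted(assets, key=lambda a: a["integrity"])

if __name__ == "__main__":
    # Hypothetical detectors disagree about one clip; fuse their scores.
    clip = {"splice_detector": 0.2, "gan_fingerprint": 0.4, "metadata_check": 0.9}
    print(round(fuse_scores(clip), 2))  # prints 0.5

    # Prioritize a small batch for analyst review.
    queue = triage([
        {"name": "a.mp4", "integrity": 0.82},
        {"name": "b.mp4", "integrity": 0.15},
        {"name": "c.mp4", "integrity": 0.47},
    ])
    print([a["name"] for a in queue])  # prints ['b.mp4', 'c.mp4', 'a.mp4']
```

In a real system the fusion step would be learned rather than a fixed average, but the prioritization principle is the same: low summary scores surface first.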
DARPA also built deepfake defense models that document how specific people move their heads and facial muscles. The agency integrated this data into a software tool that analyzes videos of “high-profile individuals,” such as the President, and compares the on-screen behaviors with those of the real person.
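The comparison step in that tool can be illustrated with a simple similarity check: a feature vector of head- and facial-movement statistics extracted from a video is compared against a reference profile built from authentic footage of the same person. The feature extraction itself is out of scope here; the vectors, the cosine-similarity measure, and the threshold below are illustrative assumptions, not DARPA's implementation.

```python
# Hedged sketch: compare movement statistics from a video against a
# reference profile of the real person. Feature values and the 0.95
# threshold are illustrative assumptions.
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def matches_profile(video_features, reference_profile, threshold=0.95):
    """Return True when the video's movement statistics are consistent
    with the reference person's known behavioral profile."""
    return cosine_similarity(video_features, reference_profile) >= threshold

# Example: a vector close to the reference passes; an unrelated one fails.
reference = [0.8, 0.1, 0.4]           # hypothetical movement statistics
print(matches_profile([0.8, 0.1, 0.4], reference))  # prints True
print(matches_profile([0.1, 0.9, 0.0], reference))  # prints False
```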
“The MediFor program largely ended last year, but we have a follow-on program called the semantic forensics, or SemaFor, program,” Turek said. “The focus of SemaFor is broader in two key ways than the MediFor program. It’s not just detecting manipulated media, but also looking to do attribution and characterization.”
SemaFor is developing new semantic technologies to automatically analyze multi-modal media assets to ultimately defend against large-scale, automated disinformation attacks. DARPA’s attribution algorithms will infer if digital media originates from a particular organization or individual, while characterization algorithms determine whether media was generated or manipulated for malicious purposes.
DARPA will then use these results to develop explanations for system decisions and prioritize assets for analyst review. Turek said innovative SemaFor technologies could help to identify, understand, and deter adversary disinformation campaigns.
“We’re about a year into this program and we’ve developed a number of interesting algorithms across the detection, attribution and characterization space,” Turek added.
“We work closely with a number of U.S. government transition partners that span organizations within the Department of Defense, within the Intelligence Community, within federal law enforcement and even other agencies like Health and Human Services,” Turek said. “It’s in the best interest of the government for these tools to become widely available.”