DARPA Launches New Programs to Detect Falsified Media
The organization is using advanced tools to combat deepfakes and other manipulated media.
The Defense Advanced Research Projects Agency (DARPA) is developing innovative technologies and prototypes within its Media Forensics (MediFor) and Semantic Forensics (SemaFor) programs to identify and combat falsified media, such as deepfakes.
Deepfakes use deep learning artificial intelligence (AI) to replace the likeness of one person with another in digital media. With the rise of deepfakes and synthetic media, adversaries are leveraging this technology to create fake news and misleading, counterfeit videos.
“Technical solutions are important in this space, but they don’t solve the entirety of the problem,” said Dr. Matt Turek, program manager and acting deputy director for DARPA’s Information Innovation Office, during AFCEA’s FedID Forum. “One of the analogies I use is ‘spam filtering.’ These sorts of tools might help us recognize malicious media and falsified media, but we also need policy approaches… and we need to build a more robust society that can look with appropriate skepticism at media.”
DARPA’s MediFor program builds algorithms that detect manipulated images or videos and produce a quantitative measure of integrity, enabling filtering and prioritization of media at scale. The program focuses on three types of integrity: digital, physical and semantic.
MediFor uses detection algorithms, which analyze media content to determine if manipulation has occurred, and fusion algorithms, which combine information across multiple detectors to create a unified score for each media asset.
One MediFor prototype targets cloud-based deployment, demonstrating the ability to process and triage media at large scale.
“Each of the analytics produces a quantitative integrity score; that allows us to prioritize media by assets that have low scores. We created fusion algorithms that combined information across all the detection algorithms to come up with a summary integrity score,” Turek said.
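In code, that fusion-and-triage step might look something like the minimal sketch below, which assumes each detector emits an integrity score between 0 and 1 and that scores are fused with a simple weighted average. The detector names, weights and fusion rule are illustrative placeholders, not DARPA's published algorithms.

```python
# Illustrative sketch of MediFor-style score fusion and triage.
# Detector names, weights and the weighted-average fusion rule
# are assumptions for illustration, not DARPA's actual methods.

from dataclasses import dataclass, field


@dataclass
class MediaAsset:
    asset_id: str
    # Per-detector integrity scores in [0, 1]; lower means more
    # likely manipulated. Keys are hypothetical detector names.
    detector_scores: dict[str, float] = field(default_factory=dict)


def fuse_scores(asset: MediaAsset, weights: dict[str, float]) -> float:
    """Combine per-detector scores into one summary integrity score
    using a weighted average (an assumed, simple fusion rule)."""
    total = sum(weights.get(name, 1.0) * score
                for name, score in asset.detector_scores.items())
    norm = sum(weights.get(name, 1.0) for name in asset.detector_scores)
    return total / norm if norm else 1.0


def triage(assets: list[MediaAsset],
           weights: dict[str, float]) -> list[tuple[str, float]]:
    """Rank assets lowest-integrity-first so the most suspicious
    media lands at the top of the review queue."""
    scored = [(a.asset_id, fuse_scores(a, weights)) for a in assets]
    return sorted(scored, key=lambda pair: pair[1])


if __name__ == "__main__":
    weights = {"splice_detector": 2.0, "gan_fingerprint": 1.5, "metadata_check": 1.0}
    assets = [
        MediaAsset("clip_001", {"splice_detector": 0.92, "gan_fingerprint": 0.88, "metadata_check": 0.95}),
        MediaAsset("clip_002", {"splice_detector": 0.31, "gan_fingerprint": 0.12, "metadata_check": 0.70}),
    ]
    for asset_id, score in triage(assets, weights):
        print(f"{asset_id}: summary integrity = {score:.2f}")
```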
DARPA also built deepfake defense models that document how individuals move their heads and facial muscles. The agency integrated this data into a software tool that analyzes videos of “high-profile individuals,” such as the president, and compares the on-screen behavior with that of the real person.
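A minimal sketch of that kind of behavioral check appears below, assuming the movement data is summarized as a numeric feature vector and compared to an authentic reference profile by cosine similarity; the feature names and threshold are hypothetical, not the tool's actual method.

```python
# Illustrative sketch of a behavioral deepfake check: summarize head
# and facial-muscle movement as a feature vector, then compare it to
# a reference profile built from verified footage of the real person.
# The features and similarity threshold here are assumed placeholders.

import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def matches_profile(video_features: list[float],
                    reference_profile: list[float],
                    threshold: float = 0.9) -> bool:
    """Flag the video as consistent with the real individual only if
    its movement signature is close to the authentic profile."""
    return cosine_similarity(video_features, reference_profile) >= threshold


# Hypothetical movement signatures, e.g. [head-turn rate, blink rate, brow motion]
reference = [0.42, 0.31, 0.18]   # learned from verified footage
suspect = [0.11, 0.58, 0.44]     # extracted from the video under review

print("consistent with real individual:", matches_profile(suspect, reference))
```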
“The MediFor program largely ended last year, but we have a follow-on program called the semantic forensics, or SemaFor, program,” Turek said. “The focus of SemaFor is broader in two key ways than the MediFor program. It’s not just detecting manipulated media, but also looking to do attribution and characterization.”
SemaFor is developing new semantic technologies that automatically analyze multi-modal media assets to defend against large-scale, automated disinformation attacks. The program's attribution algorithms infer whether digital media originates from a particular organization or individual, while its characterization algorithms determine whether media was generated or manipulated for malicious purposes.
DARPA will then use these results to generate explanations for system decisions and to prioritize assets for analyst review. Turek said innovative SemaFor technologies could help identify, understand and deter adversary disinformation campaigns.
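The sketch below illustrates how detection, attribution and characterization outputs could feed an analyst-facing explanation. It assumes each stage produces a simple probability or score; the source names, values and thresholds are invented for illustration and do not reflect SemaFor's actual models.

```python
# Illustrative sketch of a SemaFor-style pipeline: detection (was the
# asset manipulated?), attribution (which source likely produced it?)
# and characterization (was the intent malicious?), turned into a
# short rationale for analyst review. All values are assumptions.

from dataclasses import dataclass


@dataclass
class Analysis:
    manipulated_prob: float           # from detection algorithms
    source_scores: dict[str, float]   # from attribution algorithms
    malicious_prob: float             # from characterization algorithms


def explain(asset_id: str, result: Analysis) -> str:
    """Turn the three model outputs into a plain-language summary an
    analyst can review alongside the flagged asset."""
    top_source = max(result.source_scores, key=result.source_scores.get)
    lines = [f"Asset {asset_id}:"]
    if result.manipulated_prob > 0.5:
        lines.append(f"- Likely manipulated (p={result.manipulated_prob:.2f})")
    lines.append(f"- Most probable origin: {top_source} "
                 f"(score={result.source_scores[top_source]:.2f})")
    if result.malicious_prob > 0.5:
        lines.append(f"- Characterized as malicious (p={result.malicious_prob:.2f})")
    return "\n".join(lines)


result = Analysis(
    manipulated_prob=0.87,
    source_scores={"generator_A": 0.74, "state_actor_B": 0.21, "unknown": 0.05},
    malicious_prob=0.66,
)
print(explain("video_214", result))
```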
“We’re about a year into this program and we’ve developed a number of interesting algorithms across the detection, attribution and characterization space,” Turek added.
“We work closely with a number of U.S. government transition partners that spans organizations within the Department of Defense, within the Intelligence Community, within federal law enforcement and even other agencies like Health and Human Services,” Turek said. “It’s in the best interest of the government for these tools to become widely available.”