
Cyber Officials Target AI Efficiency, Acquisition Amid New Memos

AI can add efficiencies to the acquisition process as adversaries leverage the technology to advance cyber attacks.

5m read
Deputy Director of Virginia Tech's National Security Institute Laura Freeman, Deputy Assistant Director for FBI Cyber Cynthia Kaiser, Chief AI Officer for Microsoft's National Security Group Paul Rodrigues and NRL's CHACS Deputy Director Lt. Cmdr. Ian Roessle speak at CyberScape Summit in Bethesda, Maryland, April 3, 2025. Photo Credit: Invision Events

Federal agencies see artificial intelligence as a key technology in thwarting cyber adversaries, officials said during the CyberScape Summit in Bethesda, Maryland, Thursday. This comes amid the Office of Management and Budget’s two new AI memos released Friday that outline how government should approach AI acquisition and accelerate AI innovation.

“[AI] makes [adversaries’] lies more believable,” said Cynthia Kaiser, deputy assistant director at FBI Cyber, at the event. “It helps them hide what they’re doing better. So being able to get onto a network and then use AI to map a network, to understand a network better, to move laterally across the network. … What you see are adversary programs that are better than they were a few years ago. It helps beginner hackers get to intermediate.”

The new memos echo President Donald Trump’s Jan. 23 executive order calling for removing barriers to AI.

“The United States is at the forefront of AI development, and agencies must adopt a forward-leaning and pro-innovation approach that takes advantage of this technology to help shape the future of government operations. Agencies are encouraged to harness solutions that bring the best value to taxpayers, increase quality of public services, and enhance government efficiency,” according to the memo on Accelerating Federal Use of AI.

The memo, in part, directs agencies to implement “minimum risk management practices for AI that could have significant impacts when deployed … and to prioritize the use of AI that is safe, secure and resilient.”

Bad actors are increasingly using AI to create more convincing deepfakes and manipulate language with greater sophistication, making it harder to detect misinformation and fueling new threats to public trust and national security.

Lt. Cmdr. Ian Roessle, deputy director of the Naval Research Laboratory’s Center for High Assurance Computer Systems (CHACS), added that AI is also increasing the speed of cyberattacks and adversaries’ movements.

“Adversaries are able to probe our defenses quicker, and in some ways, they can develop exploits faster as a result as well,” Roessle said. “We need to be using the same technology to our advantage, and that’s key. … Our true advantage here is our ability to know our own terrain because we’re holding that key terrain right now.”

Paul Rodrigues, chief AI officer at Microsoft’s National Security Group, anticipates a rise in AI agents: systems that let an adversary task a large language model to carry out actions against a data source.

“The next 18 months are going to be extremely fast moving in the area of AI agents. … If [adversaries] can create a large language model or leverage an open-source model that has no guardrails, provide it some tasking to stop at nothing until it completes that task, that’s a dangerous thing. Something like, ‘here’s all the information from the Internet, take down the New York Stock Exchange.’ That is something an adversary might do with a large language model and an agent framework.”

AI Drives Efficiency

AI can help government develop new tools to move at the speed of threats while balancing policy requirements.

“The efficiency aspect is not just about the ways in which AI can help us with our admin processes, [but also] how we can do things in this automated way that make us safer,” said Kaiser.

Microsoft views the future of AI integration into applications as “multi-model,” said Rodrigues.

“While there are millions of models out there … there’s an initial evaluation of those models for security, but the models are coming up continually. New versions of the same models will come out, and those may allow for efficiency or latency improvements, for reduction of cost, and we’re going to want to integrate them, and that requires a continual risk assessment on the models,” said Rodrigues.

The Naval Research Laboratory is experimenting with homomorphic encryption, a form of encryption that allows data to be processed and analyzed while still encrypted, without exposing the raw information. This enables secure computation on sensitive data, while preserving privacy and confidentiality.

“In theory, you can operate on encrypted data directly and maybe leverage more commodity resources in a less trustworthy environment,” Roessle said.
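The idea Roessle describes, computing on data without ever decrypting it, can be illustrated with a toy Paillier cryptosystem, a classic additively homomorphic scheme. This is a minimal sketch for illustration only (tiny key sizes, no hardening), not a representation of NRL's actual work:

```python
# Toy Paillier cryptosystem: multiplying two ciphertexts yields a
# ciphertext of the SUM of the plaintexts, so an untrusted machine can
# add encrypted values without seeing them. Demo-sized keys only.
import math
import random

def keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                       # standard simple generator choice
    n2 = n * n
    # mu = L(g^lam mod n^2)^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:      # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen(10007, 10009)    # toy primes, not secure
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
# The server multiplies ciphertexts; the owner decrypts the sum: 100
assert decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)) == 100
```

The key property is in the last two lines: the party holding only `c1` and `c2` (the "less trustworthy environment" in Roessle's framing) never learns 42 or 58, yet produces a valid encryption of their sum.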

The Defense Department released new guidance in December for testing and evaluation (T&E). DOD also has a responsible AI strategy that outlines five ethical principles for the tech. It’s also critical to ensure policy doesn’t create roadblocks or unintended consequences.

“That’s an important challenge for us to be thinking about right now as we write these high-level statements of ‘must be fair, must be governable, must be traceable.’ How does that trickle all the way down? Way down into practice and how do we make sure that we don’t come up with our own unintended consequences through the policies that we set for ourselves?” said Laura Freeman, deputy director at Virginia Tech’s National Security Institute.

Freeman highlighted three areas for an AI-driven future: business processes, acquisition efficiency and operating as a joint force.

“Acquisition efficiency is a big bottleneck right now,” Freeman said. “And then [DOD and the intelligence community] are operating as a joint force, and so that has many information streams from different departments, connecting the Defense Department with the intelligence community and engineering right from the start so that we are building situational awareness.”

AI and Security by Design

Agencies also face new failure modes with AI. This is where resilience will help organizations overcome emerging challenges.

“Another very relevant piece is the cybersecurity and the attack surface that AI brings. We have a robust history of doing cyber testing on our systems that we’re building on, but that didn’t necessarily account for things like data poisoning, model inversion. Building up new systematic methods for how we deal with that increased attack surface is an important part,” Freeman said.
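The data-poisoning failure mode Freeman names can be shown with a toy example: a tiny nearest-neighbor classifier (a hypothetical stand-in, not any system discussed at the summit) whose verdict on a clean input flips after a single mislabeled point is slipped into its training data:

```python
# Toy data-poisoning demo: one mislabeled training point flips a
# 1-nearest-neighbor classifier's decision on a clean test input.
def classify(x, train):
    # 1-NN: predict the label of the closest training point
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda pair: dist2(x, pair[0]))[1]

clean = [((0, 0), "benign"), ((1, 0), "benign"), ((0, 1), "benign"),
         ((5, 5), "malicious"), ((6, 5), "malicious")]
assert classify((1, 1), clean) == "benign"

# Attacker injects a single point with a flipped label near the target
poisoned = clean + [((1, 1), "malicious")]
assert classify((1, 1), poisoned) == "malicious"
```

Traditional cyber testing checks the software around the model; as Freeman notes, it does not probe whether the training data itself has been tampered with, which is why this attack surface needs new systematic methods.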

“It’s important to emphasize it’s not just operational resilience,” Roessle added. “[That’s] a lot of what the secure-by-design efforts are all about.”

CISA’s Secure by Design initiative urges tech developers to prioritize security as a core feature from the start, rather than treating it as an afterthought, by building products that are resilient to exploitation. The initiative promotes accountability, transparency and collaboration across industry and government to reduce systemic cybersecurity risks.

Kaiser said sharing lessons learned will be key to combat new failure modes and prepare for the future national security landscape.

“Being able to harness that AI information that we’ll be able to get through the various technologies out there, then being able to share that back out, is really going to be a game-changer for government’s ability to help facilitate network defense and your ability to defend your networks,” Kaiser said.
