
AI Revolutionizes Cybersecurity by Doing What Humans Cannot

Leaders from NSA, GAO and industry say that AI can augment the workforce, but the work must be auditable and explainable. 

Tyson Brooks, technical director at the National Security Agency’s Artificial Intelligence Security Center, speaks at the GovCIO Media & Research AI Summit in Tysons, Va. on Nov. 7. Photo Credit: Capitol Events Photography

Cybersecurity officials highlighted ways that artificial intelligence can boost government defenses at the inaugural GovCIO Media & Research AI Summit in Tysons Corner, Virginia, on Thursday. AI augments the cyber workforce by doing things people cannot, according to Kevin Walsh, a director in the Government Accountability Office’s Information Technology and Cybersecurity team.

“Cyber is perfect for AI because of that, because of the quantity of great data that we have,” said Walsh. “The volume is unspeakable.”

Organizations need to be mindful of AI implementation and make the correct decisions about AI, according to Tyson Brooks, technical director at the National Security Agency’s Artificial Intelligence Security Center. He emphasized the importance of human-AI collaboration, noting that AI enhances threat detection and response but cannot replace human judgment.

“Just because you add an AI system or component, that’s not going to make it better,” said Brooks. “We’re going to need to have that human in the loop.”

Agencies need to be able to audit and explain AI systems, Brooks added.

“At the AI Security Center, we get down to the intricate levels of resiliency and trustworthiness, down to the mathematical layer. We want to understand the mathematics behind [any new AI system],” said Brooks. “Can you explain your mathematical equations on some of these algorithms that you’re proposing that the government used to secure their systems?”

The rapid pace of evolution remains a challenge to the explainability of systems, Walsh warned.

“I love the explainability. I love people actually being able to know,” said Walsh. “There are, however, some AI systems that that’s becoming increasingly difficult to do.”

AI’s efficacy and explainability rely heavily on the quality of the data used to train it. Agencies need to secure that data from manipulation and poisoning to ensure accurate and reliable AI outputs, Brooks said.

“If your data is corrupt, and your data’s been tainted, then the decision that you’re going to make, that the AI system outputs, will be wrong,” Brooks said. “Those are the types of things that you also have to take into consideration…where you’re making life and death types of decisions on a daily basis, [you have to know] that your data is secure.”

Bad data threatens AI systems, Zscaler US Government Solutions Public Sector CTO Hansang Bae said, and cyber adversaries are learning how to use that to their advantage.

“There is a new horizon of cyber, which is data poisoning,” said Bae. “If you think that China’s not thinking about how to poison the data, and it’s so easy to do, just look at any open source or social media that’s awash in garbage information.”

The Artificial Intelligence Security Center provides guidance on how to keep data secure and safe for AI systems, Brooks said. Agencies, industry and academia are working together to find the best answers to these cybersecurity and AI questions, he added.

“We have to have this collaboration piece and understand the full spectrum of how the data can actually be manipulated and can be poisoned. And then we put the greatest minds who are working on this together to understand and provide the solutions and the guidance for keeping this type of data secure as well, too,” said Brooks.

AI, Walsh said, can give agencies and industry an edge on cyber adversaries by making cybersecurity personnel more efficient.

“In cyber, we’ve been caught up in the traditional cat and mouse game of good guys and bad guys always chasing the other and AI might give us more of an edge just because of the resources that we can bring to bear,” Walsh said. “It’s going to be hard for Russia and Iran, maybe not China, but for those hackers, to bring the kind of resources together that could attack at 3:00 a.m.”

Agencies rely on AI to protect systems, but resilience is crucial. Resilient systems can withstand attacks and recover quickly, minimizing damage, Bae added.

“We’re at a point where we can pivot and say cyber resilience,” said Bae. “I want the resilience part to kick in, because despite what you’re doing, [the AI system will say] ‘I will protect you. And then I will give you a heads up that you’re headed towards a crash. Your runway is running out, so I’ll give you warnings for that.’”

The future of cybersecurity and AI requires human oversight to provide the decision-making and nuance that is often lost, Brooks said. In national security environments, that oversight is paramount, he added.

“That human piece will have to be there, because that human logic, the common sense will have to play some kind of component before that final decision is actually made, especially when we’re talking about loss of life… scenarios,” Brooks said.

The workforce operates in tandem with AI, Bae said, to make sure small issues do not become major problems.

“Cyber AI is here to help the operator focus on the things that matter, even if it seemingly is small, because we know it’s going to sprout into an oak tree,” Bae said.
