
NIST to Release New AI Cybersecurity Guidance as Federal Use Expands


NIST plans to release AI cybersecurity guidance within the year to support safe adoption as federal agencies expand use cases.

NIST AI and Cybersecurity Researcher Martin Stanley shared how agencies can manage unique risks introduced by AI applications at GovCIO Media & Research's AI Building Blocks Workshop in Washington, D.C., on June 17, 2025. Photo Credit: Invision Events

Senior AI leaders are turning to security and efficiency as federal agencies continue to leverage AI in operations. Martin Stanley, AI and cybersecurity researcher at the National Institute of Standards and Technology (NIST), said Tuesday during GovCIO Media & Research’s AI Building Blocks workshop in Washington, D.C., that the agency will release a new control overlay for the Special Publication 800-53 series “over the next six months to a year.”

“This will focus on [identifying] what the unique risks to AI systems are that cybersecurity can [help] with,” said Stanley. “Cybersecurity can contribute in a big way… [and identify if] models are being fooled or if training data is being stolen, or the models themselves are being stolen.”

Additionally, Stanley said a longer-term project will be a cybersecurity framework profile for AI. In April, NIST hosted a workshop to discuss the overlay and develop a Cyber AI Profile under the NIST Cybersecurity Framework. From that meeting, NIST will develop a draft of the profile and issue a request for information. He told the audience contributors “should expect at least two more opportunities to contribute at workshops and through public comment” and should anticipate release in “nine months to a year.”

Advancing Federal AI Use Cases

IT officials are increasingly prioritizing security as federal agencies continue to integrate new use cases into daily operations. Senior officials from the General Services Administration (GSA) and the Naval Research Laboratory (NRL) outlined during the workshop how practical experimentation, performance measurement and cross-agency collaboration are driving AI adoption in government.

GSA Chief Data Scientist and Chief AI Officer Zach Whitman said during the event that being an early adopter of the technology helped GSA develop a strong foundation for AI adoption across the federal government.

“A lot of [agencies] followed afterwards, and there was a lot of risk associated with that,” said Whitman. “A lot of work went into making sure that … could be mitigated, so we would avoid some of the fears [of AI].”

GSAi, the agency’s platform concept, combines a chatbot, an API platform and an administration console to monitor safety and evaluate performance. Whitman said GSA recognizes that this type of tool is needed across government and sees GSA as a top leader in AI adoption.

“We are in the business of building shared platforms, so we’re exploring what that would look like for other agencies so they could immediately bootstrap into a platform … and do so in a very safe and transparent way,” said Whitman.

Meanwhile, NRL is developing AI to help biologists, chemists and space scientists improve mission delivery, explained the lab’s AI Center Director David Aha. Working across NRL’s 17 divisions, Aha and his team are increasing the use of quadruped robots to assist with naval ship maintenance. Aha said the team has improved the robots’ capabilities, including natural language inputs that now support spatial directions and better navigation.

“[It can] select tools autonomously from a pegboard, perform handoffs of these tools to humans, and that’s actually a very difficult task,” said Aha.

Bolstering Safe and Effective AI Use

IT leaders are involving the workforce and creating a strong AI culture to drive AI adoption as AI tools become more prominent in federal systems, according to Microsoft’s Federal Civilian Chief AI Officer Wole Moses. He highlighted the National Institutes of Health (NIH)’s AI community of practice, where roughly 350 people across multidisciplinary roles within technology met to learn and talk about AI.

“They called it a safe space for people of all levels to learn about AI capabilities, to talk about risks, to talk about concerns,” Moses said. “Having a culture starting from leadership that recognizes AI, recognizes and supports AI work … [can provide] encouragement for others … and stimulate development of AI use cases.”
