
NIST Says Risk Management is Central to Generative AI Adoption


Agencies must prioritize a “risk-aware culture” and use the AI Risk Management Framework to deploy the tech effectively amid growing cyberattacks.

Entrance of the Gaithersburg Campus of National Institute of Standards and Technology (NIST), a Physical sciences lab complex under U.S. Department of Commerce, January 30, 2021. Photo Credit: grandbrothers/Shutterstock.com

As generative artificial intelligence rapidly permeates government and industry, the National Institute of Standards and Technology (NIST) is urging agencies to treat risk management as the central pillar of adoption, NIST AI and Cybersecurity Researcher Martin Stanley said during a recent panel.

“This is a tremendous opportunity for the various federal agencies that are deploying generative AI to create incredible efficiencies and provide greater services,” Stanley said during the Cloud Security Alliance’s AI and Data Security Cybersymposium last month.

Agencies, Stanley said, need to establish a “risk-aware culture” to responsibly deploy generative AI. NIST’s AI Risk Management Framework (RMF), released in 2023, emphasizes the importance of trust and innovation. The framework is built around four functions — govern, map, measure and manage — that together provide a structured way to identify risks, quantify impacts and make informed decisions. Stanley stressed that these functions are adaptable, allowing organizations to tailor them to their specific missions and contexts.
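To make the four functions concrete, here is a minimal sketch of how an organization might organize a risk register around them. This is purely illustrative: the function names come from the RMF, but the helper names and the example risk entries are hypothetical, not drawn from NIST guidance.

```python
# Hypothetical sketch: a risk register keyed by the NIST AI RMF's four
# functions (govern, map, measure, manage). Entries are illustrative only.

RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

def new_register():
    """Create an empty risk register with one list per RMF function."""
    return {fn: [] for fn in RMF_FUNCTIONS}

def log_action(register, function, description):
    """Record a risk-management action under one of the four functions."""
    if function not in RMF_FUNCTIONS:
        raise ValueError(f"unknown RMF function: {function}")
    register[function].append(description)

register = new_register()
log_action(register, "map", "Identify data-leakage risk in a chatbot deployment")
log_action(register, "measure", "Estimate likelihood and impact of leakage")
log_action(register, "manage", "Apply output filtering and monitor for exfiltration")
log_action(register, "govern", "Assign a risk owner and a review cadence")

for fn in RMF_FUNCTIONS:
    print(fn, "->", register[fn])
```

The point of the structure is the one Stanley makes: the functions are a scaffold, and each organization fills in its own context-specific risks and mitigations.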

“We’ve gotten tons of comments back from the community, and we have a continued engagement around this ecosystem and how it’s evolving, and how the traditional software lifecycle really represents how generative AI systems are being used,” said Stanley. “That’s where our flexible and tailorable and adaptable guidance is at its best.”

Autio Strategies Founder and CEO Chloe Autio explained that the RMF was further refined in 2024 through the publication of a generative AI profile, NIST AI 600-1. The profile enumerates 12 risks that are either unique to or exacerbated by generative AI, along with suggested mitigation actions.

“Generative AI risks can exacerbate and also be unique to AI risks,” she said. “And this is all consistent with the assumption or presumption that AI risks are different from traditional software risks.”

Inputs to generative systems are broader and more distributed than those of traditional AI, creating new vulnerabilities across the lifecycle, she added.

The AI profile went through an extensive public engagement period to identify the unique risks related to generative AI. Autio noted that the risks were carefully differentiated to avoid overlap, though many are interrelated. Among the most pressing are security and privacy risks, which Stanley described as the “number one vector” of concern.

Stanley warned that adversaries are already exploiting generative AI to accelerate cyberattacks. Data leakage, model inversion, and social engineering are occurring at machine speed, reducing attack timelines from days to seconds.

“We really need to understand what those risks are and make sure we have actions in place to mitigate them,” he said.

NIST’s recent workshop on AI and cybersecurity drew more than 5,000 participants, underscoring the urgency of the issue, he added.

Autio said that generative AI can also strengthen cybersecurity through advanced threat detection and automation of routine tasks, but cautioned that organizations must implement governance measures to manage third‑party risks. Because generative AI systems rely on a complex ecosystem of data providers and model developers, policies must address intellectual property, content provenance and monitoring for threats such as data poisoning or malware, she said.

“This broadened ecosystem … creates a lot more risk because there are many more third party actors, data providers and model providers that are doing different things as part of that process or that AI system,” Autio said. “Each carries their own inherent risks.”

Stanley underscored that risk management actions must be prioritized based on context of use. A use case may describe what a system does, but context considers the domain in which it operates.

“Image recognition in one context is going to be different than in another,” he explained. “Organizations must select mitigation strategies that align with their mission, stakeholder needs and risk tolerance.”

Stanley added that NIST will publish a workshop report on advancing a cybersecurity framework profile for AI, designed to integrate AI risk management into existing enterprise programs. The agency also plans to augment its widely used Special Publication 800‑53 cybersecurity control catalog with an overlay addressing AI vulnerabilities, he said. These resources will help organizations update protections without reinventing their risk management structures.

“This technology is going to be in our environments,” Stanley said. “As security practitioners, we don’t have the opportunity to tell people not to use it, because it’s going to be essential to be competitive in the marketplace.”

If organizations fail to embrace it responsibly, employees may turn to shadow IT solutions, creating further risks, he added.

Autio added that the generative AI profile is designed to be voluntary, adaptable and flexible, serving as a tool for organizations to identify and mitigate risks while harnessing innovation. She added that organizations need to understand risk to implement generative AI and that the NIST document can lead them there.

“There are a lot of different ways that [NIST’s RMF] can be helpful, whether it’s just needing to kind of think about what risks might be out there that you may not be considering, or actually getting down to more granular activities through the suggested actions that are mapped specifically to those risks,” she said.
