Skip to Main Content

AI Risk Framework Can Help Mitigate Machine-Learning Threats

New NIST guidance highlights key practices organizations should follow to significantly reduce cyber threats during and after the deployment of AI systems.

7m read
Written by:
AI Risk Framework Can Help Mitigate Machine-Learning Threats
Photo Credit: ozrimoz/Shutterstock

Cyber criminals can use artificial intelligence to disrupt systems, but federal agencies can stay ahead of those threats by operationalizing the NIST AI Risk Management Framework (AI RMF) in mapping, measuring and managing AI security risks, according to a new NIST report.

The guidance develops a taxonomy of concepts and outlines the “adversarial machine learning” (AML) landscape. The publication builds on NIST’s work on responsible and ethical integration of AI.

Chief AI Advisor and NIST Associate Director for Emerging Technologies Elham Tabassi told GovCIO Media & Reseach that the publication considers two general classes of AI technology: predictive and generative models. It also recognizes major classes of attack, according to the learning method and stage of the learning process when the attack is mounted, the attacker objectives, capabilities and knowledge.

“While the publication focuses on applied AI and practical guidance, it also provides clear information about the limitations of many mitigation techniques and provides relevant theoretical references for further reading. Altogether, this results in a realistic and useful resource for understanding and managing risks in AI across different applications of the technology,” Tabassi said.

The NIST report outlines four types of attacks: evasion, poisoning, privacy and abuse.

The publication also provides a description of how each attack impacts the behavior of AI systems. For example, evasion attacks typically happen after an AI system has been deployed to adjust an input to change how the system responds to it, while poisoning attacks usually take place in the training phase by introducing corrupted data in the training dataset of the AI model.

“Poisoning attacks are very powerful and can cause either an availability violation or an integrity violation. Availability poisoning attacks cause indiscriminate degradation of the machine learning model on all samples, while targeted and backdoor poisoning attacks are stealthier and induce integrity violations on a small set of target samples,” said Tabassi.

Privacy attacks, which occur during deployment, are attempts to learn sensitive information about the model or the data it was trained on to misuse it. Abuse attacks are specific to generative AI and broadly refers to when an attacker repurposes a system’s intended use to achieve their own objectives by way of indirect prompt injection.

The publication’s authors cited concerns around the relative ease of the deployment of the attacks.

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” co-author Alina Oprea, a professor at Northeastern University, told NIST. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”

NIST also provides information specifically to software developers and their organizations on understanding the risks associated with machine learning, including:

Software supply chain challenges related to detecting trojans in own or third-party software components.

Exposure of new attack surfaces to corporate information assets through new system architectures imposed by the introduction of large language models (LLMs) into the enterprise.

“For example, the retrieval augmented generation (RAG), which is a powerful technique for adapting a general-purpose LLM to the corporate business domain, exposes a new attack surface to the information contained in the vector database with confidential corporate information,” said Tabassi. “Software developers need to be aware of this and consider appropriate cybersecurity mechanisms for mitigating these risks.”

In addition, the report outlines the known limitations of the mitigation techniques to ensure organizations deploying the technology do not fall into a false sense of security and should instead resort to continuous monitoring as the NIST AI RMF guides. This guidance supports implementation of the NIST AI RMF, and organizations may use it in the context of and in conjunction with the NIST AI RMF.

The publication’s authors said that AI has evolved, but agencies should mitigate the potential for threats. The paper lays out, according to the authors, some guidance on disrupting AML.

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”

Woman typing at computer

Stay in the know

Subscribe now to receive our curated newsletters

Subscribe
Related Content
Woman typing at computer

Stay in the Know

Subscribe now to receive our newsletters.

Subscribe