What New Guidance Says For Securing Agentic AI Systems

A multinational report outlines cybersecurity threats and best practices for deploying autonomous AI agents securely.

 


A new multinational document outlined steps for organizations to develop and deploy agentic AI tools and systems.

Careful Adoption of Agentic AI Services is co-authored by the Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA), the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The document highlighted key cybersecurity challenges and risks associated with introducing agentic AI into IT environments, as well as best practices for securing agentic AI systems. The agencies strongly recommend organizations align agentic AI risks and mitigation strategies with their existing security model and risk posture.

Security Concerns

Agentic AI operates by integrating with software systems to create autonomous agents “that can independently reason, plan and take actions without requiring human intervention.” This autonomy and interactivity with other systems creates a range of security risks:

  • Privilege risk: Privileges assigned to agents “directly determine the level of risk they can introduce.” Poor privilege management exposes organizations to privilege compromise, scope creep (where an agent gains more access rights than necessary for its function), identity spoofing and agent impersonation; a minimal illustration of scoping agent privileges follows this list.
  • Design and configuration risks: Unvetted third-party components can carry excessive or unintended privileges, allowing a malicious actor to exploit this situation to execute unauthorized actions.
  • Behavior risks: Agents may act unexpectedly, cause harm or become exploitable. This includes goal misalignment and unintended behavior, where an agent pursues its goals in unanticipated ways, such as developing capabilities that were never explicitly designed or programmed.
  • Accountability risks: Agentic system architecture can obscure the reasoning behind an action, making accountability hard to trace. The models underpinning agentic systems can also make mistakes and are prone to “hallucinations,” producing plausible-sounding responses when their internal knowledge is insufficient.
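
The guidance itself is descriptive rather than code-level, but the privilege risk above can be pictured with a minimal sketch, assuming a hypothetical orchestrator that checks each agent's declared tool allowlist before dispatching any action. The names (AgentProfile, dispatch, the sample tools) are illustrative and do not come from the report.

```python
# Minimal least-privilege sketch: every agent carries an explicit tool allowlist,
# and the orchestrator refuses any call outside that scope (illustrative only).
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    name: str
    allowed_tools: frozenset = field(default_factory=frozenset)

def dispatch(agent: AgentProfile, tool: str, payload: dict) -> str:
    """Run a tool on behalf of an agent only if it is explicitly permitted."""
    if tool not in agent.allowed_tools:
        # Scope creep or impersonation surfaces here as a hard failure,
        # rather than an agent quietly gaining access it was never meant to have.
        raise PermissionError(f"{agent.name} is not authorized to call {tool!r}")
    return f"executed {tool} with {payload}"

# Example: a summarization agent should not be able to send email.
summarizer = AgentProfile("summarizer", frozenset({"read_document", "write_summary"}))
print(dispatch(summarizer, "read_document", {"doc_id": 42}))    # allowed
# dispatch(summarizer, "send_email", {"to": "a@example.com"})   # raises PermissionError
```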

Best Practices

The report recommends proactive steps when designing, developing, deploying and operating agentic AI systems. These include:

  • Designing secure agents: Careful consideration should be given to system architecture to include security controls and tooling. This includes controlling the context provided to agents by using a clear instruction hierarchy to ensure agent behavior meets intended priorities and limits.
  • Identity management: The report recommends that developers give each agent a distinct, cryptographically anchored identity with its own unique keys and certificates (see the sketch after this list).
  • Defense in depth: Because components of agentic AI systems can fail and potentially compromise the broader system, a defense-in-depth strategy helps avoid single points of failure.
  • Comprehensive testing: Testing strategies can improve an agent’s ability to identify and respond to undesirable behaviors by exposing it to instances of security abuse during supervised training.
  • Appropriate evaluation: Because AI agents operate autonomously in complex environments, they require more thorough evaluations than large language models. The report recommends evaluating systems across different levels of autonomy to understand performance and risk under changing environmental conditions.
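
The report stops at recommendations, but the identity-management practice above can be illustrated with a minimal sketch, assuming each agent is issued its own Ed25519 key pair and signs every request it emits so the orchestrator can verify which agent asked for an operation. It uses the third-party cryptography package; the class and function names are illustrative, and a real deployment would also handle certificates, key rotation and revocation.

```python
# Sketch of per-agent cryptographic identity: each agent holds its own key pair
# and signs its requests; the orchestrator verifies before acting (illustrative only).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

class AgentIdentity:
    def __init__(self, name: str):
        self.name = name
        self._private_key = Ed25519PrivateKey.generate()   # unique per agent
        self.public_key = self._private_key.public_key()   # registered with the orchestrator

    def sign_request(self, request: bytes) -> bytes:
        return self._private_key.sign(request)

def verify_request(registry: dict, agent_name: str, request: bytes, signature: bytes) -> bool:
    """Return True only if the named agent's registered key signed this exact request."""
    try:
        registry[agent_name].verify(signature, request)
        return True
    except (KeyError, InvalidSignature):
        return False

planner = AgentIdentity("planner")
registry = {planner.name: planner.public_key}

req = b"schedule:maintenance-window"
sig = planner.sign_request(req)
print(verify_request(registry, "planner", req, sig))          # True
print(verify_request(registry, "planner", b"tampered", sig))  # False: spoofing or tampering rejected
```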

Governance is Key

The report notes that governance policies are key to managing autonomous agents and to defining legal accountability and risk ownership for AI systems.

Maintaining strong guardrails through governance is important for organizations, Kevin Walsh, director of information technology and cybersecurity at the Government Accountability Office, told GovCIO Media & Research. He explained that as adversaries use agentic AI to attack networks, organizations will also have to use such tools.

“If the bad guys are going to use [agentic AI], the good guys are really going to be forced to respond at that same speed and use the same tools. So figuring out what guardrails we want and how they are going to come into play is going to be critical,” he said.

 
