Federal Agencies Navigate Tradeoffs Between AI Speed, Security
Agencies are deploying AI to drive mission outcomes, while managing challenges around security, data protection and oversight.
Federal agencies are moving quickly to adopt AI, but leaders say the race to innovate is tempered by complex security requirements, data sensitivity and governance constraints.
The U.S. Department of Agriculture (USDA) is using AI to enhance predictive analytics and accelerate response efforts during agricultural crises, Rudolf Rojas, IT program manager and security lead at USDA, said at GovCIO Media & Research’s CyberScape: The Federal Cybersecurity Summit last week in Arlington, Virginia.
During last year’s avian flu outbreak, which resulted in the loss of millions of birds and drove up poultry prices nationwide, USDA officials deployed AI-enabled systems to help contain the spread of the virus. Rojas said the agency used automated data collection and analytics to identify and monitor high-risk locations.
The agency leveraged NASA’s National Agriculture Imagery Program (NAIP) data to map poultry farms and support predictive modeling. Using convolutional neural networks, the agency analyzed geographic data to identify where outbreaks were most likely to occur and where inspectors should be deployed.
“We’re using automated information systems and AI to create a data set of concentrated animal feeding operations. Then, inspectors will go to these farms and inspect or provide information [and] the latest technologies to deal with this. It was [a] pretty successful operation that utilized the data, created the data stores, [and] analysts were able to provide the data to farms and essentially quell the avian flu and outbreaks,” Rojas said.
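The workflow Rojas describes — score candidate sites with a model trained on imagery, then send inspectors to the highest-risk ones — can be sketched roughly as follows. This is a minimal illustration, not USDA’s system: the convolutional neural network itself is omitted, and the `risk` field stands in for a trained model’s output for each mapped feeding operation.

```python
from dataclasses import dataclass

@dataclass
class Site:
    """One mapped concentrated animal feeding operation."""
    name: str
    lat: float
    lon: float
    risk: float  # hypothetical model output in [0, 1]; a trained CNN would supply this

def prioritize(sites: list[Site], capacity: int) -> list[Site]:
    """Return the top-`capacity` sites by predicted outbreak risk,
    i.e., where inspectors should be deployed first."""
    return sorted(sites, key=lambda s: s.risk, reverse=True)[:capacity]

# Illustrative data only.
sites = [
    Site("Farm A", 41.2, -95.9, 0.91),
    Site("Farm B", 40.8, -96.7, 0.35),
    Site("Farm C", 41.5, -93.6, 0.78),
]

for s in prioritize(sites, capacity=2):
    print(s.name, s.risk)
```

In practice the scores would come from model inference over imagery tiles, and deployment would weigh travel distance and inspector availability alongside raw risk.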
Balancing Innovation with Regulation
Not all agencies can move at the same pace, particularly those operating in highly sensitive or regulated environments.
The FBI has a unique set of challenges when it comes to integrating AI and is working to balance speed and innovation with regulatory constraints and parameters, according to Trish Janke, acting deputy assistant director in the FBI’s IT and database division.
“We want to get up to speed with the private sector and our other federal partners. We want to increase efficiency, and we want to be more effective as a workforce. But we also have to balance those challenges and roadblocks in the way with understanding that we do have different scenarios and different rules and regulations to follow by,” Janke said at the event.
The FBI is required to abide by the Constitution, like all federal agencies, but it also follows guidelines from the attorney general and additional self-imposed restrictions related to AI use and development. The guidelines, which exist primarily to protect civil liberties, create roadblocks that other government agencies don’t experience, Janke said.
“As much as we would love to say … an AI solution tomorrow, we can do it, we can roll it out … We have the standard governance and cybersecurity that we have to adhere to,” she said. “On top of that, making sure that we are protecting the rights of everyone in this room and American citizens.”
Janke said the bureau also has to be cautious when using third-party vendors due to the classified material, personally identifiable information and sensitive data housed on its systems. Additionally, the FBI is a regular target for foreign adversaries, which leads the agency to impose further constraints on itself. Finding vendors with appropriate clearances can slow the AI integration process, and the FBI, by design, runs platforms that “do not necessarily talk to one another.”
“So a one and done, simple AI solution for something seriously as easy as a chatbot, is ridiculously more complex for us to be able to implement as an organization with the additional kind of restraints that we put on ourselves,” she said. “Like everyone in the room, we are all kind of learning and we’re moving with it.”
Navigating the Future of AI
Across the panel, speakers emphasized a shared challenge: balancing rapid technological advancement with the need for security, governance and public trust. As AI continues to evolve, panelists agreed that collaboration between government, industry and academia will be critical to navigating the road ahead.
Cheri Pascoe, director of the National Cybersecurity Center of Excellence (NCCoE) at NIST, discussed the organization’s efforts to address the evolving risks associated with AI. One major concern is visibility. Even organizations without formal AI policies may already have employees using AI tools, creating “shadow AI” risks that are difficult to monitor and secure.
Pascoe outlined several NIST initiatives, including the development of an AI-specific cybersecurity profile based on existing frameworks. The effort aims to provide practical guidance for securing AI systems, using AI in cybersecurity and defending against AI-enabled threats.
Another focus area is secure software development, particularly as AI becomes more involved in coding and code review processes. NIST is collaborating with industry partners to demonstrate best practices within DevSecOps pipelines.
“At the NCCoE, our mission is to accelerate the secure deployment of technology. We work collaboratively with industry technology vendors, critical infrastructure owners and operators to identify cybersecurity challenges and then work together to address them,” Pascoe said.