
How AI Will Enhance the Federal Workforce, Not Replace It

AI innovation directors and CISOs across agencies explain how they see the technology transforming manual processes.

ATARC's 2024 Federal AI Summit in Washington, D.C., featured government and industry leaders discussing the role of AI in the federal government. Photo Credit: GovCIO Media & Research

Artificial intelligence is revolutionizing how agencies operate and deliver services, and government needs to navigate the complexities of the technology, officials said at an ATARC event last week. Keys to this effort as AI matures will include critical considerations like data ethics, algorithm transparency and workforce training, federal IT leaders said.

Learning is Key to Security

AI can be a powerful defense against cyberthreats. Department of Education Federal Student Aid CISO Davon Tyler said AI has been a helpful tool for the workforce when transferring data across systems, but pointed to a growing need for staff who better understand AI and its uses.

“Failure is not an option. … There’s always fear as we look out there with new tools and technology,” Tyler said. “AI is a conduit for helping us to secure [data] and when I talk to you, my family, my friends, my coworkers, I want to make sure they know I’m doing my best job.”

Many in the commercial sector are already using AI to mitigate security threats, noted Jim Smid, Defense Department and intelligence community CTO at Palo Alto Networks. He pointed to the example of credit card transactions being declined after unusual activity.

“Having tools that are inherently looking at anomaly detection, being able to determine if those things are real, and that’s pervasive and every single tool that you use in the cybersecurity realm — they all have to be using artificial intelligence,” Smid said.
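
The anomaly detection Smid describes can be made concrete with a toy sketch. The Python below flags charges that sit far from a cardholder's typical spending using a simple z-score test; the sample data, the three-sigma threshold and the single-feature approach are illustrative stand-ins for the much richer models production fraud systems use.

```python
# A minimal z-score anomaly detector over transaction amounts.
# All data and thresholds here are hypothetical, for illustration only.
from statistics import mean, stdev

def flag_anomalies(amounts: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of amounts more than `threshold` std devs from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

# Example: a cardholder's recent charges, with one unusually large purchase.
history = [42.10, 18.75, 60.00, 25.30, 33.45, 57.20,
           2950.00, 48.90, 22.10, 39.99, 51.20, 27.80]
print(flag_anomalies(history))  # -> [6], the $2,950 charge stands out
```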

To strengthen cybersecurity, U.S. Citizenship and Immigration Services (USCIS) CISO Shane Barney said his team is using the technology to keep up with the rapidly changing threat environment. For example, the team uses AI automation to do risk-based vulnerability management.

“That, to me, is one of our big strategic goals for next year. We’re really looking to [generative] AI to make those decisions for us so that we can begin to apply risk to the vulnerability space because I can’t patch at the speed that’s required,” Barney said. “There’s no way I can just continually patch my environment. I have to selectively do it and I have to do that based on risk.”
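
USCIS has not published the details of its approach, but the risk-based prioritization Barney describes can be sketched roughly: score each finding by combining severity, exploit likelihood and asset importance, then patch from the top of the list down. Every field, weight and CVE identifier below is hypothetical, not the agency's actual model.

```python
# A rough sketch of risk-based vulnerability prioritization.
# Fields, weights and CVE IDs are invented for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float              # base severity score, 0-10
    exploited_in_wild: bool  # e.g., appears on a known-exploited list
    asset_criticality: int   # 1 (low) to 5 (mission-critical), set per system

def risk_score(f: Finding) -> float:
    """Weight severity by asset importance, then boost known-exploited flaws."""
    score = f.cvss * f.asset_criticality
    return score * 2 if f.exploited_in_wild else score

findings = [
    Finding("CVE-2024-0001", cvss=9.8, exploited_in_wild=False, asset_criticality=2),
    Finding("CVE-2024-0002", cvss=7.5, exploited_in_wild=True, asset_criticality=5),
    Finding("CVE-2024-0003", cvss=5.3, exploited_in_wild=False, asset_criticality=1),
]

# Patch in descending risk order rather than trying to patch everything at once.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.cve_id}: {risk_score(f):.1f}")
```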

Barney said 60% to 65% of incidents are handled fully automatically with the help of AI and machine learning, allowing USCIS to reassign its tier 1 personnel to other important functions.

“It allowed us to start reinvesting resources into threat hunting and threat activities. And we expanded that across the security organizations more than just the [security operations center],” Barney said.
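
What "fully automated" incident handling can look like is sketched in the toy triage loop below: auto-close high-confidence benign alerts, auto-contain high-confidence malicious ones, and escalate the ambiguous remainder to human analysts. The thresholds, verdict labels and alert format are invented for this sketch, not USCIS's pipeline.

```python
# A toy triage loop: act automatically only on high-confidence verdicts,
# keeping humans in the loop for everything ambiguous.
def triage(alert: dict) -> str:
    confidence = alert["model_confidence"]  # classifier score from 0.0 to 1.0
    verdict = alert["model_verdict"]        # "benign" or "malicious"
    if confidence >= 0.95 and verdict == "benign":
        return "auto-close"
    if confidence >= 0.95 and verdict == "malicious":
        return "auto-contain"    # e.g., isolate the host, then notify the SOC
    return "escalate-to-analyst" # humans handle the uncertain remainder

alerts = [
    {"id": 1, "model_verdict": "benign", "model_confidence": 0.99},
    {"id": 2, "model_verdict": "malicious", "model_confidence": 0.97},
    {"id": 3, "model_verdict": "malicious", "model_confidence": 0.60},
]
for a in alerts:
    print(a["id"], triage(a))
```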

Planting the Seeds for Ethical AI

Following President Biden’s AI executive order, agencies are creating AI policies, and some, like the State Department, already had plans underway. State Department Bureau of Global Talent Management CTO Don Bauer said his agency’s AI steering committee consists of technologists, lawyers and financial staff to ensure all employees are educated on the agency’s policies.

Bauer said many employees were excited to use AI, but there wasn’t a shared understanding of what it meant for his department. After a conversation with the agency’s AI innovation director, the department held training for all employees on the use of AI and how to mitigate the bias that comes with it.

“We’re trying to get in front of [ethical AI] with that type of training with our staff to kind of plant the seeds. [We need our staff to know] you can use it, but be a little more intentional and mindful about what you’re doing, what kind of decisions you are making and keep the human in the loop,” Bauer said.

Department of Energy Deputy CIO and Responsible AI Official Bridget Carper also said education has been an important element in her agency’s acceptance of AI. Carper said Energy first took a “no AI” approach, but soon began creating guardrails when it realized employees were using AI on their personal devices instead.

After detailing the guardrails for staff, Energy put together an AI working group and began providing training on how to use it in tandem with supercomputers in agency labs.

“It was more education of it’s not just bad, it can be used to supplement the individuals that are doing the monitoring because you can’t monitor everything everywhere on the internet,” Carper said.

Ensuring Ethical AI

AI can be used to mitigate cybersecurity threats and automate work, but without transparency, accountability and ethical considerations, it falls short of its full potential. Pryon CEO Igor Jablokov said many technologists were drawn to AI because of its potential to help people, such as translation services that break down language barriers or lane detection that prevents deadly accidents.

“We didn’t think about shoving as many ads in your faces as possible. We did not think about shoving videos in front of teenagers’ eyes, and we did not certainly think about turning your heads into Barney the purple dinosaur with generative AI,” Jablokov said.

Department of Veterans Affairs Presidential Management Fellow Tony Boese said trustworthy AI combines technical and ethical concerns to ensure there is transparency and understanding around how and why AI is being used. Boese noted that AI at the VA must be explainable so that practitioners, project managers and patients can all “understand what technology is available and what it can do.”

Boese called on developers to help in the fight to keep AI trustworthy and transparent.

“Let us see your source code, let us see how things are made and give us absolutely all the details possible, because we’re not your competition,” Boese said. “We want to see everything because we want to make sure when we use it, that our doctors are going to be backed up and that they’re going to understand what’s going on and that our patients are backed up and they’re going to understand what’s going on.”

Using and understanding AI also means understanding the need for multiple perspectives. National Oceanic and Atmospheric Administration CTO Frank Indiviglio said the traditional approach of solely using the scientific method with scientists, engineers and technologists won’t work for the future of AI.

“Governance is becoming a big discussion in the arena of responsible AI, it has to get more inclusive. It’s got to include a lot of different branches, it’s not just science or technology or engineering — it becomes legal, ethical,” Indiviglio said.

Indiviglio and Boese both emphasized the importance of recognizing bias in data sets. Indiviglio said the discussion of data sets needs to happen between vendors and agencies, as well as throughout the agencies that use AI. Sometimes AI is not the right tool for the problem, or the situation is not right for AI at all, Indiviglio said, which is why those conversations need to happen.

“There’s going to be a place where we always have humans in the loop and perhaps even humans a little more in some instances. Bias is never going to be removed. Bias mitigation is the best thing to possibly do,” Boese said.
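
One simple, concrete way to begin recognizing bias in a data set, in the spirit of Indiviglio and Boese’s comments, is to compare outcome rates across groups before a model ever trains on the data. The group names, labels and the four-fifths rule-of-thumb threshold below are illustrative only.

```python
# A minimal check for outcome skew across groups in a labeled dataset.
# Groups, labels and the 80% threshold are hypothetical examples.
from collections import Counter

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Fraction of positive outcomes per group."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["label"]
    return {g: positives[g] / totals[g] for g in totals}

data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
rates = selection_rates(data)
ratio = min(rates.values()) / max(rates.values())
# Flag for human review if the ratio falls below 0.8 (the four-fifths rule).
print(rates, f"ratio={ratio:.2f}")
```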
