
How AI Will Enhance the Federal Workforce, Not Replace It


AI innovation directors and CISOs across agencies explain how they see the technology transforming manual processes.

ATARC's 2024 Federal AI Summit in Washington, D.C., featured government and industry leaders discussing the role of AI in the federal government. Photo Credit: GovCIO Media & Research

Artificial intelligence is revolutionizing how agencies operate and deliver services, and government must balance the technology’s complexities, officials said at an ATARC event last week. Key considerations as AI matures will include data ethics, algorithm transparency and workforce training, federal IT leaders said.

Learning is Key to Security

AI can be deployed against cyberthreats. Department of Education Federal Student Aid CISO Davon Tyler said AI has been a helpful tool for the workforce when transferring data across systems, but he pointed out the growing need for staff who better understand AI and its uses.

“Failure is not an option. … There’s always fear as we look out there with new tools and technology,” Tyler said. “AI is a conduit for helping us to secure [data] and when I talk to you, my family, my friends, my coworkers, I want to make sure they know I’m doing my best job.”

Many in the commercial sector are already using AI to mitigate security threats, noted Palo Alto Networks Defense Department ICF CTO Jim Smid. He pointed to the example of credit cards being declined after unusual activity.

“Having tools that are inherently looking at anomaly detection, being able to determine if those things are real, and that’s pervasive in every single tool that you use in the cybersecurity realm — they all have to be using artificial intelligence,” Smid said.
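The credit-card example Smid describes boils down to flagging activity that deviates sharply from an account’s normal pattern. A minimal sketch of that idea, using a simple z-score test (an assumption for illustration — real fraud systems use far richer models):

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag values that deviate from the mean by more than
    `threshold` standard deviations (a basic z-score test)."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# A large outlier among routine charges gets flagged, mirroring
# how an issuer might hold an out-of-pattern purchase.
history = [42, 38, 51, 45, 40, 39, 47, 44, 5000]
print(flag_anomalies(history, threshold=2.0))  # → [5000]
```

The point of the sketch is the shape of the problem, not the statistics: production anomaly detectors layer learned models over exactly this kind of "how far from normal is this?" signal.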

To strengthen cybersecurity, U.S. Citizenship and Immigration Services (USCIS) CISO Shane Barney said his team is using the technology to keep up with the rapidly changing threat environment. For example, the team uses AI automation to do risk-based vulnerability management.

“That, to me, is one of our big strategic goals for next year. We’re really looking to [generative] AI to make those decisions for us so that we can begin to apply risk to the vulnerability space, because I can’t patch at the speed that’s required,” Barney said. “There’s no way I can just continually patch my environment. I have to selectively do it and I have to do that based on risk.”
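Risk-based vulnerability management of the kind Barney describes means ranking open findings by risk and spending limited patching capacity on the top of the list. A minimal sketch (not USCIS’s actual system — the field names, weights and scoring formula are assumptions for the example):

```python
def risk_score(vuln):
    # Weight raw severity by how likely exploitation is and how
    # critical the affected asset is. Real programs would fold in
    # threat intelligence, exposure and compensating controls.
    return vuln["cvss"] * vuln["exploit_likelihood"] * vuln["asset_criticality"]

def prioritize(vulns, capacity):
    """Return the `capacity` highest-risk vulnerabilities to patch first."""
    return sorted(vulns, key=risk_score, reverse=True)[:capacity]

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_likelihood": 0.9, "asset_criticality": 1.0},
    {"id": "CVE-B", "cvss": 7.5, "exploit_likelihood": 0.1, "asset_criticality": 0.5},
    {"id": "CVE-C", "cvss": 5.0, "exploit_likelihood": 0.8, "asset_criticality": 1.0},
]
for v in prioritize(vulns, capacity=2):
    print(v["id"])  # CVE-A, then CVE-C
```

Note that the mid-severity CVE-C outranks the higher-severity CVE-B because it is far more likely to be exploited on a critical asset — the core trade-off behind "I have to selectively do it based on risk."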

Barney said 60% to 65% of incidents are fully automated with the help of AI and machine learning, which allows USCIS to reassign its tier 1 personnel to other important functions.

“It allowed us to start reinvesting resources into threat hunting and threat activities. And we expanded that across the security organizations, more than just the [security operations center],” Barney said.

Planting the Seeds for Ethical AI

Following President Biden’s executive order on AI, agencies are creating AI policies, and some, like the State Department, already had plans underway. State Department Bureau of Global Talent Management CTO Don Bauer said his agency’s AI steering committee includes technologists, lawyers and finance staff to ensure all employees are educated on the agency’s policies.

Bauer said many employees were excited to use AI, but there wasn’t a shared understanding of what it meant for his department. After a conversation with the agency’s AI innovation director, the department held a training for all employees on the use of AI and how to mitigate the bias that comes with it.

“We’re trying to get in front of [ethical AI] with that type of training with our staff to kind of plant the seeds. [We need our staff to know] you can use it, but be a little more intentional and mindful about what you’re doing, what kind of decisions you are making and keep the human in the loop,” Bauer said.

Department of Energy Deputy CIO and Responsible AI Official Bridget Carper said education has also been an important element in her agency’s acceptance of AI. Carper said Energy first took a “no AI” approach, but soon began creating guardrails after realizing employees were turning to AI tools on their personal devices anyway.

After detailing the guardrails for staff, Energy put together an AI working group and began providing training on how to use it in tandem with supercomputers in agency labs.

“It was more education of it’s not just bad, it can be used to supplement the individuals that are doing the monitoring, because you can’t monitor everything everywhere on the internet,” Carper said.

Ensuring Ethical AI

AI can help mitigate cybersecurity threats and automate processes, but without transparency, accountability and ethical considerations, it falls short of its full potential. Pryon CEO Igor Jablokov said many technologists were drawn to AI because of its potential to help people, such as translation services that break down language barriers or lane detection that prevents deadly accidents.

“We didn’t think about shoving as many ads in your faces as possible. We did not think about shoving videos in front of teenagers’ eyes, and we certainly did not think about turning your heads into Barney the purple dinosaur with generative AI,” Jablokov said.

Department of Veterans Affairs Presidential Management Fellow Tony Boese said trustworthy AI combines technical and ethical concerns to ensure there is transparency and understanding around how and why AI is being used. Boese noted that AI at the VA must be explainable so that practitioners, project managers and patients can all “understand what technology is available and what it can do.”

Boese called on developers to help in the fight to keep AI trustworthy and transparent.

“Let us see your source code, let us see how things are made and give us absolutely all the details possible, because we’re not your competition,” Boese said. “We want to see everything because we want to make sure when we use it, that our doctors are going to be backed up and that they’re going to understand what’s going on, and that our patients are backed up and they’re going to understand what’s going on.”

Using and understanding AI also means understanding the need for multiple perspectives. National Oceanic and Atmospheric Administration CTO Frank Indiviglio said the traditional approach of solely using the scientific method with scientists, engineers and technologists wonโ€™t work for the future of AI.

“Governance is becoming a big discussion in the arena of responsible AI, it has to get more inclusive. It’s got to include a lot of different branches, it’s not just science or technology or engineering — it becomes legal, ethical,” Indiviglio said.

Indiviglio and Boese both emphasized the importance of recognizing bias in data sets. Indiviglio said the discussion of data sets needs to happen between vendors and agencies, as well as throughout the agencies that use AI. Sometimes AI is not the right tool, or the situation is not right for AI, Indiviglio said, and that’s why those conversations need to happen.

“There’s going to be a place where we always have humans in the loop, and perhaps even more humans in some instances. Bias is never going to be removed. Bias mitigation is the best thing to possibly do,” Boese said.
