Feds Turn to DevSecOps to Balance Innovation with Public Trust

Officials embrace DevSecOps and a human-centered approach to modernize operations and manage risk as AI becomes more embedded in government.

David Vergano, supervisory IT specialist at the Department of State, speaks at the 2025 Carahsoft DevSecOps Conference in Reston, Virginia, on July 29, 2025. Photo Credit: Hannah Duncan / Carahsoft

Federal agencies are leaning on DevSecOps to navigate the tension between innovation and accountability as AI becomes more deeply embedded in government operations. At the 2025 Carahsoft DevSecOps Conference in Reston, Virginia, last week, leaders from the Departments of Energy, Labor, Veterans Affairs and State emphasized that building secure, responsive and human-centered systems is essential to earning public trust and delivering real impact.

Improving User Experience with DevSecOps

VA and DOL are leveraging DevSecOps practices to modernize their platforms and improve citizens’ interactions with government.

Barbara Morton, deputy chief veterans experience officer at VA, highlighted how a recent redesign of the agency’s mobile app prioritized user experience, making it easier for veterans to navigate. The focus, she said, remained firmly on human needs, and DevSecOps practices must continue to reflect that human-centered approach and respond to real-world demand signals.

“At the end of the day, we’re serving members of the public,” Morton told the audience. “You can’t automate empathy when you’re looking nose to nose with a veteran who’s in pain or suffering.”

Akanksha Sharma, director of digital transformation at DOL, said her agency redesigned its homepage to better serve users engaging with Labor’s services, such as exploring a career change or filing a workplace complaint with OSHA.

According to Sharma, DOL launched the redesigned homepage in beta in July, with automated tooling and preconfigured security settings in place from the outset.

“We had an [information security officer] in our kickoff call who — as soon as we started to ideate around how we were going to build this beta site — started to work on the ATO process,” Sharma said. “[The information security officer] started to gather all of the necessary requirements to start the ATO paperwork [in] these multiple streams instead of being waterfall and moving one after the other.”

Sharma also said DOL is working with private-sector vendors and the White House to help employees better understand agentic AI and how it can be folded into operations.

“The point of it is to have fewer digital products. Perhaps some of them are centered around training like they are today, but around emerging fields so that people can either get apprenticeships or maybe we can redefine how we look at apprenticeships now in the modern world and get the training that they need,” Sharma said.

Ensuring Safety, Security in AI Implementation

Agencies are prioritizing safety and security as government continues to leverage AI, and DevSecOps provides a clear process to manage risk. David Vergano, supervisory IT specialist at the State Department, said agencies’ use of AI often depends on their risk tolerance and ability to manage unforeseen consequences.

“We do have a public trust. We are the government,” Vergano said. “We have to be very cognizant and careful about the data that we are putting into systems, knowing where it’s going, how it’s going to be used and how it may come out.”

He emphasized that DevSecOps is more than a framework; it’s an ongoing process that involves rigorous oversight, unit-level standards and continual conversations about responsible tool usage and data handling.

At the Department of Energy, DevSecOps Lead Kevin Byrne said his team applies safety practices drawn from nuclear operations to guide their approach to AI deployment. When uncertainty arises, for example, operators can initiate a “work pause” to halt activity until the system is fully reviewed and deemed safe.

According to Byrne, using AI in mission operations demands both discipline and awareness. Teams must recognize when AI is wrong and still be confident enough to use the technology effectively.

“There have always been disruptors to whatever the status quo is. Once upon a time there were people who were professionally calculators. Now I’ve got a smartphone for that. And so having the courage to say, ‘Yep, this is coming’ and embrace it is important,” Byrne said.
