5 Takeaways from AI FedLab

The federal community came together at AI FedLab for a day of networking and roundtable discussions on AI’s impact on national security, infrastructure modernization and innovation. Federal leaders, industry experts and AI researchers participated in interactive sessions covering critical topics such as generative AI, trustworthy AI, data privacy, risk management and cybersecurity to gain actionable insights on the future of AI in government.

Agencies shift toward a “one government” approach to tech innovation.

The General Services Administration (GSA) is playing a central role in modernizing how federal agencies acquire and implement AI.
New presidential priorities emphasize innovation and efficiency. GSA aims to streamline AI procurement and “lead by example,” according to Zach Whitman, GSA’s chief AI officer and chief data scientist.
Whitman said the agency’s primary goal is to empower government to “buy better, buy smarter and buy more efficiently.” GSA is encouraging a “one government” mindset, fostering cross-agency collaboration and opening up opportunities for smaller AI vendors to contribute.

AI comes with growing environmental and infrastructure costs.

Generative AI has introduced new environmental and infrastructure costs, and as the tech increases in popularity, IT leaders must consider new strategies to sustain it.
Kevin Walsh, director of IT and cybersecurity at the Government Accountability Office, emphasized the rapid pace at which generative AI systems consume and wear down hardware infrastructure.
“These large language models and generative AI chew through the hardware faster than I think people are realizing,” said Walsh.
An April GAO report included several recommendations for government’s AI use, including disclosing energy and water usage, developing mechanisms to assess ongoing impacts and establishing guidelines for responsible development and deployment.

Successful AI hinges on solid data infrastructure.

Federal technology leaders underscored the critical role of high-quality, well-governed and accessible data as the backbone for effective AI integration.
Agencies are developing clear frameworks for managing and understanding data to realize the full benefits of AI. Jacob Glassman, senior technical advisor for the Assistant Secretary of the Navy for Research, Development and Acquisition, highlighted ongoing efforts to simplify and streamline complex data environments to enable faster AI adoption.
For agencies like the Defense Department, efficient data handling has become a “national defense imperative,” said Glassman. He added that the slow pace of traditional acquisition processes makes the use of AI and improved data systems a critical capability in mission execution.

Government wants open competition and interoperability with new AI solutions.

Interoperability is key to ensuring modernization of systems without disruption, said Jaime Fitzgibbon, AI/ML program manager at the Defense Innovation Unit.
“The last thing we want is to have a break in service. So, if that service is food stamps, we don’t want it to break, right? If it is our veterans being seen at a VA hospital, we can’t have a break. But that doesn’t mean we don’t innovate. So how do you find that safe way in which you can start to move things over?”
She also emphasized the need for open competition in the federal marketplace to ensure agencies have access to the best solutions for specific mission goals, moving away from the “winner-takes-all” approach to purchasing tech.

Federal AI use cases are expanding and ready to scale.

The Federal Agency AI Use Case Inventory features 2,133 AI use cases as of Jan. 23, and agencies are now looking to scale successful AI implementations across government.
The National Institutes of Health (NIH) has over 100 AI use cases, according to Susan Gregurick, associate director of the agency’s Office of Data Science Strategy.
NIH and the Energy Department released a new AI use case that could lead to early cancer detection. The agency worked with the supercomputing centers at Oak Ridge National Lab to publish their work on AI extraction of electronic pathology (ePath) reports.
“If you’re doing a diagnosis of cancer, it’s the ePath reports that are the key finding that give you that early diagnosis,” said Gregurick. “Well, your ePath reports are not connected to your EHR system, so being able to pull those out and link those to your EHR system and send them off to the cancer surveillance programs is really key for that very early diagnosis of cancer.”
