5 Takeaways from AI Building Blocks

This workshop equipped federal IT leaders with foundational AI knowledge, including key terminology and its relevance to their work. Attendees explored real-world use cases and practical applications, and learned how to ask the right questions to ensure responsible, effective AI integration in their agencies.

Federal AI strategies are evolving as agencies embed AI directly into mission execution.

The Central Intelligence Agency (CIA) has introduced a fourth pillar to its artificial intelligence strategy, marking a shift from understanding and internal use of AI to embedding it deeply into the agency’s core missions, including objective analysis, intelligence collection, covert action and counterintelligence.
According to CIA AI Director and Chief AI Officer Lakshmi Raman, this new pillar focuses on commoditizing AI: making the technology continuously accessible so officers can seamlessly integrate it into their daily workflows. This evolution reflects the agency’s commitment to democratizing AI access and moving toward sustained, mission-driven AI use.
“How are we enabling access to AI? How are we democratizing access so that our data scientists or developers are able to access it very easily?” said Raman. “We’re thinking now beyond the step of democratization to commoditization. How are we commoditizing AI, so it’s available all the time so that our officers can integrate it into workflows that they use every day?”

Generative AI use cases are entering a new phase of maturity in government.

AI use case development across the federal government has rapidly evolved beyond experimentation into broader deployment, with agencies leveraging a spectrum of options: build, buy or extend, said Wole Moses, chief AI officer for Microsoft Federal Civilian. This shift signifies that agencies are now more equipped to align AI solutions directly with mission needs.
“About a year ago the questions were different. It was really build or buy,” Moses said. “Now there’s this new option of ‘extend.’ … Instead of build you can say create, buy becomes consume and extend is customize.”
This “build-buy-extend” framework enables agencies to better tailor generative AI to their missions. Wave one of federal generative AI implementation was marked by pilots focused on content summarization, unstructured data analysis and rulemaking comment review. In wave two, AI agents and reasoning models will begin to dominate use cases, bringing a higher degree of automation and task orchestration, Moses said.
“Wave two is where AI agent scenarios will start to come into play. … Reasoning models are very different from general-purpose chat models. They excel in STEM-related tasks like math, planning and logistics,” he said.

Effective AI governance requires human oversight and agile policies.

AI implementation across federal agencies demands a multifaceted approach with governance frameworks that can keep pace with rapid technological change. Taka Ariga, former chief AI officer and chief data officer at the Office of Personnel Management (OPM) and founder of Sol Imagination, emphasized that public sector leaders must approach AI as a team sport, integrating legal, policy, procurement and human capital functions, not just data science or IT.
“Governance frameworks must evolve to address AI-specific risks while enabling innovation and agility within federal agencies,” Ariga said.
Ariga highlighted the limitations of traditional governance methods in managing the fast pace of AI advancement. He called for “agile governance” structures that allow empowered teams to make micro-decisions regularly, rather than waiting for infrequent, broad leadership meetings. He also advocated for a shift from AI hype to value-driven implementation, urging agencies to consider whether AI is truly the best solution for a problem before deploying it.
“Nine out of ten times, you may actually have an alternative solution that doesn’t require the complexities of AI,” he said.

Sustainable AI growth depends on smart cost control and transparent data use.

Dave Erickson, distinguished architect at Elastic, emphasized that scalability in AI fundamentally comes down to efficiency: enabling organizations to accomplish more work at the same or lower cost. He urged government agencies to understand the cost components of AI, particularly token-based pricing models and the expense of data retrieval. That understanding is essential for budgeting and planning AI initiatives that can scale sustainably.
“Scalability is really just efficiency. It’s about how much work I can get done for the same amount of money. To make AI sustainable in government, we have to understand the true cost of asking questions at scale, ensure accountability through citation and traceability, and adopt a crawl-walk-run approach that builds momentum without overwhelming users.”
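To make that cost arithmetic concrete, here is a minimal back-of-the-envelope sketch of token-based pricing at scale. The per-token prices, retrieval cost and usage figures are illustrative assumptions for the sake of the sketch, not rates quoted in the session.

```python
# Illustrative model of token-based AI costs at scale.
# All prices and usage figures below are hypothetical assumptions,
# not quoted vendor rates.

PRICE_PER_1K_INPUT_TOKENS = 0.0025   # assumed $ per 1K prompt tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.0100  # assumed $ per 1K completion tokens
RETRIEVAL_COST_PER_QUERY = 0.0005    # assumed $ per retrieval/search call

def cost_per_question(prompt_tokens: int, completion_tokens: int,
                      retrieval_calls: int = 1) -> float:
    """Estimate the dollar cost of answering one question."""
    model_cost = (
        (prompt_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS
        + (completion_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return model_cost + retrieval_calls * RETRIEVAL_COST_PER_QUERY

# Scale it up: e.g., 10,000 staff asking 20 questions a day.
daily_questions = 10_000 * 20
daily_cost = daily_questions * cost_per_question(
    prompt_tokens=2_000,    # question plus retrieved context
    completion_tokens=500,  # generated answer
    retrieval_calls=2,      # document lookups feeding the prompt
)
print(f"Estimated daily cost: ${daily_cost:,.2f}")
```

The point is not the specific numbers but that cost grows linearly with tokens, retrieval calls and query volume, which is why Erickson frames scalability as a question of efficiency.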
Erickson also stressed the importance of traceability throughout AI workflows. By tracking the sources behind AI-generated answers, organizations can reduce the risk of misinformation, increase accountability and uphold security standards aligned with zero trust principles. Establishing a culture and process around AI citation is key to building user trust and ensuring responsible AI use.
“Citations make AI knowledge real, and you need to manage the lifecycle of a citation the same way you manage the lifecycle of logging into a system or auditing something that’s auditable,” Erickson said.
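As a rough illustration of what managing a citation’s lifecycle could look like in practice, the sketch below attaches source references to each AI-generated answer and emits an audit record for later review. The data structures and field names are hypothetical, not a specific product’s API.

```python
# Minimal sketch of citation traceability for AI-generated answers.
# Structures and fields are illustrative assumptions only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Citation:
    source_id: str          # identifier of the underlying document
    excerpt: str            # passage the answer relied on
    retrieved_at: datetime  # when the source was fetched

@dataclass
class TracedAnswer:
    question: str
    answer: str
    citations: list[Citation] = field(default_factory=list)

    def audit_record(self) -> dict:
        """Emit an audit-log entry tying the answer to its sources,
        so reviewers can verify provenance later."""
        return {
            "question": self.question,
            "answer": self.answer,
            "sources": [c.source_id for c in self.citations],
            "logged_at": datetime.now(timezone.utc).isoformat(),
        }

answer = TracedAnswer(
    question="What does SP 800-53 cover?",
    answer="Security and privacy controls for federal information systems.",
    citations=[Citation("NIST-SP-800-53r5",
                        "security and privacy controls",
                        datetime.now(timezone.utc))],
)
print(answer.audit_record())
```

Logging every answer alongside its sources is one way to treat citations as auditable events, in the same spirit as Erickson’s comparison to login and audit trails.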

Federal AI leaders prioritize security, collaboration and culture to ensure safe adoption.

Senior AI leaders across federal agencies are increasingly focusing on security and efficiency as they integrate AI into daily operations. Martin Stanley, an AI and cybersecurity researcher at the National Institute of Standards and Technology (NIST), announced that NIST will release a new control overlay for the Special Publication 800-53 series within the next six to twelve months.
“This will focus on [identifying] what the unique risks to AI systems are that cybersecurity can [help] with,” said Stanley. “Cybersecurity can contribute in a big way… [and identify if] models are being fooled or if training data is being stolen, or the models themselves are being stolen.”
Meanwhile, agencies like the General Services Administration (GSA) and the Naval Research Laboratory (NRL) are advancing practical AI use cases through experimentation, performance measurement and cross-agency collaboration. GSA’s early adoption of AI helped lay a foundation for government-wide deployment, explained Zach Whitman, the agency’s chief data scientist and chief AI officer.
“A lot of [agencies] followed afterwards, and there was a lot of risk associated with that,” said Whitman. “A lot of work went into making sure that … could be mitigated, so we would avoid some of the fears [of AI].”
