5 Takeaways from AI FedLab
Officials from industry, academia and government gathered in Reston, Virginia, on June 12 for the inaugural GovCIO Media & Research AI FedLab to discuss the latest advancements in artificial intelligence. Through roundtable discussions, conversations and panels, IT leaders examined how federal agencies can manage AI implementation within their operations.
Check out the highlights:
Identify the problem AI needs to solve.
Artificial intelligence is often sold as a cure-all for any IT problem, but it is not suited to every challenge facing government. New systems come with costs and are not magic. CIA CAIO and Director of AI Lakshmi Raman suggested asking an important question before moving forward with AI.
“I think the first thing that we are thinking about is: is AI the right solution for this problem?” Raman said. “The answers to that question could take you down the paths that you need to go down in order to create your solution.”
AI is inherently insecure.
Agencies should heed concerns about the security of AI. Researchers say these systems may be “inherently insecure.”
“Government needs to pay attention to trends, stay current and invest more into this development because, otherwise, you’re going to fall behind either our own commercial capabilities or capabilities developed in other countries,” said Oak Ridge National Laboratory’s new Center for AI Security Research (CAISER) Director Edmon Begoli. “The AI itself is inherently insecure, and it is a self-acting system. It’s a system that’s insecure.”
A collaborative approach to AI is key.
As agencies implement the White House executive order requiring agencies to use AI safely and responsibly, communication and collaboration are critical, said Treasury Department Deputy CAIO and CTO Brian Peretti. Learning from industry and peers across government is key to moving AI forward.
“We’re thinking about how to do these two things. Both internally, go across Treasury, and then externally to the [financial] sector itself, seeing how they’re using it – then thinking how we can connect better together,” Peretti said.
Consider the organization’s culture and business, and adjust accordingly.
Adopting AI requires change. Officials need to convince agency leadership that a system is worth the investment and help the workforce shift toward a more creative-thinking culture to implement it.
“We need people who can translate technology to business value because if you can’t sell it to leadership in a way that they understand, they won’t get it,” said Air Force Research Laboratory AI Lead Amanda Bullock. “With a lot of new innovations coming out, we need people to think creatively, which is a shift in traditional government.”
AI cannot fly solo. A human must verify data accuracy for generative AI tools.
Agencies need to home in on generative AI and its potential for hallucinations. Generative AI can be helpful for some tedious tasks, but it is accuracy-agnostic by nature; its purpose is to “just generate information.”
“If you’re relying on the data that’s coming out, you have to be cautious about what it is. The information that comes out may be accurate or it may not be,” said Department of Homeland Security Deputy CTO for AI Chris Kraft. “You may ask about a war that happened in 1912, and it’ll make something up because it just wants to generate text. You need to be very cautious, and you need to validate that.”