5 Takeaways from the AI Summit
Federal leaders responsible for AI development, training and implementation discussed the latest efforts surrounding data security, funding, workforce development and next steps at the 2024 AI Summit in Tysons Corner, Virginia.
Staying ahead of adversaries requires strategically focused AI research.
Intelligence Advanced Research Projects Activity (IARPA) Director Rick Muller warned that future conflicts could involve AI-driven battles, which requires organizations like IARPA to develop solutions that help anticipate adversaries’ use of AI. The agency is focusing on areas not addressed by existing research so its funding can have a greater impact.
“What IARPA has to do is be very careful to understand what questions are important to the Intelligence Community that are not being answered or that will not be answered by the general market. That’s why we’re focusing on specific questions like AI compromises, AI biases,” Muller said. “We think those are areas where IARPA funding can have a larger impact because that’s not the area that the broader markets are going to be concerned about in the next year or two.”
AI can handle tasks beyond human capability.
AI has the potential to enhance cyber defenses by augmenting the workforce to handle tasks beyond human capability, like analyzing vast amounts of data, according to cybersecurity officials.
AI’s effectiveness relies on the quality of data used to train it, and panelists called attention to the risks of data poisoning. To stay ahead, leaders emphasized the need for explainability and resilient systems that can withstand attacks and recover quickly.
“We have to have this collaboration piece and understand the full spectrum of how the data can actually be manipulated and can be poisoned. And then we put the greatest minds who are working on this together to understand and provide the solutions and the guidance for keeping this type of data secure as well,” said Tyson Brooks, technical director at the National Security Agency’s (NSA) Artificial Intelligence Security Center.
Government should avoid falling victim to the tech hype.
There is value in balancing the excitement around AI with thoughtful, responsible innovation and workforce training. Agencies are focusing on creating a “digitally ready workforce” that extends beyond IT to include divisions like acquisition, privacy and operations, fostering a team-based approach.
“I do believe that AI will be demarcated between people that choose to use it well versus people that don’t,” said Taka Ariga, the Office of Personnel Management’s CDO, acting chief AI officer and director of enterprise data and AI. “Part of it is upskilling, reskilling, but also making sure that we have the talent pipeline to come in to make sure that the next generation of the federal mission delivery is based on a digital-ready workforce.”
Building an AI-capable workforce is key to competing in the “modern-day arms race.”
Leaders from the Defense Innovation Unit (DIU) and Air Force Research Laboratory (AFRL) emphasized moving quickly and responsibly in AI development to remain competitive in the future battlefield, noting that while technology is crucial, workforce training and adoption are key to success.
Often, delays in AI progress stem not from the technology itself but from acquisition processes and workforce readiness.
“We’re in a modern-day arms race, and it’s not the technology that’s causing us to maybe slow down and not be as much ahead as our potential near-peer competitors and adversaries. We got to get people to embrace this. It’s also funding our acquisition cycle … the stuff that has nothing to do with technology, but is 100% going to allow us to either accelerate or put on the brakes,” said DIU AI/ML Program Manager Jamie Fitzgibbon.
Risk management strategies ensure responsible AI development.
Martin Stanley, AI and cybersecurity researcher at the National Institute of Standards and Technology’s (NIST) AI Innovation Lab, emphasized that agencies should measure the risks, impacts and potential harms of AI systems, rather than focusing solely on performance metrics, to ensure responsible AI development and deployment.
Red Hat Chief Architect Adam Clater added that agencies can implement AI intentionally to minimize risk.
“It just speaks to the need to just be very intentional,” said Clater. “As you bring each piece of AI into your mission arena, you have to be very intentional about evaluating what it’s doing and how it’s doing it.”