Pentagon Needs Trustworthy AI to Support Warfighters
Defense leaders are eyeing better governance and risk management as policy around ethical AI takes shape.
As the Defense Department accelerates use of advanced technologies such as artificial intelligence (AI), the need to build trustworthy AI systems is more critical than ever, especially when applying these technologies in the military realm.
For fiscal year 2024, DOD is seeking $1.8 billion to adopt and deliver AI capabilities. DARPA has been conducting AI research for more than 60 years and has invested more than $2 billion in AI advancement over the past several years.
Recognizing the potential that AI can bring to the battlefield, defense leaders are pushing to put good governance, risk management, regulations and policy in place as the department increases its use of this technology to support mission-critical activities.
“As we reach for the opportunity that AI provides us, we need to reach with the other hand and manage the risks that will come with the application of that disruptive technology,” Coast Guard Vice Adm. Kevin Lunday, who commands the Atlantic Area, said at the 2023 Sea-Air-Space conference at National Harbor, Maryland. “When we train our officers … the first rule is one hand for yourself and one hand for the ship. … So that’s how I think about risk management as we reach for the opportunity.”
Defining what constitutes a trustworthy system is challenging, as trust is a multifaceted concept. Earlier this year, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF) to help federal agencies responsibly develop and deploy AI systems.
NIST defines a trustworthy AI system in 11 words: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.
“There’s a lot of meaning behind every single one of those 11 words,” Lunday said.
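The framework itself prescribes no code, but as a minimal, purely illustrative sketch, a review team could track those characteristics as a simple checklist. The list, function name and structure below are assumptions made for illustration, not part of the AI RMF:

```python
# Illustrative sketch only: the NIST AI RMF does not prescribe code.
# This hypothetical checklist enumerates the framework's trustworthiness
# characteristics so a review team could see which ones remain unaddressed.

TRUSTWORTHINESS_CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair with harmful bias managed",
]

def open_items(assessments):
    """Return characteristics not yet assessed or not yet satisfied.

    assessments: dict mapping characteristic name -> True if satisfied.
    """
    return [c for c in TRUSTWORTHINESS_CHARACTERISTICS
            if not assessments.get(c, False)]

if __name__ == "__main__":
    # Example: a system so far reviewed only for safety and privacy
    print(open_items({"safe": True, "privacy-enhanced": True}))
```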
Experts say the path to trustworthy AI systems is long and complex, including factors such as improving resiliency to adversarial attacks or building the infrastructure to support these systems. Defining what it means to have a trustworthy system and how to measure success is fundamental to this journey.
“Human-to-machine interaction — that’s fundamental to trust; being able to define what do you mean by trust? … There are definitions out there in the research community,” DARPA Information Innovation Office Deputy Director Matt Turek said at Sea-Air-Space. “What are the levels of resources that we need to build state-of-the-art AI systems? What’s the impact on energy and climate from fielding those large systems? How do we have AI systems that anticipate what humans need and are in alignment with human values? All of these, I think, are core challenges that we need to get out there and ultimately get highly trustworthy AI systems.”
While DOD seeks to take advantage of industry solutions, defense leaders say there are problems the private sector will not have answers for, because industry’s needs are fundamentally different from national security needs.
“I think part of that is because there’s a fundamental misalignment between what the industry is doing and what DOD ultimately needs,” Turek said. “I think there are many compelling capabilities, … but industry isn’t focused on those sorts of life-and-death problems. They also have access to massive amounts of data and compute, and that’s not always the case for the sorts of problems we work on in the DOD. We care a lot about unusual events. Sometimes those are the ones we might care the most about, by definition, and there’s not a lot of training data available.”
The unresolved challenge of defining what constitutes a trustworthy system, and how to measure success, hampers organizations’ ability to provide appropriate oversight and policy around the technology.
“I think one of the challenges from a policy perspective is, how do we construct regulations appropriately? I go back to that foundational science of how you measure and evaluate AI systems. You don’t have some of that foundational science,” Turek said. “It’s not like you say you need to have this level of trust score operating in this particular domain … so I think that creates challenges for policymakers.”
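Turek’s concern can be made concrete with a small, hypothetical example (the data and function below are invented for illustration, not drawn from any DOD or DARPA evaluation): a single aggregate score can look strong while masking poor performance on exactly the rare, high-consequence events the department cares most about, which is one reason a one-number “trust score” is hard to defend.

```python
# Hypothetical illustration only: an aggregate accuracy score can hide
# failures on rare events, complicating any single "trust score."

from collections import defaultdict

def stratified_accuracy(examples):
    """examples: iterable of (event_type, prediction_correct) pairs."""
    totals, correct = defaultdict(int), defaultdict(int)
    for event_type, is_correct in examples:
        totals[event_type] += 1
        correct[event_type] += int(is_correct)
    return {t: correct[t] / totals[t] for t in totals}

if __name__ == "__main__":
    # 990 routine cases handled well, 10 rare cases handled poorly:
    results = [("routine", True)] * 950 + [("routine", False)] * 40 \
            + [("rare", True)] * 2 + [("rare", False)] * 8
    overall = sum(ok for _, ok in results) / len(results)
    print(f"overall accuracy: {overall:.2%}")  # ~95%, looks acceptable
    print(stratified_accuracy(results))        # rare events: only 20%
```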
Guidance such as NIST’s framework equips organizations with resources to manage risks associated with development and deployment and promotes responsible use of the technology. In creating this guidance, NIST worked with a wide range of experts, including psychologists, philosophers and legal scholars, to better understand the impacts AI has in real life.
“During the different stages of AI lifecycle through the design, development, deployment and regular monitoring of the systems, it’s really important to reach to a very broad sense of expertise … the tech community, but also … psychologists, sociologists, cognitive scientists to be able to help us understand the impact of the systems,” NIST Information Technology Laboratory Chief of Staff Elham Tabassi told GovCIO Media & Research.