Pentagon Explores Generative AI Utility in Combat
Officials are homing in on use cases and how to integrate the technology ethically.
The Defense Department has begun to explore ways the military can use generative artificial intelligence systems, such as ChatGPT. However, the technology has drawn criticism from officials who believe the tools have “limited utility” and could contribute to the spread of disinformation.
Air Force Secretary Frank Kendall spoke about the use cases for AI within the agency on June 22. He believes the technology could assist with tasks that involve pattern recognition or targeting.
“It definitely can make us better, faster, stronger,” said Defense Intelligence Agency Director Lt. Gen. Scott Berrier about generative AI at an Intelligence and National Security Alliance event in May.
To explore the possibilities, Kendall asked his Scientific Advisory Board to assemble a small group to examine the military applications of AI systems. He also called for a more permanent team to learn how to safely integrate machine learning into the workspace as soon as possible.
While Kendall sees potential for AI at DOD, he said the technology for now should be used only in moderation because it is prone to error.
“[It] is not reliable, in terms of the truthfulness of what it produces,” he said at the INSA event about AI systems writing documents.
Regarding the use of generative AI, Kendall is not the only one with apprehensions. Berrier also emphasized that the technology should be used with caution.
Additionally, Craig Martell, the department’s chief digital and AI officer, voiced his concerns when asked about the topic at AFCEA’s TechNet Cyber conference earlier this year.
“Yeah, I’m scared to death,” he said. “[ChatGPT] has been trained to express itself in a fluent manner. It speaks fluently and authoritatively. So you believe it even when it’s wrong, … and that means it is a perfect tool for disinformation.”
Even though the use of AI within the military has resulted in a polarizing response, some feel it is necessary.
In his opening statement for Tuesday’s hearing on the department’s adoption and deployment of AI, Scale AI CEO Alexandr Wang stressed that the U.S. should urgently look for AI use cases in government.
He explained that China is actively looking for ways to use AI in warfare and is more focused on the technology than the U.S., spending 10 times more on it when adjusted for the size of its total military budget.
“We must intensify our efforts to outmatch China’s rapid advancements,” Wang wrote. “The United States is at risk of being stuck in an innovator’s dilemma because it is comfortable and familiar with investing in traditional sources of military power.”
While DOD will look to find new ways to use AI technology, such as to automate rote tasks aboard Navy ships, Kendall noted that humans will monitor the process to ensure the responsible use of generative systems.