Pentagon Explores Generative AI Utility in Combat
Officials are homing in on use cases and how to integrate the technology ethically.
The Defense Department has begun to explore ways the military can utilize generative artificial intelligence systems, such as ChatGPT. However, the technology has been subject to criticism from officials, who believe the tools have “limited utility” and could lead to the spread of disinformation.
Air Force Secretary Frank Kendall spoke about the use cases for AI within the agency on June 22. He believes the technology could assist with tasks that involve pattern recognition or targeting.
“It definitely can make us better, faster, stronger,” said Defense Intelligence Agency Director Lt. Gen. Scott Berrier about generative AI at an Intelligence and National Security Alliance event in May.
To explore the possibilities, Kendall asked his Scientific Advisory Board to assemble a small group to examine the military applications of AI systems. He also called for a more permanent team to learn how to safely integrate machine learning into the workspace as soon as possible.
While Kendall sees potential for AI at DOD, he said the technology currently can only be used in moderation as it can lead to error.
“[It] is not reliable, in terms of the truthfulness of what it produces,” he said at the INSA event about AI systems writing documents.
Regarding the use of generative AI, Kendall is not the only one with apprehensions. Berrier also emphasized that the technology should be used with caution.
Additionally, Craig Martell, the department’s chief digital and AI officer, voiced his concerns when asked about the topic at AFCEA’s TechNet Cyber conference earlier this year.
“Yeah, I’m scared to death,” he said. “[ChatGPT] has been trained to express itself in a fluent manner. It speaks fluently and authoritatively. So you believe it even when it’s wrong, … and that means it is a perfect tool for disinformation.”
Even though the use of AI within the military has drawn a polarized response, some feel it is necessary.
In his opening statement for Tuesday’s hearing on the department’s adoption and deployment of AI, Scale AI CEO Alexandr Wang stressed that the U.S. should urgently look for AI use cases in government.
He explained that China is actively looking for ways to use AI in warfare and is more focused on the technology than the U.S., spending 10 times more on it as a share of its total military budget.
“We must intensify our efforts to outmatch China’s rapid advancements,” Wang wrote. “The United States is at risk of being stuck in an innovator’s dilemma because it is comfortable and familiar with investing in traditional sources of military power.”
While DOD will look to find new ways to use AI technology, such as to automate rote tasks aboard Navy ships, Kendall noted that humans will monitor the process to ensure the responsible use of generative systems.