
Boundaries and Ethics Are Critical for AI Management

NSF, Air Force leaders say that AI hybridized with quantum systems can be revolutionary, but responsible deployment will be key.

“[AI] needs to be able to adapt to adversarial data and barriers to navigation, cyber, electromagnetic events. The list goes on,” Kimberly Sablon said at AFCEA’s TechNet Emergence summit on Tuesday in Reston, VA. Photo Credit: AFCEA

Generative artificial intelligence can potentially enhance national security, economic prosperity and health care, according to government and industry experts at AFCEA’s TechNet Emergence summit on Tuesday in Reston, VA. Risk mitigation and trustworthy data are keys to success, leaders agreed as the panels examined AI’s current and future capabilities.

Generative AI is not new – it’s been around since the 1950s – but the explosion of public awareness came with the release of ChatGPT in the fall of 2022, said Tess deBlanc-Knowles, the National Science Foundation’s special assistant to the director for AI. The AI revolution and its emerging capabilities affect everything from the way business is done to national security. Advances are moving rapidly, and policies must be in place to regulate AI and ensure safety, she said.

“President Biden’s executive order, released last October, represents the administration’s comprehensive approach to AI growth in terms of advancing activities that are managing the risks of AI, as well as those that are focused on harnessing the opportunities,” deBlanc-Knowles said. The order sets ambitious deadlines, forms a new White House AI Council designed to hold agencies accountable and tasks agencies with a series of responsibilities to ensure ethical AI deployment.

“It’s actually so comprehensive that it is the longest executive order in U.S. history,” she said.

There are more than just policy challenges, she added.

“[We need to manage] the socio-technical boundary,” she said. “These advanced systems are being rolled out to the public as soon as they are developed. It’s increasingly important to bring together interdisciplinary thinking around the impact of AI systems, on society, on communities, on individuals and on rights.”

Fifteen leading AI companies have volunteered to assist the White House with the safety, security and trust issues involved in working with AI systems. This work includes analyzing testing information, sharing cybersecurity protections, promoting transparency of AI-generated content, researching societal risks and using AI to solve some of society’s biggest problems. The National Institute of Standards and Technology launched the first U.S. AI Safety Institute last fall, and the House formed a bipartisan AI Task Force last month to consider legislation.

Department of Labor CIO Gundeep Ahluwalia said that AI will enhance employees’ job satisfaction by automating mundane tasks. He cited the manual coding of workers’ compensation claims as one example.

“After someone files a claim, for example, reporting that they fell off a ladder and hurt their head, now it throbs and hurts,” Ahluwalia said. “Now somebody’s got to code those things to be able to compare what is happening in the construction industry, as it’s been done for decades. There are hundreds of those incidents. The coding aspect can be automated, and that person has time to do something more meaningful.”

All AI systems designed for military use must be adaptive, robust and able to operate in diverse environments, said Kimberly Sablon, principal director for Trusted AI and Autonomy in the Office of the Under Secretary of Defense for Research and Engineering.

Military AI systems require continuous, interactive testing across the entire machine learning pipeline, she said, and AI must be hybridized with other technologies, such as quantum systems, to revolutionize data processing and learning.

“It’s not just a static AI-based system that’s been designed and deployed as it is,” Sablon said. “It must have the capability to learn continuously, even if it’s learning in very small doses [while using sparse data]. It’s machine learning at the edge.”

“It needs to be able to adapt to adversarial data and barriers to navigation, cyber, electromagnetic events. The list goes on,” she added.

AI capabilities combined with quantum computing could lead to a different approach to problem solving, effectively changing the way questions are asked, said Air Force Office of Scientific Research Program Officer Doug Riecken while discussing the importance of building practical solutions for various industries.

“Machines have to learn faster,” he said. “Does that mean we’re doing something new with an algorithm that’s never been done before? A different kind of structure, a different kind of slow cooker? That’s what I want to find out about, how it can influence the way that these things work differently.”

Ahluwalia concurred, using climate change as an example.

“It’s not about reducing greenhouse gases anymore,” he said. “Maybe we should start thinking, if quantum meets AI, can I do photosynthesis in a synthetic manner? Maybe that is the way to go, rather than trying to convince people not to eat meat.”

“And there’s this potential for AI applied to the biological sciences, to apply to advanced materials discovery, to really supercharge that discovery process and enable us to tackle some really big challenges,” said deBlanc-Knowles. “We’ve seen that through [Google] DeepMind’s AlphaFold. And you could imagine that being replicated across different fields of science and engineering, to really move us forward and enable us to develop whole new solutions to sustainability and climate change and the way that we manage agricultural resources.”

Prior to his current role, Riecken worked at Bell Labs. The first thing he told his team there, he said, was that they needed to build machines that can explain themselves to themselves, a concept now known as explanation-based learning. He expressed his fears about AI’s potential evolution and reiterated the need for managing AI technology.

“There was a recent area of work where the machines were developing a new language. It was a language they could use to talk to each other, and a human could never understand what they were saying,” Riecken said. “So, how are we going to monitor [AI so that] the machines are not going to have control?”
