
CBP, Johns Hopkins Leaders Tout Operational Advances in AI


CBP is using AI to streamline workflows and cut processing time, while prioritizing employee trust and adoption.

CBP CAIO and Chief Innovation Officer Joshua Powell speaks at GovCIO Media & Research's AI in Action Workshop in Washington, D.C., on Aug. 14, 2025. Photo Credit: Invision Events

Leaders from Customs and Border Protection and academia touted the impact of artificial intelligence development on operations. 

CBP, which manages over half of the Department of Homeland Security’s AI inventory, has seen strong returns from AI applications in border monitoring and internal administrative work, the agency’s CAIO and Chief Innovation Officer Joshua Powell said at the AI in Action Workshop Thursday.

“We’ve seen a 50% reduction in time it takes actually completing documents,” Powell said. “[HR is] going from between eight and 24 hours for document creation down to about two to three hours.”

CBP is focusing on workforce training to build trust in AI, boost adoption and bridge skill gaps.

“It’s not a replacement for us within our jobs, but we need to find ways to leverage it, to just be more efficient in our day to day, so we don’t get bogged down in the smaller things that we’re doing,” said Powell. 

Powell said AI is decreasing employee burnout and increasing efficiency by reducing monotonous tasks like data entry and camera-feed monitoring, freeing CBP’s workforce to focus on mission-critical efforts.

The agency uses cameras to monitor remote areas along the border. Powell said monitoring these feeds manually was often a time-consuming and laborious process, with agents sometimes watching 50 to 70 camera feeds at once. Agents can use AI to help accurately identify people crossing the border.

“A single person staring at the same camera over days on end is something, and you can’t keep that going,” said Powell. “Bringing computer vision into that also helps us to say, ‘Hey, I’m just going to help you find the things you’re looking for.’”
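CBP has not published the details of its computer vision pipeline, but the triage pattern Powell describes, flagging frames that likely contain a person so an agent reviews only the hits, can be sketched with off-the-shelf tools. Below is a minimal illustration using OpenCV’s stock HOG person detector; the video source and frame-sampling rate are placeholder assumptions, not CBP’s actual system.

```python
# Minimal sketch: flag frames that likely contain people so a human
# reviews only the hits instead of staring at the raw feed continuously.
# Uses OpenCV's built-in HOG + linear-SVM person detector; a real
# deployment would use a stronger model, but the triage pattern is the same.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("camera_feed.mp4")  # placeholder source
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 15 != 0:  # sample ~2 fps from a 30 fps feed (assumption)
        continue
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes):
        # In practice this would raise an alert for an agent to review;
        # a real system would also threshold on detection confidence.
        print(f"frame {frame_idx}: {len(boxes)} possible person(s) detected")
cap.release()
```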

Partnering AI and Human Intelligence for Mission Delivery

Glenn Parham, CEO of GovBench and the inaugural technical lead for AI at the Defense Department, said agencies must hold AI models to the same standards as human workers, if not higher. For example, a chatbot used by the State Department should be able to pass a Foreign Service Officer exam. 

“These models need to use mission data in context. They need to beat human proficiency on required certifications and exams,” said Parham. 
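Parham’s bar translates naturally into an evaluation harness: score the model on the same exam a human would take and require it to clear the human passing cutoff before trusting it with the corresponding work. The sketch below is a generic illustration; the toy exam items, the `ask_model` stub and the 70% cutoff are hypothetical stand-ins, not any agency’s actual certification.

```python
# Minimal sketch of the standard Parham describes: grade a model on a
# certification-style exam and require it to clear the human passing bar.
# Everything here (exam items, model call, cutoff) is a hypothetical stand-in.

HUMAN_PASSING_SCORE = 0.70  # assumed cutoff for illustration

exam = [  # toy multiple-choice items; a real bank would be far larger
    {"question": "2 + 2 = ?  (A) 3  (B) 4  (C) 5", "answer": "B"},
    {"question": "Capital of France?  (A) Paris  (B) Rome  (C) Lima", "answer": "A"},
]

def ask_model(question: str) -> str:
    """Stub standing in for a call to the model under evaluation."""
    return "B"  # a real implementation would query the model's API

def evaluate(items) -> float:
    correct = sum(ask_model(i["question"]).strip() == i["answer"] for i in items)
    return correct / len(items)

score = evaluate(exam)
print(f"model score: {score:.0%} (human passing bar: {HUMAN_PASSING_SCORE:.0%})")
if score < HUMAN_PASSING_SCORE:
    print("Below the human bar: not ready to augment this role.")
```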

Parham wants to see more AI adoption across all levels of government. He said agencies need to move past the “chatbot phenomenon” and use AI to its full potential to augment the federal workforce. He added that AI models should show their “thought process,” and that agencies need to ensure chat logs are treated like other official government correspondence.

“We need to ensure that the AI is writing these immutable audit logs, these chats need to be treated like government records, just like email, just like any other communication,” said Parham. “We need to make sure that every single interaction is logged and traced.” 
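The panel did not detail a specific logging design, but one common way to get the tamper-evident property Parham describes is an append-only log in which each entry embeds a hash of the previous one, so altering or deleting any record breaks the chain. A minimal sketch, assuming a local JSON-lines file (a production system would write to write-once storage):

```python
# Minimal sketch of an append-only, tamper-evident chat log: each entry
# embeds the SHA-256 hash of the previous entry, so editing or deleting
# any record breaks the chain. This is one common way to approximate the
# "immutable audit log" Parham describes, not his specific design.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "chat_audit.jsonl"  # assumed location for illustration

def _last_hash() -> str:
    try:
        with open(LOG_PATH, "rb") as f:
            lines = f.read().splitlines()
        return hashlib.sha256(lines[-1]).hexdigest() if lines else "GENESIS"
    except FileNotFoundError:
        return "GENESIS"

def log_interaction(user: str, prompt: str, response: str) -> None:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "prev_hash": _last_hash(),  # chains this record to its predecessor
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_interaction("analyst1", "Summarize the case file.", "Summary text here.")
```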

American Institute of Artificial Intelligence CEO Ali Naqvi echoed Parham’s calls to treat AI as a human counterpart, adding that humans don’t think in a linear fashion and neither do AI models. This nonlinear thinking can be beneficial as AI becomes a “multitasker.”  

“That’s the new mindset that’s needed in America and government and businesses,” said Naqvi. “We need to shift from these single task use cases to multiple multitask learning models wherever we can. It’s not for everything.” 

Getting the AI Basics Right

In addition to ensuring teams are ready for AI, agencies need to drill down on basic data practices. Senior Analyst for Responsible AI at Johns Hopkins Applied Physics Lab Julie Obernauer-Motley said most AI projects fail because of poor-quality or inaccessible data assets. 

She outlined three lessons for success in AI: get the data right, ensure the platform works and involve users early. 

Obernauer-Motley recalled helping the Defense Department build a predictive maintenance algorithm on 10 years of maintenance reports, all of which originated as paper records stored in file cabinets.

Obernauer-Motley emphasized that for AI to work, data must be treated as part of a living cycle that is routinely maintained.

“We eventually had to tell [the command], you don’t have data, you have a fire hazard,” said Obernauer-Motley. “We say we’re going to put this into a data lake, which quickly becomes a data swamp because we don’t think how we’re going to clean it, we don’t think how we’re going to maintain it, or how we’ll sustain it.” 
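One concrete way to keep a lake from becoming a swamp is an automated hygiene gate at ingestion: validate every record before it lands and quarantine failures for review rather than letting them accumulate. The sketch below illustrates the idea; the field names and rules are hypothetical, loosely modeled on maintenance reports like the ones Obernauer-Motley describes.

```python
# Minimal sketch of an ingestion gate that keeps obvious junk out of a
# data lake: validate each record before it lands, quarantine failures.
# Field names and rules are hypothetical, chosen to fit maintenance reports.
from datetime import datetime

REQUIRED_FIELDS = {"report_id", "asset_id", "date", "fault_description"}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is clean."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "date" in record:
        try:
            datetime.fromisoformat(record["date"])
        except ValueError:
            problems.append(f"unparseable date: {record['date']!r}")
    if not record.get("fault_description", "").strip():
        problems.append("empty fault description")
    return problems

clean, quarantine = [], []
for rec in [{"report_id": "R1", "asset_id": "A7", "date": "2015-03-02",
             "fault_description": "hydraulic leak"},
            {"report_id": "R2", "date": "03/02/15"}]:  # legacy, incomplete
    (quarantine if validate(rec) else clean).append(rec)

print(f"{len(clean)} clean record(s), {len(quarantine)} quarantined for review")
```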

AI has myriad use cases, and the technology touches nearly every industry, according to the Honorable Paul Grimm, director of the Bolch Judicial Institute at Duke Law School.

“You cannot avoid the use of artificial intelligence. It’s in health care, education, employment, finance, law enforcement, government, the military; this technology has exploded, and it will continue to explode. The uses, when properly used, are remarkable,” said Grimm.

Obernauer-Motley said the prevalence of AI highlights the importance of early and continuous engagement in AI to ensure users, developers and operators are involved from the start. 

“The really important thing is understanding that these are human capabilities that humans have to be able to use,” said Obernauer-Motley. “If you don’t know what data you have, and if you don’t have a platform that works, it doesn’t matter how good your AI is, you fundamentally will not build a system that works. And until we can do those things, we’re not going to move ahead.” 
