CBP Considers Generative AI in Border Security Systems
The agency is weighing privacy protections amid advancements in identity processes and generative AI development.
CBP has been developing predictive analytics models since the mid-2000s, but it has recently begun integrating interactive and generative AI into its applications as it weighs privacy policy, the agency's tech chief said at the Identity Week America conference in Washington, D.C., last week.
Artificial intelligence has the potential to improve border protections and enhance security for officers monitoring the people and goods that enter the country. But the technology also brings privacy considerations.
Additionally, CBP has used neural networks that make decisions in technologies like drones, prompting the agency to examine the privacy laws it must follow.
“When you make decisions, you have to make sure your privacy rights, security component, everything is taken care of. The United States is a nation of laws, and we absolutely take that very seriously across the board, both from biometric components all the way to data retention and everything else,” said CBP CTO Sunil Madhugiri.
Madhugiri outlined how CBP has been using image recognition, object detection and facial comparison as part of its non-intrusive inspection philosophy.
Generative AI has prompted the agency to develop a new approach to the technology, he added. For its predictive models, CBP used and monitored data generated in house, but for generative AI applications it now relies on models supplied by pre-approved commercial providers.
“These models — we’re not building them ourselves. They’re coming from a provider, either from the commercial area or from open source. It’s extremely critical that we take care of scans and everything else,” Madhugiri said. “Neural networks have biases. Humans write the neural networks. They figure out which path to choose, and based on that you can have biases. We are trying to make sure that those biases are the best we can have.”
Part of working with commercial providers is understanding how they acquire the data they use in their models to ensure its quality and security, said CBP Futures Identity Deputy Assistant Director William Graves.
“The days of ‘it’s secret sauce, we can’t tell you’ are over. You have to tell us, the government, how it’s making decisions,” Graves said.
Graves urged industry to train its models on good data, since bad models could have dire consequences for Department of Homeland Security agencies.
Even with generative AI, Madhugiri underscored the importance of keeping the human in the loop. He related a story of riding along with an officer with 15 years of experience who was able to identify a vehicle harboring illegal goods based purely on instinct.
“The officers are always in charge. Our job is to make sure to assist them with technologies, and then they make the decision,” Madhugiri said.