CBP Considers Generative AI in Border Security Systems
The agency is weighing privacy protections amid advancements in identity processes and generative AI development.
CBP has been developing predictive analytics models since the mid-2000s, but it has only recently begun integrating interactive and generative AI into its applications as it weighs privacy policy, the agency’s tech chief said at the Identity Week America conference in Washington, D.C., last week.
Artificial intelligence has the potential to improve border protections and enhance security for officers monitoring the people and goods that enter the country. But the technology also raises privacy concerns.
Additionally, CBP has deployed neural networks that make decisions in technologies like drones, prompting the agency to examine its privacy obligations.
“When you make decisions, you have to make sure your privacy rights, security component, everything is taken care of. The United States is a nation of laws, and we absolutely take that very seriously across the board, both from biometric components all the way to data retention and everything else,” said CBP CTO Sunil Madhugiri.
Madhugiri outlined how CBP has been using image recognition, object detection and facial comparison as part of its non-intrusive inspection philosophy.
Generative AI has warranted a new approach to the technology within the agency, he added. Under CBP’s predictive models, the agency used and monitored data generated in-house, but it now relies on pre-approved commercial providers to supply the data it needs for generative AI applications.
“These models — we’re not building them ourselves. They’re coming from a provider, either from the commercial area or from open source. It’s extremely critical that we take care of scans and everything else,” Madhugiri said. “Neural networks have biases. Humans write the neural networks. They figure out which path to choose, and based on that you can have biases. We are trying to make sure that those biases are the best we can have.”
Part of working with commercial providers is understanding how they acquire the data they use in their models to ensure its quality and security, said CBP Futures Identity Deputy Assistant Director William Graves.
“The days of ‘it’s secret sauce, we can’t tell you’ are over. You have to tell us, the government, how it’s making decisions,” Graves said.
Graves urged industry to train its models on good data, since bad models could have dire consequences for Department of Homeland Security agencies.
Even with generative AI, Madhugiri underscored the importance of keeping the human in the loop. He related a story of riding along with an officer with 15 years of experience who was able to identify a vehicle harboring illegal goods based purely on instinct.
“The officers are always in charge. Our job is to make sure to assist them with technologies, and then they make the decision,” Madhugiri said.