DHS Wants to Be an AI Early Adopter
“We have to celebrate it, and we have to be sober about it at the same time,” Robert Silvers said.
As the Biden administration prepares to release a highly anticipated executive order on artificial intelligence in the coming weeks, the Department of Homeland Security is developing its own set of priorities for responsible deployment of the nascent technology.
The agency is striving to be an “aggressive” adopter of AI while putting the governance, risk management and policy in place to ensure the systems it deploys are trustworthy.
“We should be early and aggressive adopters of the technology. We should also be at the vanguard of establishing rules for responsible and ethical and safe use for our own programs. And so we’re doing both,” said DHS Undersecretary for Policy Robert Silvers at this week’s Institute for Critical Infrastructure Technology’s AI DC 2023: Securing America’s Future conference in Arlington, Virginia.
“We look at issues like fentanyl, which is killing so many Americans every year. It is so small and hard to detect. … AI is holding tremendous promise for how we can better target and interdict fentanyl. The same for security screening at airports, how you can streamline and make that more accurate,” he added. “Now, there are real risks when it comes to artificial intelligence. We have to be sober about that. But we also have to celebrate where we’re going.”
Silvers said that DHS is working to provide critical infrastructure firms with guidance on responsible deployment of AI. The agency will provide guidelines on testing and auditing front- and back-end systems, when to introduce a human in the loop and how to deal with major system failures.
The agency will work closely with industry partners to deliver guidance that will allow critical infrastructure companies to improve resiliency to adversarial attacks and provide more visibility into the risks of AI.
“We’re leaning into it together with industry partners. I mean, we really have to be humble in the government about our level of understanding of this technology. It’s super complex. It’s super nuanced. And talent is at a premium. And we’re fighting for talent. The companies are fighting for talent. And it’s something that’s nascent. It will only succeed if we are literally shoulder to shoulder with the companies … that are developing the technology,” said Silvers.
Silvers also said that the agency’s Cyber Safety Review Board, a public-private partnership meant to provide federal agencies and the private sector with concrete recommendations after major cyber incidents, is currently reviewing the recent Chinese cyber operation that hacked government agencies’ Microsoft accounts.
The review board “is a truly public-private undertaking,” Silvers said.