Federal Agencies Explore AI Use Cases to Prevent Bias
Critics worry AI will exacerbate disparities, but some federal AI officials believe the technology can identify biases and promote equity and fairness.
Federal IT leaders are not only considering how to develop trustworthy artificial intelligence (AI) and machine learning (ML) tools, but also how to leverage these capabilities to prevent bias and inequity across IT systems and decision-making processes.
AI can expose shortfalls in organizations that develop, model and integrate AI tools, revealing where they lack quality data or have gaps in how their algorithmic models are trained, according to Defense Logistics Agency AI Strategic Officer Jesse Rowlands.
AI can also help organizations identify where to strengthen fairness and equity practices, he said during an ATARC event Tuesday.
“If you’re doing smart AI, you’re doing ethical checks, you can bring forward a lot of problems you may have in your systems just by utilizing AI already,” Rowlands said. “A lot of ethical problems aren’t nefarious. They’re a lot more from ignorance, like we didn’t know that this was happening. If we’d done a statistical study or an AI study, we would have known it, but when you bring in the technology, you start exposing those weaknesses if you’re doing it.”
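Rowlands did not point to a specific tool, but the kind of statistical check he describes can be as simple as comparing outcome rates across groups in historical decision data. The sketch below is a minimal illustration, not agency code; the column names, sample data and the 80% review threshold (the common "four-fifths rule") are assumptions:

```python
import pandas as pd

# Hypothetical historical decision data; in practice this would be
# pulled from an agency system of record.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Approval (selection) rate per group.
rates = df.groupby("group")["approved"].mean()

# Disparate-impact ratio: lowest group rate over highest group rate.
# The "four-fifths rule" commonly flags ratios below 0.8 for review.
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparity detected; review the underlying data.")
```

A check like this does not prove discrimination, but it surfaces the "problems you may have in your systems" that Rowlands says often stem from ignorance rather than intent.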
Department of Commerce Chief Data Scientist Chakib Chraibi agreed, adding that AI is a powerful technology that can potentially counteract human biases as long as the right policies and frameworks are in place.
“We know that data is one of the major issues that affect fairness and equity in the models and affect responsible AI,” Chraibi said. “On the other hand, I want to make sure that we all understand that we should see AI as an opportunity, because we have to look at when we use responsible AI, we compare it to what our current situation is.”
Chraibi said there are currently equity shortfalls and barriers across different business functions, and while AI can exacerbate and amplify some of those issues, it can also promote and achieve equity and fairness.
One of the ways Commerce is trying to drive AI in a trustworthy direction is through the development of its own AI ethics framework.
“We tried to make the framework easy to access and not prescriptive, basically just tried to raise awareness about the issue,” Chraibi said. “This is just the first step. … I think one major aspect is, of course, to develop stronger data culture within the federal government. We need to better understand what data are, how to use them, how to access them, and how to make sure that the data are the right quality.”
Many federal AI leaders are focusing on ethics use cases in areas where human judgment is involved, because those areas are often susceptible to human biases. These include human resources, contractor selection, bidding, buying, finance and more.
The Treasury Department’s Office of Compliance and Community Affairs Deputy Comptroller for Compliance Risk Donna Murphy said her organization released a request for information (RFI) last year to examine how financial institutions use AI, helping inform risk management and ethics around the technology. Her team is pushing to strengthen AI oversight so that organizations can apply the technology responsibly and promote equity in their activities.
Murphy said her agency, as well as the Federal Housing Finance Agency and Consumer Financial Protection Bureau, are working on automated valuation model rulemaking to eliminate potential discrimination in property valuations.
“One of the important questions here that the agencies are looking at is whether and how we can identify the bias that’s built into the underlying data and use the models, ensure that the models don’t continue to exacerbate that, and hopefully that they mitigate it and promote fairness,” Murphy said.
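The agencies have not published a method, but one illustrative way to surface the kind of built-in bias Murphy describes is to compare a valuation model's error across neighborhood types. The following is a minimal sketch under that assumption; the data, column names and tract labels are hypothetical, not drawn from any rulemaking:

```python
import pandas as pd

# Hypothetical automated valuation model (AVM) output joined with
# actual sale prices; all values and labels are illustrative.
df = pd.DataFrame({
    "tract":      ["majority_A", "majority_A", "majority_B", "majority_B"],
    "avm_value":  [310_000, 295_000, 240_000, 255_000],
    "sale_price": [300_000, 300_000, 275_000, 290_000],
})

# Relative valuation error: negative values indicate undervaluation.
df["rel_error"] = (df["avm_value"] - df["sale_price"]) / df["sale_price"]

# If mean error differs systematically by tract type, the model may be
# reproducing bias present in its training data.
print(df.groupby("tract")["rel_error"].mean())
```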
Some federal leaders also recognize that promoting data literacy and enhancing representation across the workforce — especially those who work with AI — is a critical step toward fair and responsible solutions. Proper data and frameworks matter, but Defense Digital Service AI/ML & Cybersecurity Digital Service Expert Nelson Colón said workforce diversity is just as important.
“I’ve always found that when we bring people from all different types of backgrounds, we actually build better products — we actually build better solutions,” Colón said. “My teams in academia, when I worked in research on gerrymandering — how do we use data to improve, to build better, more fair maps? Our teams were very diverse, from people from policy, people from data science, people from math, people from social sciences, and our best work came from having different opinions and different perspectives in the same room.”