
Responsible AI Development Requires Risk Management

Officials emphasize the importance of addressing bias, data quality and civil liberties in AI implementation.

DHS Chief Technology Officer David Larrimore speaks at the GovCIO Media & Research AI Summit in Tysons, Va. on Nov. 7. Photo Credit: Capitol Events Photography

Federal agencies are focusing on risk management to develop and implement responsible artificial intelligence, federal IT officials said Thursday at the inaugural GovCIO Media & Research AI Summit in Tysons, Va. Leaders must address bias, data quality and civil liberties, said Martin Stanley, AI and cybersecurity researcher at the National Institute of Standards and Technology’s (NIST) AI Innovation Lab.

“What we need for responsible AI is actually measurement of risks, impacts and harms, which are in this entirely different ball game [than performance metrics],” said Stanley.

The Department of Homeland Security (DHS) plays a leading role in responsible AI development across government, DHS Chief Technology Officer David Larrimore said. When DHS first stood up its AI Task Force, he said, it put rules in place for ethical and responsible deployment, including bringing together agency stakeholders.

“We made this really strong statement that the Office of Civil Rights and civil liberties have a really important place around AI governance,” said Larrimore. “Since that time, partnership has increased. We’ve brought in the office of privacy. We have brought in [the Office of the General Counsel]. We brought in policy and essentially everything we do from a big picture perspective.”

NIST’s AI Risk Management Framework helps agencies manage the risks and benefits of innovative AI systems, as outlined in the Office of Management and Budget’s 2023 M-24-10 memo. NIST, Stanley said, is working with the Chief Artificial Intelligence Officers Council’s risk management working group to better adapt the framework.

“We are identifying resources and tools that can assist agencies. We’re trying to help them with the adaptation of the AI Risk Management Framework that’s flexible, adaptable, risk-based, rights-preserving and voluntary, of course, but we’re trying to help that process happen at federal agencies,” said Stanley.

M-24-10 directs agencies to “seize the opportunities AI presents while managing its risks” but isn’t “very specific on how we measure those things,” according to Dr. Kaeli Yuen, artificial intelligence product lead in the Department of Veterans Affairs Office of the Chief Technology Officer. She said VA evaluates risk in the systems already in place at the agency.

“We put out a call for information about AI use cases so that we could populate our 2024 inventory, and we got over 600 responses,” said Yuen. “The challenge is really educating all of those AI use case owners as to how to evaluate the AI impact, the risks and mitigate those risks.”

VA doctors are using AI scribes, Yuen said, to augment their clinical work, but human oversight is critical to managing risk with the technology. The doctors have to ensure that clinical decisions are made with human judgment and empathy.

“It listens to your clinical encounter and writes the doctor’s note for them. However, the note doesn’t just go into your record as part of your record, it would be there as a draft and the physician would then have to edit it before being able to sign it and officially make it part of the record,” said Yuen.

Red Hat Chief Architect Adam Clater added that agencies can implement AI deliberately to minimize risk.

“It just speaks to the need to just be very intentional,” said Clater. “As you bring each piece of AI into your mission arena, you have to be very intentional about evaluating what it’s doing and how it’s doing it.”

Ultimately, risk management looks different for different agencies and different use cases, Yuen said.

“Each use case is so different,” said Yuen. “Each use case has its own specific questions that are important to answer, beyond the high-level topics that M-24-10 wants us to think about.”
