
NIST Report Urges Measurable Outcomes for AI Standards


AI standards often lack clear metrics, according to a new NIST report that proposes a theory of change model to guide adoption and evaluation.

Entrance of the Gaithersburg Campus of National Institute of Standards and Technology, Jan. 30, 2021. Photo Credit: NIST

A new National Institute of Standards and Technology (NIST) report proposes using a “theory of change” model to boost AI standards adoption, evaluate their effectiveness and inform future development.

“Notable goals of AI standards, particularly with respect to AI data, performance, and governance, are to promote innovation and competition, minimize harm, and promote public trust in systems that use AI in a manner consistent with the United States’ private sector-led approach for developing and applying standards. The intent of this report is to sketch a possible approach for evaluating whether a given AI standard or set of standards meet these goals,” the report reads.

The report notes that AI standards are often developed without a formal way to measure adoption or assess whether they drive innovation or build trust.

Julia Lane, a NIST associate and the report’s author, proposes using an economic framework known as the theory of change, which relies on continuous feedback from community stakeholders to determine what is and is not working. Standards that address the practical needs of the AI community are more likely to be adopted than those created without clear use cases, Lane told GovCIO Media & Research in an interview.

“If you just ask for standards, you’re going to get a lot of standards, but that doesn’t mean they’ll be adopted,” Lane said. “What I was trying to get people to think about is what kind of standards are likely to be adopted, and how do you engage the community to adopt them?”

Lane said federal leaders, industry and academic institutions are all key partners in measuring the impact of AI standards. Together, they can identify where standards fall short and update them as AI tools and use cases evolve.

“The basic idea here is to figure out what’s going wrong, measure it and iterate,” Lane said. “Think about what adoption will look like, identify the goals, propose standards, measure them and evaluate whether they’re working.”

She added that while standards can inhibit innovation, they can also “tremendously transform” how innovation occurs. Allowing for experimentation and trial and error, she said, could reshape how AI standards are developed and implemented.

“I think the call for standards is the right one, but we need the ability to experiment and find out what works,” Lane said. “The federal government, in conjunction with its partners, can look holistically at different outcomes and establish standards that help things move much more quickly.”

Turning AI Standards Into Measurable Outcomes

Federal agencies are increasingly turning to AI to support complex tasks, such as integrating and analyzing thousands of datasets. Lane pointed to the criminal justice system as one area where the federal government could demonstrate the measurable impact of AI standards.

The report highlights how AI standards could improve data integration across criminal justice systems and reduce societal and financial costs associated with crime. Lane said AI applications guided by clear standards could link records across agencies to track individuals throughout the criminal justice system.

“It’s city, state and federal data, and there are tremendous challenges with linking those assets across different jurisdictions,” Lane said. “But the federal government could certainly be aided by promulgating standards around linkage decision-making.”

Standards focused on error measurement, she said, could increase transparency around potential data integration mistakes. Standards addressing explainability and interpretability could also help ensure users trust AI systems and their outputs.
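As a rough illustration of the kind of transparency Lane describes (not drawn from the report itself), the toy sketch below links two records from different jurisdictions and reports a confidence score alongside the decision, so potential linkage errors can be measured and audited. The function names, the similarity metric and the 0.85 threshold are all hypothetical choices for this example.

```python
# Illustrative sketch: a record-linkage decision that exposes its own
# confidence score, so downstream users can measure error rates.
from difflib import SequenceMatcher


def link_score(rec_a: dict, rec_b: dict) -> float:
    """Average string similarity across the fields both records share
    (a hypothetical metric for illustration only)."""
    fields = rec_a.keys() & rec_b.keys()
    return sum(
        SequenceMatcher(None, str(rec_a[f]).lower(), str(rec_b[f]).lower()).ratio()
        for f in fields
    ) / len(fields)


def link_decision(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> dict:
    """Return the linkage decision together with its score, making the
    basis for the decision transparent rather than a silent yes/no."""
    score = link_score(rec_a, rec_b)
    return {"linked": score >= threshold, "score": round(score, 3)}


# Example: a city record and a state record for the same person,
# entered with slightly different formatting.
city_record = {"name": "Jon Q. Smith", "dob": "1990-04-12"}
state_record = {"name": "John Q Smith", "dob": "1990-04-12"}
print(link_decision(city_record, state_record))
```

Because the score travels with every decision, an agency adopting a standard like this could sample linked pairs, check them by hand, and estimate its false-match rate over time.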

“I hope people will read this and take the initiative to identify those areas where standards are most needed,” said Lane. 

NIST is seeking public feedback on its report and intends to announce an online event to encourage dialogue later this year. 
