Effective AI Requires Constant Monitoring, AI Leaders Say
Refining and reviewing data models is essential for impactful application of artificial intelligence.
Federal data leaders detailed how they test artificial intelligence (AI) models for accuracy and relevance at GovCIO Media & Research’s AI Gov: Data event Thursday.
AI models are generally only as effective as the data used to train them. When that data is not continually refreshed, or its limitations go unaddressed, a model’s conclusions may be inaccurate. Even when the data is accurate and current, AI models can fail to account for discrepancies or variations in the data and produce flawed results.
The Department of Veterans Affairs (VA) has made recent investments in AI and uses advanced modeling for public health and biomedical research. The data leaders overseeing these initiatives have become increasingly attuned to the need to review their models for bias and to note the limitations of a given data set so that results are interpreted and applied correctly.
“You really do need to test models – systematically, strategically, over and over again, with different subpopulations and even with different VA hospitals,” said Amanda Purnell, VA Director of Data and Analytics Innovation, at the event. “So there is no one model for the VA. That wouldn’t work. Because people are different by region, different by area, different by health condition. So we have to continuously test and retest models. And we have the computer power to do this incredible work with reconfiguring, readjusting, developing specialized and tailored models for different groups of people.”
Other health and science-focused agencies have worked to apply similar review standards to their AI projects, including the Centers for Disease Control and Prevention’s (CDC) efforts to model the spread of COVID-19. This requires large-scale recalibration when new variants emerge to ensure projections account for the virulence and pathology of new strains.
“There is no single model that is applied. We apply a model which is usually trained on a specific set of data to address a specific question,” said Fred Streitz, Senior Advisor at the CDC Center for Forecasting and Outbreak Analysis, at Thursday’s event. “And those models are recalculated constantly, and the turnaround on that is hours or days. So as new variants emerge, we respond to that very quickly and retrain models as necessary to address specific questions. But there is no one model to rule them all. There is no one forecast that’s absolutely correct. Everything is in the context in which it was built and the information that went into the model.”
The unifying process behind all these initiatives is ongoing review of data inputs, which helps ensure the integrity of AI models’ conclusions.
“This is an iterative process and models get built and used by individuals, but there’s this continuous monitoring of the models that also needs to occur,” said David Keever, Vice President and Division Chief Scientist at the Leidos Innovations Center, at Thursday’s event.
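In practice, continuous monitoring of this kind often amounts to re-scoring a deployed model on fresh, subgroup-level data and flagging any group whose performance has drifted, then retraining a tailored model for that group. The Python sketch below is illustrative only; the function names, subgroup labels, and accuracy threshold are assumptions for the example, not code or thresholds from VA, CDC or Leidos.

```python
# Illustrative sketch of subgroup-level model monitoring.
# evaluate_model, retrain_model, SUBGROUPS and ACCURACY_FLOOR are hypothetical
# placeholders chosen for this example.

from statistics import mean

SUBGROUPS = ["region_a", "region_b", "region_c"]  # e.g., regions or facilities
ACCURACY_FLOOR = 0.80                              # assumed acceptance threshold


def evaluate_model(model, records):
    """Return per-record correctness (1.0 or 0.0) for the current model."""
    return [1.0 if model(r["features"]) == r["label"] else 0.0 for r in records]


def monitor(model, data_by_group, retrain_model):
    """Re-score each subgroup and retrain tailored models where accuracy drifts low."""
    underperforming = []
    for group in SUBGROUPS:
        accuracy = mean(evaluate_model(model, data_by_group[group]))
        print(f"{group}: accuracy={accuracy:.2f}")
        if accuracy < ACCURACY_FLOOR:
            underperforming.append(group)

    # Rather than relying on a single global model, build a specialized model
    # for each group that fell below the threshold.
    return {g: retrain_model(data_by_group[g]) for g in underperforming}
```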