Health Leaders Weigh AI’s Impact on Health Equity
The federal executive order on AI pushes agencies to develop much-needed trustworthy systems in health care.
Federal leaders are strategizing how to develop AI systems critical to curbing health care disparities as a new executive order puts more emphasis on doing so responsibly.
Health care disparities in the U.S. can affect health coverage, mental health, chronic health conditions and mortality rates. Demographic factors like socioeconomic status, gender, sexual orientation, age, disability status and race shape individuals’ ability to receive health care.
President Biden’s executive order on artificial intelligence, signed Monday, makes it an imperative for agencies to develop AI responsibly and equitably. The order directs agencies to pay closer attention to data and algorithms to mitigate harm to workers and the public.
For many federal leaders, curbing health disparities and designing technology for this goal starts with the data. It’s a major initiative within the Department of Health and Human Services through efforts like Healthy People 2030 and the U.S. Core Data for Interoperability (USCDI) standard.
“We are really thinking about health equity as a core design principle,” said National Coordinator for Health IT Dr. Micky Tripathi during a Congressional briefing last week with HIMSS. “One is starting with the data itself. … You’ve got to have that data available in order to be able to identify where there might be communities that are getting different types of care.”
Ethical AI in Health Care
Officials see great promise in technology — particularly AI — as a driving factor in achieving equity in the health care ecosystem when developed responsibly.
“If we get it right, [it has] tremendous potential to close gaps and problems that we have never been able to do before,” said Greystone Group CEO Dr. Chris Gibbons at the briefing.
Monday’s executive order brings the safety of AI-based tools to the forefront of priorities, as experts warn of data security and privacy issues. Through the order, HHS is tasked with creating a first-of-its-kind program to evaluate harmful health care practices that involve AI and build educational tools to combat threats.
ONC is in the final stages of a proposed rule that would empower health care providers and ensure more transparency over algorithms used in clinical decision support.
But the agency is not “trying to regulate AI.”
“We’re not trying to get into the business of saying that’s a good AI tool, that’s a bad AI tool,” said Tripathi. “We believe very strongly that there’s net upside potential here from an integrity perspective, and for patients at large, but also from an integrity perspective if the use of these tools are used responsibly.”