
AI Breakthroughs Demand Health Agencies Push New Workforce Strategies

Officials say health care organizations can adopt artificial intelligence with the right culture, data strategy and training.

FDA CDRH Chief Medical Officer for Digital Health Center of Excellence Dr. Matthew Diamond speaks during the Health IT Summit in Bethesda, Maryland, Sept. 19, 2024. Photo Credit: Capitol Events Photography

Artificial intelligence adoption in health care requires a workforce strategy to hire, empower and educate staff, top government health IT leaders said at GovCIO Media & Research’s Health IT Summit in Bethesda, Maryland, on Thursday. 

“There’s a pretty big mismatch between the experts in AI and machine learning, mostly coming from the computer science world, and the people that we’ve historically funded,” said National Institutes of Health Assistant Director for Catalytic Data Resource Chris Kinsinger. “[They are] usually PhDs and medical doctors with a lot of familiarity with biology and medicine.”

Education is critical for everyone involved in AI-enabled medical devices at the Food and Drug Administration’s Center for Devices and Radiological Health. Dr. Matthew Diamond, chief medical officer of the center’s Digital Health Center of Excellence, explained that education is key to better health outcomes, something the FDA is encouraging.

“Transparency is so important for users of artificial intelligence-enabled medical devices, or patients, broadly,” said Diamond. “We’ve put out a number of papers on the role of transparency in health and [held] workshops [on] understanding how a device is intended to be used and how it’s not intended to be used.”

Transparency can help agencies and health care organizations demystify emerging technology and its health applications, said Tony Boese, Department of Veterans Affairs’ National Artificial Intelligence Institute research programs manager and associate researcher. Leaders need to explain the risks and benefits, he said.

“Education is going to be very important to give people that sort of understanding in a better sense that it’s not Skynet and it’s not Terminator,” said Boese. “It can do some stuff for you, and some of the stuff that [it] does, it does well and other things poorly. [Organizations need to give] people that clarity to show that it’s not magic or superior or strange, it’s just a slightly new type of computation.”

Workforce Burnout

Burnout is a concern in health care, ServiceNow Field CNIO and Healthcare Enterprise Architect Erin Smithouser said, and AI can help reduce some of the burdens on clinical staff. AI digital assistants, she suggested, can be a way to reduce burnout, improve patient care and educate clinicians on AI use cases. 

“All the research shows that all they want to do is take care of the patients. What if we could augment their responsibilities there, lift the clinical burden of administrative tasks and operational challenges?” said Smithouser. “What if this could be a true digital assistant to the clinicians and earn their trust that way?”

Organizations can create environments to expose the workforce to AI, which can help staff members become more comfortable with the technology, Boese added.

“Nice, controlled sandbox areas [are great] for people to go mess around and see what happens, and again, see that it’s not so strange and foreign, and that it’s not magic, and they can do it, and they can use it,” said Boese. “They can understand it better. Those sorts of things, I think, are the only way to really cultivate a good and healthy AI culture in an entity like an agency or company.” 

Challenges of Implementation 

AI systems in health care require a lot of data. Patient data is often constrained by privacy laws, making data less ready for AI applications in health care than in other settings. Agencies need to invest in the correct data strategy for AI systems before jumping in to train models on every data set, Kinsinger said.

“We have a lot of data, but how do we make that data AI-ready? And there’s a debate about that: Is it to really make pristine, expert, curated data sets and invest there?” Kinsinger said. “Other people are telling me, ‘Just make the data available. We can train it. AI can do anything and we can do it that way.’” 

Diamond noted that the correct data models can be invaluable to health care organizations, as long as they adhere to health care privacy laws and ethics. 

“[There is] movement toward more of a federated model, where data is not necessarily transferred or shared, but you have a platform where you can train a model at local sites, where the data stays on site, and all of that training contributes to a central model,” said Diamond. “Equally as important for federated testing or validation as well is to ensure that a model is performing within a diverse set of conditions, and to do that without having to actually transfer the patient data.”

“I think ARPA-H has some interesting initiatives to promote those types of platforms,” he added. 
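The federated setup Diamond describes can be sketched in a few lines. This is a minimal, hypothetical illustration of federated averaging: each site fits a model on data that never leaves the site, and only the learned parameters travel to a central aggregator. The data, site names and simple linear model here are all made up for illustration; real federated-learning platforms use far more sophisticated models and secure aggregation.

```python
# Minimal federated-averaging sketch (hypothetical data and model).
# Each site trains locally; only model parameters, never patient
# records, are sent to the central aggregator.

def train_local(data, w=0.0, b=0.0, lr=0.1, epochs=2000):
    """Fit y = w*x + b on one site's data via gradient descent on MSE."""
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y
            grad_w += 2 * err * x / len(data)
            grad_b += 2 * err / len(data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def federated_average(site_models):
    """Combine per-site parameters into a central model by averaging."""
    n = len(site_models)
    w = sum(m[0] for m in site_models) / n
    b = sum(m[1] for m in site_models) / n
    return w, b

# Two hypothetical sites, each holding its own local data (trend: y ~ 2x + 1).
site_a = [(0, 1.0), (1, 3.1), (2, 4.9)]
site_b = [(1, 2.9), (2, 5.2), (3, 7.0)]

central = federated_average([train_local(site_a), train_local(site_b)])
print(central)  # roughly (2.0, 1.0): the shared trend, learned without pooling records
```

Federated validation works the same way in reverse: the central model is evaluated at each site against local data, and only the performance metrics are reported back.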

At NIH, researchers need data to be useful and adhere to the privacy laws, Kinsinger said. Agency leaders are still formulating rules on using certain data sets to train AI models, he said. 

“We have this data sharing management policy where you’re supposed to share all of your publicly funded research data, but, and so, you know, within limits, there’s obviously privacy limitations for some clinical data,” said Kinsinger. “If you’re training a model on the clinical data, our policy hasn’t dictated what you’re supposed to do with that model.” 

Picking the Right Spots 

Organizations using AI need to keep a human in the loop, the leaders noted, to guarantee AI systems are bringing the best outcomes for patients and research. Kinsinger noted that AI can process large sets of data in ways that aid researchers at NIH. 

“In the era of big data, we’ve got lots of genomics data and spatial transcriptomics data, a lot of data sets with just more data points than the human mind can handle. I think AI can be [a] really useful tool in those settings,” Kinsinger said. “We’re seeing a lot of enthusiasm there.”

VA facilities are spread throughout the U.S., Boese said, so implementation requires both agency leadership decisions and individual facility decisions, which can be a challenge. The agency prioritizes the ethical, responsible and trustworthy applications of AI, Boese said, and has been doing so for years. 

“When it comes to trying to make a big enterprise-wide change in a place that is quite so geographically and mission diverse as the VA, it can obviously be a huge challenge,” said Boese. “We were one of the earlier departments to engage with trustworthy AI…We take very seriously there are strong social elements. We involve our ethics infrastructure consistently, which I, as an ethicist, I’m super happy that we do.”

Ultimately, health organizations, in government and outside of it, need to find the right uses of AI systems, Kinsinger said. The new type of computation can help clinicians, but they will have to ensure diligent and ethical use in the right contexts, he said.

“Everybody wants to apply AI. I’m not sure that it’s always the right solution for every data analysis challenge at this point. I think a simple regression could still solve a lot of problems as well,” said Kinsinger.
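Kinsinger’s point is easy to make concrete: many analysis questions are answered by ordinary least squares, solved in closed form with no machine-learning stack at all. The data below is a made-up example, not from NIH.

```python
# Closed-form ordinary least squares for y = w*x + b (hypothetical data).
def simple_regression(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b

# e.g., a small made-up dose (x) vs. response (y) data set
w, b = simple_regression([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(f"y = {w:.2f}x + {b:.2f}")  # prints "y = 1.94x + 0.15"
```

For a data set like this, the fitted line is the whole answer; a neural network would add cost and opacity without improving the result.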
