Making Sense of AI Solutions for Health Care
As agencies integrate more artificial intelligence into their operations, managing directives, policies and best practices becomes critical to the process.

Artificial intelligence is poised to impact many aspects of society, and policymakers are keen to make sure federal agencies are integrating the technology safely and ethically. Technologists need to make sense of these new directives and compliance requirements to ethically harness the power of AI.
In health care, the possibilities for applying AI are significant. AI can assist in areas like breast cancer detection and other health services. The Substance Abuse and Mental Health Services Administration (SAMHSA) is looking to implement a chatbot that acts as a virtual assistant, helping patients find answers around mental health and substance use crises.
Health leaders are also using AI to tackle fraud, waste and abuse.
“We use artificial intelligence and machine learning to find potential fraud that would not be apparent to the human eye. We try to use the latest technology to make potential fraud easier to detect more quickly,” a spokesperson from the Centers for Medicare & Medicaid Services told GovCIO Media & Research.
The Department of Veterans Affairs sees AI as the “next frontier” of health care.
“There are new possibilities [AI] is going to open for health IT, where AI may have its own ideas that come up, and we’ll engage the people we’re talking with,” VA AI Chief Gil Alterovitz told GovCIO Media & Research in an interview last year.
Putting AI Policies Into Practice
There are various frameworks and directives guiding the use of AI, including the White House’s AI Bill of Rights, the National Institute of Standards and Technology (NIST)’s voluntary AI Risk Management Framework and the Defense Department’s ethical AI principles.
Leidos prioritizes these directives in the solutions it builds. To ensure compliance with prevailing regulations, laws and policies, the company employs an internal framework that keeps its teams informed and up to date. The framework also guides the development of AI solutions that uphold both safety and efficacy standards.
“There is an internal framework that we use to make sure that it captures the current regulations and laws and policies. And it’s a framework that gets enhanced as things change, but it’s a framework that we all adhere to when we are developing our AI and machine-learning solutions,” Narasa Susarla, solution architect in Leidos’ Health Group, told GovCIO Media & Research.
To deliver the right solution, Susarla described combining technology delivered through a framework called FAIRS with the Leidos 4A methodology, which progresses from analysis to assistance and augmentation and, finally, to automation.
“This is basically a methodology for us to gradually introduce and increase the level of AI capability while building human trust and reducing error,” Ning Yu, chief NLP research scientist and technical fellow in Leidos’ AI/ML Accelerator, told GovCIO Media & Research. “We don’t want to jump into automation directly, we want to be able to really understand the potential data bias, human bias as well as gradually build trust when working with humans.”
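To make that staged progression concrete, below is a minimal Python sketch of how such a rollout might be gated, advancing one level at a time only when measured trust is high and errors are low. The stage names follow the 4A methodology described above, but the gating metrics, thresholds and function names are illustrative assumptions, not Leidos' actual criteria.

```python
from enum import IntEnum


class Stage(IntEnum):
    """Illustrative ordering of a 4A-style progression."""
    ANALYSIS = 1      # AI surfaces insights; humans make every decision
    ASSISTANCE = 2    # AI suggests actions; humans accept or reject them
    AUGMENTATION = 3  # AI drafts outputs; humans review and correct
    AUTOMATION = 4    # AI acts on its own for well-understood, low-risk cases


def next_stage(current: Stage, human_trust: float, error_rate: float,
               trust_floor: float = 0.9, error_ceiling: float = 0.02) -> Stage:
    """Advance one stage only when trust is high and errors are low.

    The thresholds and metrics are placeholders; they simply encode the
    'gradually increase capability while building trust and reducing error'
    idea from the quote above.
    """
    if current is Stage.AUTOMATION:
        return current
    if human_trust >= trust_floor and error_rate <= error_ceiling:
        return Stage(current + 1)
    return current


print(next_stage(Stage.ASSISTANCE, human_trust=0.95, error_rate=0.01))
# Stage.AUGMENTATION
```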
Guided by this framework, Leidos has developed and deployed large language model (LLM) applications in the health domain and will continue launching initiatives to adapt responsibly as newer models emerge. One such initiative is developing specific ethical assessments to help identify and mitigate risks throughout the lifecycle of generalized solutions, Yu said.
“When it comes to integrating generative AI, first and foremost we want to make sure we are still developing secure and responsible solutions with these new tools,” Yu said.
One generative AI project examines how the technology can improve the patient experience by helping complete claims forms more quickly.
“Generative AI can be used to assist medical providers filling out medical forms by pre-filling the forms based on hundreds or thousands of pages of [a] patient’s medical record,” Yu added. “It can also help the providers diagnose, take notes, assist patient-doctor communication and also train staff.”
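As an illustration only, the sketch below shows one way such pre-filling could work: a long patient record is passed to a language model with a prompt asking it to propose values for each claim field, and every suggestion is flagged as unreviewed until a provider confirms it. The `complete` callback, the field names and the prompt are hypothetical placeholders, not the actual Leidos pipeline.

```python
import json
from typing import Callable

# Hypothetical claim-form fields; a real form would be far richer.
CLAIM_FIELDS = ["patient_name", "date_of_service", "primary_diagnosis", "procedure_codes"]


def prefill_claim(record_text: str, complete: Callable[[str], str]) -> dict:
    """Ask a language model to propose values for each claim field.

    `complete` stands in for whatever LLM client an implementation uses;
    it takes a prompt string and returns the model's text response.
    Every value returned is a suggestion for a provider to verify.
    """
    prompt = (
        "From the patient record below, fill in these claim fields as JSON: "
        f"{CLAIM_FIELDS}. Use null for anything the record does not state.\n\n"
        f"Record:\n{record_text}"
    )
    suggestions = json.loads(complete(prompt))
    # Keep only the expected fields and mark everything as unreviewed.
    return {field: {"value": suggestions.get(field), "reviewed": False}
            for field in CLAIM_FIELDS}
```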
Partnerships, along with human involvement, are essential to putting those advantages into action and creating well-rounded solutions.
“Most of it is really co-joined development activities,” Susarla said. “We’re trying to look at all kinds of innovative solutions and some of these partners are helping us figure those out. Additionally, we are also focusing on enhancing image-processing capabilities and exploring various audio aspects related to communication.”
“We add the human into the workflow loop, especially in health because we are looking to develop AI that can support clinicians and lead to better care outcomes, improve productivity and efficiency of the care delivery,” Susarla added.
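A minimal sketch of what adding a human into the workflow loop can look like in code: AI-generated drafts are never committed directly, and a clinician's sign-off decides whether a draft enters the workflow. The types and callback here are illustrative assumptions, not any specific product's API.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Draft:
    """An AI-generated artifact (e.g., a clinical note) awaiting review."""
    text: str
    model_confidence: float  # 0.0-1.0, as reported by the model pipeline


def route_for_review(draft: Draft,
                     clinician_approves: Callable[[Draft], bool]) -> Optional[str]:
    """Never commit AI output directly; a clinician confirms or rejects it.

    `clinician_approves` is a placeholder callback representing the human
    step (a review UI, a work queue, etc.).
    """
    if clinician_approves(draft):
        return draft.text   # accepted into the workflow
    return None             # rejected; the AI suggestion is discarded
```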