HHS Leaders Call for Stronger Data Practices to Bolster AI, Cybersecurity
Officials emphasize data lifecycle awareness, stakeholder engagement and safe test environments to advance federal AI efforts.
Department of Health and Human Services leaders urged health agencies to understand their data, from why it is collected to how it is used, to improve cybersecurity, decision-making and AI outputs as the technology becomes increasingly integrated into federal systems. The officials spoke Tuesday at GovCIO Media & Research’s Health IT Summit in Rockville, Maryland.
Eileen Oni, chief data officer within the Office of the Director at the National Institutes of Health, said her agency’s data is publicly available to a large number of stakeholders. Oni emphasized that agencies must know who can access different datasets to identify and mitigate cyberattacks.
“Being cognizant of [the data management lifecycle] is probably one of the most important things,” Oni said. “It’s not just siloed to cybersecurity professionals within your organization.”
Oni added that agencies must also engage with stakeholders to help them understand the metrics of data collection. Continuous feedback loops from partners help agencies maintain good data hygiene practices.
HHS OIG Assistant Inspector General for Cybersecurity and IT Audits Tamara Lilly said agencies must identify potential bias in datasets as soon as possible to maintain good data hygiene. She said understanding why and how data is collected helps identify unintentional bias, especially as agencies train AI and machine learning models on agency data.
“We tend to trust the data we have. We assume that it’s accurate — not that it can’t be accurate — but it depends on how it was collected,” said Lilly. “Through AI, we’re training the models with all of this data, not realizing that perhaps how we collected it … is not the intended use for the data in that particular AI model.”
As agencies leverage innovative tech like AI, CDW Healthcare Strategist Bryce Thompson said sandboxes are critical to effective data and AI governance. Sandboxes let teams test new tech, fail and learn in a safe environment without hurting patients or the agency. Thompson noted more sandboxes in federal health agencies could allow for rapid prototyping and increased agility.
“Because if it’s not safe to fail, you’re not going to try the right things,” said Thompson. “Or you’re going to optimize for the past … when you want to develop the future.”
Lilly emphasized that failures — in a safe and educational environment — are critical to the workforce embracing data management and adopting AI tools.
“It’s in the failures that we learn and grow and move towards success. The investment in training our people … and having that ability to meet the need of where we are today is another step we’re taking to an advanced way of processing data,” said Lilly.