VA’s New AI Strategy Targets Ethics, Trust
The agency adopted a new artificial intelligence policy focused on building out existing capacities while fostering veteran trust.
The Department of Veterans Affairs is the latest agency to release and implement a strategy for the ethical use of artificial intelligence to enhance veteran care.
Adopted in September 2021, the plan centers on four objectives:
- Use existing AI capacities to better deliver health care and benefits to veterans
- Develop these existing AI capacities
- Increase veteran and stakeholder trust in AI
- Build upon partnerships with industry and other government agencies
The plan is the agency’s first formally published AI strategy since the founding of the National Artificial Intelligence Institute (NAII) in June 2019. The agency established the institute with support from the February 2019 American AI Initiative executive order, which significantly increased funding for federal artificial intelligence research while establishing new AI institutions across government.
The ethics strategy aims to ensure transparency and trust around AI capacities.
“VA understands the significance of creating a balance between innovation, safety and trust,” said NAII Director Gil Alterovitz, according to an agency press release. “To this end, VA leadership, practitioners and relevant end users will be trained to ensure all AI-related activities and processes are ethical, legal and meet or exceed standards … VA’s new roadmap will help realize AI’s full potential, building trust in future technology and creating more effective, efficient systems for patients.”
One of the core priorities of VA’s artificial intelligence program since its founding has been to draw expertise from private industry and transform VA into an AI learning center, and the new strategy outlines plans to expand upon those partnerships.
“The VA is already collaborating with other federal agencies on research and data sharing and overseeing AI technology sprints that bring industry partners to the table with specified objectives so that their participation creates a win-win opportunity,” according to the strategy. “We will seek to build on these efforts and identify new approaches to collaboration that will accelerate the rate of knowledge discovery.”
While VA has rapidly developed its AI capabilities since 2019, agency leadership has paid particular attention to ensuring these capacities are developed in ways that preserve veteran trust.
This falls under what Alterovitz calls “trustworthy AI,” which abides by NIST principles while extending them to address the specifics of VA’s AI program.
The Government Accountability Office (GAO) released a similar strategy. Its “AI Accountability Framework” is designed to prevent AI models and their applications from being designed in an unduly flawed or ethically compromised manner. GAO Chief Data Scientist Taka Ariga noted this is a vital concern in large part because flaws unintentionally built into models can become baked into the foundation of AI applications, allowing those issues to persist and even compound over time.
The Department of Labor, for example, has encountered systemic challenges with data sets compiled decades ago.
“There are data sets we use today that were developed in the 60s that had women tagged as homemakers when in fact they were teachers, or scientists, or lawyers,” Kathy McNeill, who leads emerging technology strategy at the agency, said at a virtual event earlier this year.
Similar to VA, other health-focused agencies have paid special attention to privacy concerns. Health care data used within AI models need safeguards to ensure personally identifiable information is protected or obscured, a process that the National Institutes of Health (NIH) has codified under the oversight of its Advisory Committee to the Director (ACD) Working Group on Artificial Intelligence.