AI Chiefs Need to Experiment to Implement AI Successfully

DOD AI leaders are tackling responsible AI practices and leadership as agencies appoint AI chiefs under a recent executive order.

Taking risks is critical to success for the chief artificial intelligence officer, according to government and commercial AI leaders who met at the inaugural Chief AI Officer (CAIO) Summit in Boston last week.

Eileen Vidrine, who was appointed the Air Force’s chief data and AI officer this year, said at the event that when it comes to successfully integrating AI into an agency, “it really comes from being innovation-driven. You have to start small and scale. You have to be willing to experiment.”

Although artificial intelligence development itself is nothing new, the role of the CAIO is, particularly in the federal government. These leaders will be instrumental in shaping agency strategy and application of AI in the wake of the October White House executive order that directs federal agencies to designate a CAIO within 60 days, among other actions.

Vidrine said the Air Force’s AI efforts so far have included creating innovation hubs, accelerators with universities like MIT and a proving ground at Eglin Air Force Base in Florida.

“It’s not just about sending research money to an institution. We actually have a dozen airmen and guardians working side by side with these researchers, but then you have to scale it out,” Vidrine said.

Initiatives like turning major challenges into games have brought together experts and non-experts to tackle manual processes and find fast, efficient ways to automate them, she added. These challenges have drawn allied partners, small businesses and academic teams into spaces they would otherwise not occupy. These partnerships help DOD operate more efficiently.

David Barnes, who serves as the Army’s chief AI ethics officer at the Army AI Integration Center (AI2C), said at the event that the organization is focused on understanding AI’s quantitative and qualitative risks, educating and developing a workforce that understands the potential benefits and risks of AI, producing partnerships between the government, academia and industry, as well as coordinating and scaling the Army’s use of AI.

Notably, Barnes said DOD does not want to “establish another set of compliance gates.”

“We don’t want to find another set of eyes necessarily looking over our shoulders,” when it comes to developing AI strategy, he added. But the challenge becomes “changing attitudes about that ‘it’s going to slow me down.’ It’s also taking those principles, taking this risk management framework and translating that down to the ML ops team so that the individual knows what their role is, ensuring that the organization is developing trustworthy AI and aligned with the business strategy.”

Vidrine added that tapping the hidden talents of her workforce has helped the department adopt responsible AI practices and guidelines.

“You have to build your champions. When you let your champion shine, the velocity that you can get moving forward is pretty phenomenal,” Vidrine said. “I think everybody in this room has great capability in your workforce that you just don’t know about yet because they’re doing one thing. So a little [natural language processing] on some HR records can actually create amazing insight.”
