
Why AI Needs Open Source, Competition Amid Regulation

Officials raise concerns about the risks of early regulation and express the need to foster fairness in markets for innovative technologies.

Former Secretary of State Condoleezza Rice speaks at Stanford University about AI regulations May 30, 2024. Photo Credit: Stanford Institute for Economic Policy Research

Industry, academic and government officials underscored the critical need for open-source software and fair market access for startups, while warning that excessive regulation risks stifling artificial intelligence innovation and favoring large firms.

“To see this ecosystem evolve in the most pro-competitive way, we need to enable open source and figure out a way for small firms to safely comply with the same protocols as large firms,” said Consumer Financial Protection Bureau Deputy Chief Technologist for Law and Strategy Atur Desai. “I’m really enthusiastic to see the efforts by the U.S. government to bring in more researchers with different backgrounds to identify new remedies.”

Desai and other researchers and leaders spoke at a May 30 conference co-hosted by the Justice Department and the Stanford Institute for Economic Policy Research about anti-competitive practices, safe regulation and fostering innovation amid rapidly evolving AI priorities.

Andrew Ng, CEO of Landing AI and founder of DeepLearning.AI, emphasized that protecting open source is an essential part of the AI supply chain because it is the only way for innovative startups to enter the market.

“Some companies that would rather not compete with open source are lobbying government under the guise of ‘safety’ to pass regulations to stifle it,” Ng said. “If these regulations on AI are passed, such regulations will make it so much harder for almost any company and country to access cutting-edge AI technology.” 

Stanford’s Center for Research on Foundation Models is working to make foundation AI models more accessible and transparent. Foundation models are trained on large amounts of data and can be adapted to many applications.

According to the center’s director, Percy Liang, there has been a pivotal shift in open source. He also raised concerns about the direction AI development is moving, as the level of transparency has diminished.

“In the 2010s, there was a culture of researchers sharing data, code and models openly, and various companies could leverage this, so a lot of the innovations came from openness,” Liang said. “But what has happened in the last few years … capabilities go up, openness goes down.”

As AI continues to challenge societal and legal frameworks, the center is developing holistic evaluations of language models to assess their capabilities and risks. The center focuses on measuring accuracy, bias and fairness in AI to inform policymakers.

United States Patent and Trademark Office (USPTO) officials are also trying to address concerns that smaller firms would lose out to big tech in the race for AI development, which is why the agency is advocating for policies that encourage AI innovation. 

“We need what’s best for the country, thinking about how everyone can win and how we can create an ecosystem that sets policy without disrupting the literary, music or production industry,” said USPTO Director Kathi Vidal. “It’s not just figuring out the legal response, it’s figuring out what we need to do to incentivize [and] to advance the country in terms of innovation policy.” 

Large firms have the potential to influence regulatory processes to protect their own power, which is why it is critical for policymakers to decide how to ensure that smaller firms have access to markets, said NIST Chief AI Advisor Elham Tabassi, who also serves as associate director for emerging technologies in NIST’s Information Technology Laboratory. She added that businesses must also remain cognizant of both the benefits and the emerging consequences of AI models.

“AI is a tremendous opportunity to improve our lives, but there’s a lot we don’t know,” Tabassi said. “That’s why we must engage in efforts to advance our scientific understanding of AI models, address AI’s impact on people and society and enhance research on identifying and mitigating risks.” 

With a significant increase in smaller startups developing AI, and an estimated $300 billion invested in over 25,000 AI startups in the past year, it is imperative for government to thoroughly examine markets and be mindful of anti-competitive behavior throughout the supply chain, noted Sen. Amy Klobuchar of Minnesota.

“We must ensure the markets that are vital to the safe and responsible development of AI allow for the entry of new innovative startups so that they can compete,” said Klobuchar. “We should be fighting to ensure our markets remain fair, open and contestable, so by working together we can ensure that competition continues to drive innovation in America’s economy for generations to come.” 

Government should take care not to stifle artificial intelligence innovation with excessive regulation, Klobuchar said. 

“It concerns me to think about the rush to regulate [artificial intelligence] because we might want to live in this ecosystem for a while,” said former Secretary of State Condoleezza Rice. “Risks will emerge that do need regulation, but if you’re trying to prefigure a regulation for something that is evolving this quickly, you’re almost always going to make mistakes.” 
