Remarks of Assistant Secretary Alan Davidson on Open Artificial Intelligence Models

Open Artificial Intelligence Models
Center for Democracy & Technology
Remarks of Alan Davidson
Assistant Secretary of Commerce for Communications and Information
National Telecommunications and Information Administration
Washington, D.C.

December 13, 2023
As prepared for delivery

Good afternoon, and thank you to Alex Givens and the Center for Democracy and Technology for hosting today’s session.  

We are here today because of the growing power of technology in our daily lives.  

Advances in AI and machine learning systems have captured the public’s imagination, and rightly so. These new systems will impact nearly every corner of our economy. 

Responsible AI innovation can – and will – bring enormous benefits to people. But we will only realize the promise of AI if we also address the serious risks it raises today. 

Those include concerns about safety, security, privacy, discrimination and bias. Risks from disinformation. Impact on the labor market. 

For these reasons and more, there is a strong sense of urgency across the Biden Administration – and among governments around the world – to engage on these issues. 

President Biden’s AI Executive Order, the most significant government action to date on AI, brings the full capabilities of the U.S. government to bear. And the Commerce Department is playing a leading role in the Administration’s AI work.  

The Department is leading efforts on safety, security, innovation, competition, privacy, equity, and IP-related concerns. Our colleagues at NIST are standing up a new AI Safety Institute. The Patent and Trademark Office is exploring watermarking and provenance issues. The International Trade Administration is promoting trade and U.S. companies.  

And at NTIA, we are doing our part. One area we will focus on, at the direction of the EO, is AI openness – in particular, the benefits and risks posed by widely available model weights.  

This includes model weights that have been “open-sourced” or otherwise broadly distributed. Note my use of air quotes around “open source” – we know that what people commonly call “open source” AI is very different from what we mean in the context of open source software. In fact, part of our collective homework is to bring better precision to this conversation.  

AI openness raises important questions around safety challenges, and opportunities for more competition and innovation. 

As with open source software, early conversations about open AI models have engendered fears about safety and misuse.  

Just a few weeks ago I heard a prominent venture capitalist argue, in reference to open AI models, that ‘You don’t open source the Manhattan Project.’ That may be an extreme statement, but there are clear concerns about the power of AI and the dangers of making the most advanced frontier models widely available without any restrictions or safeguards against misuse. 

On the other hand, we've heard from people concerned about the impact on competition and innovation if only a small set of players control access to the most important models.  

History has also shown that closed systems can undermine experimentation and prevent technological advances. And we know that technology’s benefits can be distributed more widely when access to that technology is democratized.  

This does not need to be an either-or debate. We can seek policies that both promote safety and allow for broad access.  

To do so, we need your help. 

That is why I am pleased to join all of you today as NTIA kicks off public engagement in our review of AI openness. This review will lead to policy recommendations that seek to maximize the value of open source AI tools while minimizing the harms. 

We are eager to hear from you: Where should we focus our work? What are the important questions to answer?  

We are keenly interested in the pragmatism of any approach. We want this work to be grounded in the technical, economic, and legal realities of how AI is developing and being deployed. 

Today’s consultation is just the start. We anticipate more opportunities to gather public input coming shortly in the new year.  

We face a daunting challenge, but I know with your help, we will meet this moment. 

The potential for technology to promote human progress has never been greater. Making the right decisions now will lead us to a world where technology works in service of a more open, free, equitable and just society.  

Together, I know we can build that better version of our future. Thank you.