
FACT SHEET: NTIA AI Report Calls for Monitoring, But Not Mandating Restrictions of Open AI Models

July 30, 2024

Today, NTIA is issuing policy recommendations in support of making the key components of powerful AI models widely available. These “open-weight” models allow developers to build upon and adapt previous work, broadening AI tools’ availability to small companies, researchers, nonprofits, and individuals.  

The Report on Dual-Use Foundation Models with Widely Available Model Weights recommends that the U.S. government actively monitor for potential risks, but refrain from restricting the availability of open model weights for currently available systems.

President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence directed NTIA to review the risks and benefits of large AI models with widely available weights and develop policy recommendations to maximize those benefits while mitigating the risks.  

The recommendations in NTIA’s report will promote innovation and access to this technology while positioning the U.S. government to quickly respond to risks that may arise from future models.  

To monitor for emerging risks, the report calls for the U.S. government to develop an ongoing program to collect evidence of risks and benefits, evaluate that evidence, and act on those evaluations, including possible restrictions on model weight availability, if warranted.  

Specifically, the report outlines a number of activities NTIA recommends the U.S. government undertake:

Collecting Evidence

  • Performing research into the safety of powerful and consequential AI models, as well as their downstream uses;
  • Supporting external research into the present and future capabilities of dual-use foundation models and into risk mitigations; and
  • Maintaining a set of risk-specific indicators.

Evaluating Evidence

  • Developing and maintaining thresholds of the risk-specific indicators to signal a potential change in policy;
  • Continuing to reassess and tailor benchmarks and definitions for monitoring and action; and  
  • Maintaining professional capabilities in technical, legal, social science, and policy domains to support the evaluation of evidence.

Acting on Evidence  

If a future evaluation of evidence leads to the determination that further action is needed, the government could:

  • Place restrictions on access to models; or
  • Engage in other risk mitigation measures as deemed appropriate (e.g., restricting access to material components in the case of bio-risk concerns, working with international partners to set norms, investing in model research).