
Glossary

Earned Trust through AI System Assurance

This Report uses the following definitions, many of which derive from the definitions in Executive Order 14110:

Artificial intelligence (AI) means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to:

  • perceive real and virtual environments;
  • abstract such perceptions into models through analysis in an automated manner; and
  • use model inference to formulate options for information or action. 15 U.S.C. § 9401(3).5

An AI model is a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.6
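To make this definition concrete, the sketch below shows a minimal "model" in this sense: a component that applies a simple statistical technique (here, a linear predictor) to produce outputs from a given set of inputs. It is purely illustrative; the class, names, and values are hypothetical and correspond to no particular system.

```python
# Illustrative sketch only: a minimal "AI model" in the sense defined above.
# It applies a statistical technique (a linear predictor) to produce an
# output from a given set of inputs. All names and values are hypothetical.

from typing import Sequence


class LinearModel:
    """A trivial model: output is a weighted sum of the inputs plus a bias."""

    def __init__(self, weights: Sequence[float], bias: float) -> None:
        # Numerical parameters; see the definition of "model weights" below.
        self.weights = list(weights)
        self.bias = bias

    def predict(self, inputs: Sequence[float]) -> float:
        # Model inference: formulate an output (here, a score) from the inputs.
        return sum(w * x for w, x in zip(self.weights, inputs)) + self.bias


model = LinearModel(weights=[0.4, -0.2, 0.1], bias=0.05)
print(model.predict([1.0, 2.0, 3.0]))  # a prediction that could inform a decision
```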

A dual-use foundation model means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

  1. substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
  2. enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
  3. permitting the evasion of human control or oversight through means of deception or obfuscation.7

Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.8

A dual-use foundation model with widely available model weights (or, in this Report, an open foundation model) is a dual-use foundation model whose model weights have been released openly to the public, either by allowing users to download them from the Internet or through other mechanisms.
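As one illustration of such a release mechanism, the sketch below uses the third-party huggingface_hub client library to fetch a weight file that a developer has published on a public model hub. The repository and file names are examples only, not a reference to any model discussed in this Report.

```python
# Illustrative sketch only: downloading openly released model weights from a
# public model hub via the third-party huggingface_hub library. The
# repository and file names below are examples, not endorsements.

from huggingface_hub import hf_hub_download

# Fetch a publicly released weight file over the Internet.
local_path = hf_hub_download(repo_id="gpt2", filename="model.safetensors")
print(f"Weights downloaded to: {local_path}")
```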

The term foundation model is used synonymously with the term dual-use foundation model in this Report. However, the term has been used more broadly in the AI community, notably without the “tens of billions of parameters” requirement.9 Further, not all foundation models necessarily display “dual-use” capabilities.

Model weights are “numerical parameters within an AI model that help determine the model’s output in response to inputs.”10 There are multiple types of weights, including pre-trained model weights, weights from intermediate checkpoints, and weights of fine-tuned models.
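The distinction among these types of weights can be illustrated with a small sketch. The values and file names are hypothetical; real weight files hold billions of parameters in binary formats rather than JSON.

```python
# Illustrative sketch only: "model weights" as numerical parameters, and the
# three kinds noted above. The values and file names are hypothetical; real
# models store billions of parameters in binary formats rather than JSON.

import json

pretrained = {"layer1": [0.12, -0.57, 0.33], "bias": [0.01]}  # after pre-training
checkpoint = {"layer1": [0.10, -0.50, 0.30], "bias": [0.00]}  # mid-training snapshot
finetuned = {"layer1": [0.15, -0.60, 0.41], "bias": [0.02]}   # adapted to a narrower task

# "Releasing model weights openly" amounts to publishing files like this one:
with open("weights.json", "w") as f:
    json.dump(pretrained, f)

# Anyone holding the file can reload the parameters and run, modify, or
# fine-tune the model, with or without the developer's original safeguards.
with open("weights.json") as f:
    weights = json.load(f)
print(weights["layer1"])
```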


5 Exec. Order No. 14,110 (2023).

6 Exec. Order No. 14,110 (2023).

7 This provision of Executive Order 14110 refers to the as-yet speculative risk that AI systems will evade human control, for instance through deception, obfuscation, or self-replication. Harms from loss of control would likely require AI systems that have capabilities beyond those known in current systems and access to a broader range of permissions and resources than current AI systems are given. However, open models introduce unique considerations in these risk calculations, as actors can remove superficial safeguards that prevent model misuse. They can also customize, experiment with, and deploy models with more permissions and in different contexts than the developer originally intended. Currently, AI agents that can interact with the world lack the capabilities to independently perform complex or open-ended tasks, which limits their potential to create loss-of-control harms. Hague, D. (2024). Multimodality, Tool Use, and Autonomous Agents: Large Language Models Explained, Part 3. Center for Security and Emerging Technology. (“While LLM agents have been successful in playing Minecraft and interacting in virtual worlds, they have largely not been reliable enough to deploy in real-life use cases. Today, research often focuses on getting autonomous LLMs to perform specific, defined tasks like booking flights.”). Developing capable AI agents remains an active research goal in the AI community. Xi, Z., et al. (2023). The Rise and Potential of Large Language Model Based Agents: A Survey. arXiv preprint. However, given the nascent stage of these efforts, this Report cannot yet meaningfully discuss these risks in greater depth.

8 Exec. Order No. 14,110 (2023).

9 Bommasani, R., Hudson, D. A., Adeli, E., et al. (2021). On the Opportunities and Risks of Foundation Models. arXiv preprint.

10 Exec. Order No. 14,110 (2023).