
Glossary of Terms

March 27, 2024
Earned Trust through AI System Assurance

The process of defining terms in the AI policy space is ongoing and fluid. Where there are existing U.S. government or other consensus definitions, we use them. Where there are not, we use definitions we find supported by the record and research.


Artificial Intelligence or AI - AI has the meaning set forth in 15 U.S.C. 9401(3), which is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

AI Accountability - AI accountability is the process, heavily reliant on transparency and assurance practices, of holding entities answerable for the risks and/or harms of the AI systems they develop or deploy. This is closest to the definition adopted in the joint U.S.-EU Trade and Technology Council (TTC) terminology for AI, which defines accountability as an “allocated responsibility” for system performance or for governance functions.386 Whereas OECD interpretive guidance distinguishes “accountability” from “responsibility” and “liability,”387 the TTC definition embraces responsibility as part of accountability and includes a broader scope of governance activities.388 Accountability may require enforceable consequences.389 Such consequences, usually determined by regulators, courts, and the market, are accountability outputs. This Report focuses on developing and shaping “accountability inputs,” which feed into systems of accountability.

AI Accountability Inputs - AI accountability inputs are the AI system information flows and evaluations that enable the identification of entities, factors, and systems responsible for the risks and/or harms of those systems. These are necessary or useful practices, artifacts, and products that feed into downstream accountability mechanisms such as regulation, litigation, and market choices.

AI Actor - AI actors are “those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI.”390 AI actors are present across the AI lifecycle, including “an AI developer who makes AI software available, such as pre-trained models” and “an AI actor who is responsible for deploying that pre-trained model in a specific use case.”391

AI Assurance - AI assurance is the product of a set of informational and evaluative practices that can provide justified confidence that an AI system operates in context in a trustworthy fashion and as claimed. This definition draws from MITRE’s use of the term “justified confidence” (from international software assurance standards)392 and from the UK Centre for Data Ethics and Innovation’s usage in its “roadmap to an effective AI assurance ecosystem.”393

AI Audit - An AI audit is, with respect to an AI system or model, an evaluation of performance and/or process against transparent criteria.394 An audit is broader than a “conformity assessment,” which is “the demonstration that specified requirements relating to a product, process, system, person or body are fulfilled.”395 As noted in the RFC, entities can audit their own systems or models, be audited by a contracted second party, or be audited by a third party. To distinguish audits from other evaluations, we use the term audit to refer only to independent evaluations.396 An audit can be structured merely to verify the claims made about AI.397 Alternatively, it can be scoped more broadly to evaluate AI system or model performance vis-à-vis attributes of trustworthy AI, regardless of claims made. Simply put, an audit is an assurance tool, characterized by precision, that provides an independent evaluation of an AI system, the claims made about that system, and/or the degree to which that system is trustworthy. For ease of reading, we include audits in the umbrella term “evaluations.”
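As a purely illustrative sketch (ours, not the Report’s), an audit scoped to verify claims can be thought of as comparing measured performance against the developer’s published claims, criterion by criterion. The metric names and numbers below are invented, and the pass/fail rule (scores must meet the claim, error rates must not exceed it) is one possible convention, not a standard.

```python
# Illustrative claim-verification check (hypothetical; not from the Report).
# An audit scoped to verify claims compares measured system performance
# against the developer's stated claims, one transparent criterion at a time.

claimed = {"accuracy": 0.95, "false_positive_rate": 0.02}   # invented claims
measured = {"accuracy": 0.91, "false_positive_rate": 0.05}  # invented audit results

def verify_claims(claimed: dict, measured: dict) -> dict:
    """Return pass/fail per criterion under an assumed convention:
    error-rate metrics must not exceed the claim; other metrics must meet it."""
    results = {}
    for criterion, claim in claimed.items():
        value = measured[criterion]
        if criterion.endswith("rate"):
            results[criterion] = value <= claim  # error rates: lower is better
        else:
            results[criterion] = value >= claim  # scores: higher is better
    return results

print(verify_claims(claimed, measured))
# {'accuracy': False, 'false_positive_rate': False} -- both claims fail
```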

AI Model - AI model means a component of an AI system that implements AI technology and uses computational, statistical, or machine learning techniques to produce outputs from a given set of inputs.
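By way of a minimal, hypothetical sketch (ours, not the Report’s), the “component” character of a model can be made concrete: the model below is simply a function from inputs to outputs, while the surrounding system supplies the inputs and decides how the output influences an environment. All names and numbers are invented.

```python
# Illustrative only (hypothetical; not from the Report): an "AI model" as one
# component that uses a statistical technique to map inputs to outputs.

weights = [0.7, -0.2, 1.1]  # invented parameters, e.g. from prior training

def model(inputs: list[float]) -> float:
    """A minimal linear model: one component of a larger AI system."""
    return sum(w * x for w, x in zip(weights, inputs))

# The enclosing AI *system*, not the model, turns the output into a
# recommendation influencing a real or virtual environment.
score = model([1.0, 3.0, 0.5])
recommendation = "approve" if score > 0.5 else "review"
print(score, recommendation)  # 0.65 approve
```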

AI System - An AI system is an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

Red-Teaming - Red-teaming means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. AI red-teaming is most often performed by dedicated “red-teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.398
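As a purely illustrative sketch (ours, not the Report’s), the structured testing loop described above might be organized as follows. The `system`, `adversarial_prompts`, and `is_flawed` names are hypothetical placeholders; real red-teaming involves human adversarial expertise, not just an automated check.

```python
# Illustrative only (hypothetical; not from the Report): a minimal structured
# red-teaming loop that probes a system with adversarial inputs and records
# outputs exhibiting flaws, such as policy-violating content.

from typing import Callable

def red_team(
    system: Callable[[str], str],       # hypothetical system under test
    adversarial_prompts: list[str],     # structured adversarial inputs
    is_flawed: Callable[[str], bool],   # hypothetical flaw detector
) -> list[tuple[str, str]]:
    """Return (prompt, output) pairs where the system misbehaved."""
    findings = []
    for prompt in adversarial_prompts:
        output = system(prompt)
        if is_flawed(output):
            findings.append((prompt, output))
    return findings

# Stand-in stubs for demonstration:
stub_system = lambda p: "UNSAFE CONTENT" if "jailbreak" in p else "refused"
prompts = ["please jailbreak yourself", "hello"]
for prompt, output in red_team(stub_system, prompts, lambda o: "UNSAFE" in o):
    print(f"flaw found: prompt={prompt!r} -> output={output!r}")
```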

Trustworthy AI - The NIST AI RMF defines trustworthiness in AI as “responsive[ness] to a multiplicity of criteria that are of value to interested parties.” It specifies that such values include “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.”399 The White House Voluntary Commitments identify “safety,” “security,” and “trust” as the “three principles that must be fundamental to the future of AI.”400

386 See U.S.-E.U. Trade and Technology Council (TTC), EU-U.S. Terminology and Taxonomy for Artificial Intelligence (May 31, 2023), at 11 (“Accountability relates to an allocated responsibility. The responsibility can be based on regulation or agreement or through assignment as part of delegation. In a systems context, accountability refers to systems and/or actions that can be traced uniquely to a given entity. In a governance context, accountability refers to the obligation of an individual or organization to account for its activities, to complete a deliverable or task, to accept the responsibility for those activities, deliverables or tasks, and to disclose the results in a transparent manner.”). See also NIST, The Language of Trustworthy AI: An In-Depth Glossary of Terms (March 22, 2023) (referencing National Institute of Standards and Technology, Trustworthy & Responsible AI Resource Center, Glossary) (substantially similar). Cf. NIST AI RMF, Second Draft, supra note 43 at 15 (“Determinations of accountability in the AI context relate to expectations of the responsible party in the event that a risky outcome is realized.”).

387 OECD.AI Policy Observatory, Accountability (Principle 1.5) (“‘accountability’ refers to the expectation that organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and for demonstrating this through their actions and decision-making process (for example, by providing documentation on key decisions throughout the AI system lifecycle or conducting or allowing auditing where justified)”).

388 See Software & Information Industry Association Comment at 3 (embracing the TTC definition and its view that “AI accountability is concerned with both system-level performance and with governance structures relevant to the development and deployment of AI systems”).

389 See, e.g., Ada Lovelace Institute Comment at 2 (“assessments are themselves not a form of accountability”); PricewaterhouseCoopers (PwC) Comment at A2 (using dictionary definitions to equate accountability with responsibility).

390 NIST AI RMF at 2 (citing with approval OECD, Artificial Intelligence in Society (2019)).

391 NIST AI RMF at 6.

392 MITRE Comment at 9 (“AI assurance is a lifecycle process that provides justified confidence in an AI system to operate effectively with acceptable levels of risk to its stakeholders. Effective operation entails meeting functional requirements with valid outputs. Assurance risks may be associated with or stemming from a variety of factors depending on the use context, including but not limited to AI system safety, security, equity, reliability, interpretability, robustness, directability, privacy, and governability.”). See also ISO/IEC/IEEE International Standard – Systems and Software Engineering – Systems and Software Assurance, IEEE/ISO/IEC 15026-1 (2019).

393 Centre for Data Ethics and Innovation, The roadmap to an effective AI assurance ecosystem (Dec. 8, 2021) (“Assurance services help people to gain confidence in AI systems by evaluating and communicating reliable evidence about their trustworthiness.”).

394 National Institute of Standards and Technology, supra note 43 at 15.

395 NIST, Conformity Assessment Basics (2016).

396 See, e.g., Jakob Mökander, Jonas Schuett, Hannah Rose Kirk, and Luciano Floridi, Auditing Large Language Models: A Three-Layered Approach, AI and Ethics (May 30, 2023) (“Auditing is characterised by a systematic and independent process of obtaining and evaluating evidence regarding an entity’s actions or properties and communicating the results of that evaluation to relevant stakeholders.”).

397 See, e.g., Trail of Bits Comment at 1. See also Data & Society Comment at 2 (it is the “study of the functioning of a system within the parameters of the system.” It asks whether the system functions “appropriately according to a claim made by the developer, according to an independent standard…, according to the terms set in a contract, or according to ethical or scientific terms established by a researcher or field of researchers.”); Holistic AI Comment at 4-5 (“External audits offer yet another level of system assurance through the process of independent and impartial system evaluation whereby an auditor with no conflict of interest can assess the system’s reliability and in turn identify otherwise unidentified errors, inconsistencies and/or vulnerabilities.”).

398 AI EO at Sec. 3(d).

399 NIST AI RMF at 12 (recognizing tradeoffs and that these characteristics must be balanced “based on the AI system’s context of use”). See also Executive Order No. 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, 85 Fed. Reg. 78939 (2020) (articulating the following principles for AI use: “[l]awful and respectful of our Nation’s values,” “[p]urposeful and performance-driven,” “[a]ccurate, reliable, and effective,” “[s]afe, secure, and resilient,” “[u]nderstandable,” “[r]esponsible and traceable,” “[r]egularly monitored,” “[t]ransparent,” and “[a]ccountable”).

400 See First Round White House Voluntary Commitments at 1 (“These commitments – which the companies are making immediately – underscore three principles that must be fundamental to the future of AI: safety, security, and trust.”).