
Ensure Accountability Across the AI Lifecycle and Value Chain

March 27, 2024
Earned Trust through AI System Assurance

Various actors in the AI value chain exercise different degrees and kinds of control throughout the lifecycle of an AI system. Upstream developers design and create AI models and/or systems. Downstream deployers then deploy those models and/or systems (or use the models as part of other systems) in particular contexts. Downstream deployers may also fine-tune a model, thereby acting as downstream developers of the deployed systems. Both upstream developers and downstream deployers of AI systems should be accountable; existing laws and regulations may already specify accountability mechanisms for different actors.

Commenters laid out good reasons to vest accountability with AI system developers, who make critical upstream decisions about AI models and other components. These actors have privileged knowledge to inform important disclosures and documentation and may be best positioned to manage certain risks. Some models and systems should not be deployed until they have been independently evaluated.41 At the same time, there are also good reasons to vest accountability with AI system deployers, because the context and mode of deployment are important to actual AI system impacts.42 Not all risks can be identified pre-deployment, and downstream developers/deployers may fine-tune AI systems in ways that either ameliorate or exacerbate dangers present in artifacts from upstream developers. Actors may also deploy and/or use AI systems in unintended ways.

Recognizing the fluidity of AI system knowledge and control, many commenters argued that accountability should run with the AI system through its entire lifecycle43 and across the AI value chain, lodging responsibility with AI system actors in accordance with their roles.44 This value chain of course includes actors who are neither developers nor deployers, such as users, vendors, buyers, evaluators, testers, managers, and fiduciaries.

Just as AI actors share responsibility for the trustworthiness of AI systems, we think it clear from the comments that they must share responsibility for providing accountability inputs. As part of the chain of accountability, there should be information sharing from upstream developers to downstream deployers about intended uses, and from downstream deployers back to upstream developers about refinements and actual impacts, so that systems can be adjusted appropriately. Mechanisms discussed below, such as adverse AI incident reports, AI system audits, public disclosures, and other forms of information flow and evaluation, could all help with allocations of responsibility for trustworthy AI – allocations that will require attention and elaboration elsewhere.



41 See Office of Management and Budget, Proposed Memorandum for the Heads of Executive Departments and Agencies, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (Nov. 2023), at 16 [hereinafter “OMB Draft Memo”]. Some commenters focused particularly on pre-release evaluation for emergent risks. See, e.g., ARC Comment at 8 (“It is insufficient to test whether an AI system is capable of dangerous behavior under the terms of its intended deployment. Thorough dangerous capabilities evaluation must include full red-teaming, with access to fine-tuning and other generally available specialized tools.”); SaferAI Comment at 2 (some of the measures that AI labs should conduct to help mitigate AI risks are: “pre-deployment risk assessments; dangerous capabilities evaluations; third-party model audits; safety restrictions on model usage; red-teaming”).

42 See, e.g., Center for Data Innovation Comment at 7 (“[R]egulators should focus their oversight on operators, the parties responsible for deploying algorithms, rather than developers, because operators make the most important decisions about how their algorithms impact society.”).

43 NIST AI RMF, Second Draft, at 6 Figure 2 (Aug. 18, 2022) (describing the AI lifecycle in seven stages: planning and design; collection and processing of data; building and training the model; verifying and validating the model; deployment; operation and monitoring; and use of the model/impact from the model).

44 See, e.g., ARC Comment at 8 (suggesting that because an AI system’s risk profile changes with actual deployments “[i]t is insufficient to test whether an AI system is capable of dangerous behavior under the terms of its intended deployment.”); Boston University and University of Chicago Researchers Comment at 1 (“mechanisms for AI monitoring and accountability must be implemented throughout the lifecycle of important AI systems…”); see also Center for Democracy & Technology (CDT) Comment at 26 (“Pre-deployment audits and assessments are not sufficient because they may not fully capture a model or system’s behavior after it is deployed and used in particular contexts.”); see also, e.g., Murat Kantarcioglu Comment (individual comment suggesting that “AI accountability mechanisms should cover the entire lifecycle of any given AI system”).