
Information Flow

March 27, 2024
Earned Trust through AI System Assurance

One of the challenges with assuring AI trustworthiness is that AI systems are complex and often opaque. As a result, information asymmetries and gaps open along the value chain from developers to deployers and ultimately to end users and others affected by AI operations. Downstream deployers of AI systems may lack information they need to use the systems appropriately in context and to communicate system features to others. For example, an employer relying on an AI system to assist in hiring decisions might need to know if the population data used to train the system are sufficiently aligned with its own applicant pool and how the system’s underlying assumptions have been designed to guard against bias.77
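
As an illustration of the kind of alignment check such a deployer might want, the sketch below compares the distribution of a single numeric applicant feature in a vendor’s training population against the deployer’s own applicant pool using the Population Stability Index (PSI). The feature, the synthetic data, and the 0.25 rule of thumb are illustrative assumptions, not requirements drawn from this Report.

```python
import numpy as np

def population_stability_index(train_values, applicant_values, bins=10):
    """PSI between a training-population sample and an applicant-pool sample."""
    edges = np.histogram_bin_edges(train_values, bins=bins)
    # Clip applicants into the training range so out-of-range values land in the outer bins.
    applicant_values = np.clip(applicant_values, edges[0], edges[-1])
    train_counts, _ = np.histogram(train_values, bins=edges)
    appl_counts, _ = np.histogram(applicant_values, bins=edges)
    # Convert counts to proportions, flooring at a small value to avoid log(0).
    train_pct = np.clip(train_counts / train_counts.sum(), 1e-6, None)
    appl_pct = np.clip(appl_counts / appl_counts.sum(), 1e-6, None)
    return float(np.sum((appl_pct - train_pct) * np.log(appl_pct / train_pct)))

# Synthetic data standing in for one numeric feature (e.g., years of experience).
rng = np.random.default_rng(0)
vendor_training_pool = rng.normal(loc=8.0, scale=3.0, size=5000)
employer_applicant_pool = rng.normal(loc=5.0, scale=3.0, size=1200)

psi = population_stability_index(vendor_training_pool, employer_applicant_pool)
# A common rule of thumb treats PSI above roughly 0.25 as a material shift worth reviewing.
print(f"PSI = {psi:.3f}")
```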

The information asymmetry runs the other way as well. AI system developers may lack information about deployment contexts and therefore make inaccurate claims about their products or fail to communicate limitations. For example, to mitigate downstream harms, the developer of an AI image generator would need information about later adaptations and adverse incidents to address the risks posed by deepfakes at scale.78

Individuals affected by, or consuming, AI outputs may not even be aware that an AI system is at work, much less how it works. This lack of information may hinder people from asserting rights under existing law, exercising their own critical judgment, or pushing for other forms of redress. Lack of information about AI system vulnerabilities and potential harms can also expose investors to risk.79

Transparency, explainability, and interpretability can all be helpful to assess the trustworthiness of a model and the appropriateness of a given use of that model. These features of an AI system involve communicating about what the system did (transparency), how the system made its decisions (explainability), and how one can make sense of system outputs (interpretability).80 All three are part of information flow, as is information regarding the organizational and governance processes involved in designing, developing, deploying, and using models.
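
As a purely illustrative sketch of how these three kinds of information might travel with a single system output, the example below bundles them into one record under assumed field names and values; it is not a schema prescribed by the NIST AI RMF or by this Report.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedDecision:
    # What the system did (transparency), how it decided (explainability),
    # and how to make sense of the output in context (interpretability).
    output: str
    transparency: dict = field(default_factory=dict)
    explainability: dict = field(default_factory=dict)
    interpretability: str = ""

decision = ExplainedDecision(
    output="application advanced to interview",
    transparency={
        "model": "resume-screener",          # hypothetical system name
        "version": "2024.03",
        "training_data": "applications received 2019-2023",
    },
    explainability={
        "top_factors": ["relevant experience", "certification match"],
    },
    interpretability=(
        "A match score above 0.7 means the resume aligns with the posted "
        "requirements; it is not a prediction of on-the-job performance."
    ),
)
print(decision)
```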

It is clear that information flow is a critical input to AI accountability.81 Provisions to ensure appropriate information flow, including through accessible and plain language formats, are featured in the White House Voluntary Commitments,82 in the Blueprint for AIBoR,83 and in the AI EO.84 Similarly, the OECD Principles for Responsible Stewardship of Trustworthy AI state that AI actors “should commit to transparency and responsible disclosure regarding AI systems.”85

Specifically, these actors

[S]hould provide meaningful information, appropriate to the context and consistent with the state of art:

  1. to foster a general understanding of AI systems,
  2. to make stakeholders aware of their interactions with AI systems including in the workplace,
  3. to enable those affected by an AI system to understand the outcome, and,
  4. to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.86

Commenters broadly agreed that more information about AI systems is needed, although some asserted that there may be tradeoffs between transparency and other values.87 Commenter opinion varied on the details and standardization of documentation and disclosure. For example, some wanted the adoption of common standards.88 Others emphasized the need for audience-specific disclosures89 or domain-specific reporting.90 While acknowledging that distinct regimes are probably appropriate for different AI use cases, this Report addresses generic features of information creation, collection, and distribution desirable for a wide swath of AI systems (with additional recommendations for high-risk models and systems).

Information flow as an input to AI accountability comes in two basic forms: push and pull. AI actors can push disclosures out to stakeholders and stakeholders can pull information from AI systems, via system access subject to valid intellectual property, privacy, and security protections. This Report recommends a mix of push and pull information flow, some of which should be required and some voluntarily assumed. Because AI systems are continuously updated and refined, information pushed out (e.g., reports, model cards) should also be continuously updated and refined. Similarly, access to AI system components may need to be ongoing.
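
The sketch below illustrates one possible “push” artifact along these lines: a versioned model card that is re-issued with each system update. The schema, field names, and figures are illustrative assumptions rather than a format endorsed by this Report.

```python
import json
from datetime import date

# A versioned "model card" pushed to deployers and other stakeholders and
# re-issued whenever the model is updated. All field names and values are
# illustrative assumptions.
model_card = {
    "model_name": "example-screening-model",
    "version": "1.4.0",
    "last_updated": date.today().isoformat(),
    "intended_use": "Assist recruiters in ranking applications; not for automated rejection.",
    "training_data_summary": "Applications received 2019-2023; demographic composition documented separately.",
    "known_limitations": [
        "Performance not evaluated for roles outside the original job family."
    ],
    "evaluation": {"holdout_accuracy": 0.81, "adverse_impact_ratio": 0.92},
    "contact": "responsible-ai@example.com",
}

# Published as a machine- and human-readable artifact alongside the model.
print(json.dumps(model_card, indent=2))
```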

 


77 The Institute for Workplace Equality Comment, Artificial Intelligence Technical Advisory Committee Report on EEO and DEI&A Considerations in the Use of Artificial Intelligence in Employment Decision Making, at 46 (“The fundamental issue of model drift is that some underlying assumption about the data used to train an algorithm has changed. Applicants differ from incumbents; applicant characteristics shift over time; or the job requirements themselves change, leading to different response patterns, demographic compositions, or performance standards... the applicant population often differs from the original incumbent population due to selection effects, and the algorithm should be adjusted when enough applicant data are collected and the applicants are hired so that the criterion data are available”); HRPA Comment at 2 (“[A] failure to guard against harmful bias in talent identification algorithms could undermine efforts to create a skilled and diverse workforce.”); Workday Comment at 2 (“When an AI tool is used for a decision about an individual’s access to an essential opportunity, it has the potential to pose risks of harm to that individual. AI frameworks should therefore focus on these kinds of consequential decision tools, which may be used to hire, promote, or terminate an individual’s employment.”).

78 The information gap between developers and deployers may be particularly large in the case of foundation models. See Information Technology Industry Council (ITI), Understanding Foundation Models & The AI Value Chain: ITI’s Comprehensive Policy Guide (Aug. 2023), at 7 (“A deployer (sometimes also called a provider) is the entity that is deciding the means by and purpose for which the foundation model is ultimately being used and puts the broader AI system into operation. Deployers often have a direct relationship with the consumer. While developers are best positioned to assess, to the best of their ability, and document the capabilities and limitations of a model, deployers, when equipped with necessary information from developers, are best positioned to document and assess risks associated with a specific use case.”).

79 See, e.g., Open MIC Comment at 5 (“Without information about how companies are developing and using AI and the extent to which it is working properly, investors are essentially left to trust the marketing claims of the companies they invest in”) and 8 (“To engender investor confidence in AI, government intervention is needed to… increase transparency on how AI models are being trained and deployed.”).

80 NIST AI RMF at 16-17.

81 See Guardian Assembly Comment at 4 (“Transparency in AI is about ensuring that stakeholders have access to relevant information about an AI system. [. . .] Transparency helps to facilitate accountability by enabling stakeholders to understand and assess an AI system’s behavior.”); Anthropic Comment at 17 (“Accountability requires a commitment to transparency and a willingness to share sensitive details with trusted, technically-proficient partners”); Jeffrey M. Hirsch, “Future Work,” 2020 U. Ill. L. Rev. 889, 943 (2020) (“The lack of transparency in most AI analyses is a serious cause for concern. Because AI learns through complicated, iterative analyses of data, the bases for a program’s decision-making is often unclear. This lack of transparency, often referred to as the “black box” problem, could act as a mask for discrimination or other results that society deems unacceptable.”).

82 See First Round White House Voluntary Commitments at 4 (committing to publishing “reports for all new significant model public releases within scope [which reports] should include the safety evaluations conducted (including in areas such as dangerous capabilities, to the extent that these are responsible to publicly disclose), significant limitations in performance that have implications for the domains of appropriate use, discussion of the model’s effects on societal risks such as fairness and bias, and the results of adversarial testing conducted to evaluate the model’s fitness for deployment.”).

83 Blueprint for AIBoR at 6 (framing transparency in terms of “notice and explanation”: “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.”).

84 AI EO passim.

85 OECD, Recommendation of the Council on Artificial Intelligence, Section 1.3 (2019); See also OECD AI Policy Observatory, OECD AI Principles Overview.

86 Id.

87 See, e.g., Google DeepMind Comment at 8-9; Georgetown University Center for Security and Emerging Technology Comment at 5 (noting tradeoffs between privacy and transparency and between fairness and accuracy); DLA Piper Comment at 12 (“More transparency about system logic/data may improve contestability but infringe on privacy and intellectual property rights…. The more transparent a model is[,] the more susceptible it is to bad actor manipulation.”); International Center for Law & Economics (ICLE) Comment at 10 (“Surely there will be many cases where firms use their own internal data, or data not subject to property-rights protection at all, but where exposing those sources reveals sensitive internal information, like know-how or other trade secrets. In those cases, a transparency obligation could have a chilling effect.”).

88 See, e.g., AI Policy and Governance Working Group Comment at 3.

89 See, e.g., CDT Comment at 41-42 (citing to CDT Civil Rights Standards for 21st Century Employment Selection Procedures); Databricks Comment at 2, 5 (“[T]he deployer is the party exposing people to the application and creating the potential risk” and thus “any obligation to inform people about how such tools are operating should rest with the…deployer.”).

90 See, e.g., American Federation of Teachers (AFT) Comment at 2 (“Regulations around classroom AI should…mandate transparency in the AI system’s decision-making processes. They must allow teachers, students and parents to review and understand how AI decisions are affecting teaching and learning.”).