Developing Accountability Inputs: A Deeper Dive

March 27, 2024
Earned Trust through AI System Assurance

Our analysis now turns to the first two links in the AI accountability chain – what we are calling accountability inputs. These are, roughly:

  1. the creation, collection, and distribution of information about AI systems and system outputs, and
  2. AI system evaluation.

The RFC and commenters identified proposed or adopted laws that address AI accountability inputs, both in the United States and beyond.75 Congress continues to consider relevant legislative initiatives, and the states are actively pursuing their own legislative agendas.76 Many of these policy initiatives focus on information flow and evaluations, as well as associated governance processes.

The sections below address these topics and come to some preliminary conclusions that feed into the Recommendations section.

  • Information Flow

    Information flow as an input to AI accountability comes in two basic forms: push and pull. AI actors can push disclosures out to stakeholders, and stakeholders can pull information from AI systems via system access, subject to valid intellectual property, privacy, and security protections.

  • AI System Evaluations

    Transparency and disclosures regarding AI systems are primarily valuable insofar as they feed into accountability. One essential tool for converting information into accountability is critical evaluation of the AI system.

  • Ecosystem Requirements

    Research drawing on auditing experiences across sectors, including pharmaceuticals and aviation, “strongly supports training, standardization, and accreditation for third-party AI auditors.”

75 AI Accountability RFC at 22435. See, e.g., EPIC Comment at 5-8; Salesforce Comment at 8-11. See also Anna Lenhart, Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools, The George Washington University Institute for Data, Democracy, and Politics (June 14, 2023); European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119 (May 4, 2016).

76 See supra notes 13 and 14.