
Learning From Other Models

March 27, 2024
Earned Trust through AI System Assurance

The RFC asked what accountability policies adopted in other domains might be useful precedents for AI accountability policy. Commenters addressed this question in detail.

  • Financial Assurance

    The assurance system for financial accounting is an obvious referent for AI assurance. Some existing financial sector laws may be directly applicable to AI; even where they are not, they may still furnish useful analogies. As one commenter stated, “the established financial reporting ecosystem provides a valuable skeleton and helpful scaffolding for the key components needed to establish an AI accountability framework.”

 

  • Human Rights and ESG Assessments

    The private sector continues to refine ESG evaluation frameworks and to seek their standardization. The lesson of the ESG assurance experience may be that multi-factored evaluations conducted under a variety of standards do not immediately yield comparable or actionable results.

  • Food and Drug Regulation

    Another potentially useful accountability model suggested by commenters can be found in health-related regulatory frameworks such as the FDA’s; the FDA already regulates some AI systems as medical devices.

  • Cybersecurity and Privacy Accountability Mechanisms

    We recommend that future federal AI policymaking not rely entirely on voluntary best practices. Rather, some AI accountability measures should be required, calibrated to risk.