
Standardize Evaluations As Appropriate

March 27, 2024
Earned Trust through AI System Assurance

Commenters noted the importance of using standards to develop common criteria for evaluations.54 Standards are important because they make evaluations replicable and comparable. Commenters acknowledged, as does the NIST AI RMF, that there may be tradeoffs between accountability inputs, such as disclosure, and other values, such as protecting privacy, intellectual property, and security.55 In other words, AI actors may have to prioritize risks and values. The record surfaced interest in having additional governmental guidance for AI actors on how to address such tradeoffs.56

In NTIA’s view, more research is necessary to create common (or at least commonly legible, comparable, and replicable) evaluation methods. Therefore, standards development is critical, as recognized in the AI EO, which tasks “the Secretary of Commerce, in coordination with the Secretary of State,” with leading “a coordinated effort with key international partners and with standards development organizations, to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.”57



54 See, e.g., Center for Audit Quality (CAQ) Comment at 7 (“[W]e believe that it is important to similarly establish AI safety standards which could serve as criteria for the subject matter of an AI assurance engagement to be evaluated against”); Salesforce Comment at 11 (“If definitions and methods were standardized, audits would be more consistent and lead to more confidence. This will also be necessary if third party certifications are included in future regulations.”).

55 See NIST AI RMF at Sections 2 and 3. See also Google DeepMind Comment at 8-9 (suggesting there are tradeoffs between data minimization and the accuracy of systems; transparency and model accuracy; and transparency and security); OpenMined Comment at 4 (noting that if “an auditor obtains access to underlying data, privacy, security, and IP risks are significant and legitimate.”); Mastercard Comment at 3 (“There can be tension between accountability goals that lead to technical tradeoffs, and we believe organizations are best suited to evaluate these tradeoffs and document related decisions. […] Transparency is another example of an AI accountability goal that can be in tension with countervailing interests. Several federal legislative and regulatory proposals contemplate or include transparency provisions. While transparency is a cornerstone in trustworthy AI, it must be balanced with the need to ensure intellectual property and proprietary information remain protected, and that malicious actors are not encouraged to bypass AI-powered protections such as fraud prevention.”); Kathy Yang Comment at 3 (“There is a tradeoff between more complete data and other priorities like privacy and security”).

56 See, e.g., Credo AI at 3 (recommending development of a “taxonomy of AI risk to inform the areas that are most important for an AI developer or deployer to consider when assessing its AI system’s potential impact”); AI Policy and Governance Working Group Comment at 3-4 (calling for AI evaluations that consider risks drawn from a regularly evaluated and updated risk taxonomy developed by the “research and policy communities”).

57 AI EO at Sec. 11(b). See also id. at Sec. 4.1(a)(i) (tasking the Secretary of Commerce with establishing “guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure and trustworthy AI systems.”); NIST, U.S. Leadership in AI: A Plan for Federal Engagement in Developing AI Technical Standards and Related Tools.