
Calibrate Accountability Inputs to Risk Levels

March 27, 2024
Earned Trust through AI System Assurance

Commenters generally support calibrating AI accountability inputs to scale with the risk of the AI system or application.37 As many acknowledge, existing work from NIST, the Organization for Economic Cooperation and Development (OECD), the Global Partnership on Artificial Intelligence, and the European Union (e.g., the EU AI Act), among others, has established robust frameworks to map, measure, and manage risks. In the interest of risk-based accountability, one commenter, for example, suggested a “baseline plus” approach: all models and applications are subject to a baseline standard of assurance practices across sectors, and higher-risk models or applications carry an additional set of obligations.38

NTIA concludes that a tiered approach to AI accountability has the benefit of scoping expectations and obligations proportionately to AI system risks and capabilities. As discussed below, many commenters argued that safety-impacting or rights-impacting AI systems deserve extra scrutiny because of the serious harms they risk causing. Another kind of tiering ties AI accountability expectations to how capable a model or system is. Commenters suggested that highly capable models and systems may deserve extra scrutiny, which could include requirements for pre-release certification and capability disclosures to government.39 This kind of tiering is evident, for example, in the AI EO requirement that developers of certain “dual-use foundation models” more capable than any yet released make disclosures to the federal government.40


37 See, e.g., University of Illinois Urbana-Champaign School of Information Sciences Researchers (UIUC) Comment at 8 (“…tiered systems match an AI system’s risk with an appropriate level of oversight… The result is a more tailored and proportionate regulation of fast evolving AI systems…”); Przemyslaw Grabowicz et al., Comment at 11 (“AI systems represent too many applications for a single set of rules. Just as different FDA restrictions are applied to different medications, AI controls should be tailored to the application.”); Institute of Electrical and Electronics Engineers (IEEE) Comment at 13 (“When the integrity level increases, so too does the intensity and rigor of the required verification and validation tasks”); AI & Equality Comment at 3 (“The transparency and accountability requirements should also be tailored and calibrated according to the amount of risk presented by the specific sector or domain in which the AI system is being deployed...”); Palantir Comment at 7 (appropriate accountability mechanisms depend on the AI use context and risk profile); Securities Industry and Financial Markets Association (SIFMA) Comment at 4 (focus auditing on high-risk AI applications such as “hiring, lending, insurance underwriting, and education admissions”); Bipartisan Policy Center (BPC) Comment at 2 (urging risk-based accountability systems); NCTA Comment at 6; Consumer Technology Association Comment at 2; Centre for Information Policy Leadership Comment at 4; Workday Comment at 1; Adobe Comment at 7; BSA | The Software Alliance Comment at 2; Intel Comment at 5-7; Developers Alliance Comment at 6; Salesforce Comment at 4; Guardian Assembly Comment at 12-14; American Property Casualty Insurance Association Comment at 2; Samuel Hammond, Foundation for American Innovation Comment at 2; Anan Abrar Comment at 1.

38 Guardian Assembly Comment at 12.

39 See, e.g., Center for AI Safety Comment Appendix A – A Regulatory Framework for Advanced Artificial Intelligence (proposing a regulatory regime for frontier models that would require pre-release certification around information security, safety culture, and technical safety); OpenAI Comment at 6 (considering a requirement of pre-deployment risk assessments, security and deployment safeguards); Microsoft Comment at 7 (regulatory framework based on the AI tech stack, including licensing requirements for foundation models and infrastructure providers); Anthropic Comment at 12 (confidential sharing of large training runs with regulators); Credo AI Comment at 9 (special foundation model and large language model disclosures to government about models and processes, including AI safety and governance); Audit AI Comment at 8 (“High-risk AI systems should be released with quality assurance certifications based on passing and maintaining ongoing compliance with AI accountability regulations.”); Holistic AI Comment at 9 (high-risk systems should be released with certifications).

40 AI EO at Sec. 4.2(i).