Proof of Claims and Trustworthiness

March 27, 2024
Earned Trust through AI System Assurance

AI actors put AI systems out into the world and should be responsible for proving that those systems perform as claimed and operate in a trustworthy manner. Accountable Tech, AI Now, and EPIC’s Zero Trust AI Governance Framework puts it this way: “Rather than relying on the good will of companies, tasking under-resourced enforcement agencies or afflicted users with proving and preventing harm, or relying on post-market auditing, companies should have to prove their AI offerings are not harmful.”[219] This responsibility for assuring the validity of system claims and trustworthiness should be ongoing throughout the lifecycle of the AI system.[220]

An independent certification process for some AI systems could be one way for entities to implement proof of claims and trustworthiness. According to one definition, certification is the “process of an independent body stating that a system has successfully met some pre-established criteria.”[221] An independent evaluation would thus be a prerequisite for certification. A voluntary certification regime for AI systems, if sufficiently rigorous and independent, could help stakeholders navigate the AI market and promote competition around trustworthy AI.[222]

Mandatory pre-release certification, in the form of licensing, is another route. Given the prospect of AI becoming ubiquitously embedded in products and processes, it would be impractical to mandate certification for all AI systems.[223] Some commentators strongly supported governmental licensing of high-risk foundation models, or at least deep review of such models before deployment (including a requirement to show that certain “safety” conditions are met), usually as a way of addressing alleged catastrophic risks.[224]

However, there is also a concern that mandatory pre-release certification or licensing could hurt competition by advantaging incumbents.[225] The benefits of requiring ex ante proof of trustworthiness must therefore be balanced against the goal of facilitating easy entry into the AI market.

[219] Accountable Tech, AI Now, and EPIC, supra note 50, at 5 (emphasis omitted). See also Association for Intelligent Information Management Comment at 7 (“If an entity uses Generative AI and other high-risk products or services and cannot identify or explain the reasons behind the decision the AI system has made, that liability is and should be on the entity”). See generally Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst, “The Fallacy of AI Functionality,” Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), June 2022, 959–972 (discussing burden-of-proof issues, particularly with respect to the basic functionality of an AI system).

[220] See CDT Comment at 26 (“Pre-deployment audits and assessments are not sufficient because they may not fully capture a model or system’s behavior after it is deployed and used in particular contexts.”).

[221] See Data & Society Comment at 2.

[222] See, e.g., Friedman et al., supra note 73, at 707 (“Certification could impose substantive ethical standards and create an incentive for vendors to compete along ethical lines.”).

[223] See Trail of Bits Comment at 5 (stating that a generalized licensing scheme targeting AI systems would impede software use because AI systems are defined so broadly that they are not distinct from other software systems).

[224] See, e.g., OpenAI Comment at 6 (“We support the development of registration and licensing requirements for future generations of the most highly capable foundation models ... AI developers could be required to receive a license to create highly capable foundation models which are likely to prove more capable than models previously shown to be safe.”); Governing AI, supra note 47, at 20–21 (“[W]e envision licensing requirements such as advance notification of large training runs, comprehensive risk assessments focused on identifying dangerous or breakthrough capabilities, extensive prerelease testing by internal and external experts, and multiple checkpoints along the way”); Center for AI Safety Comment Appendix A (proposing a regulatory regime for “powerful” AI systems that would require pre-release certification around information security, safety culture, and technical safety); AI Policy and Governance Working Group Comment at 9 (recommending that “responsible disclosure become a prerequisite in government regulations for certifying trustworthy AI systems, aligning with practices exemplified by Singapore’s AI Verify.”); SaferAI Comment at 3 (“Because [general-purpose AI systems] are 1) extremely costly to train and 2) can be dangerous during training, we believe that most of the risk assessment should happen before starting the training.”) (emphasis omitted); Campaign for AI Safety Comment at 2 (supporting “pre-deployment safety evaluations.”).

[225] See Engine Comment at 4 (a mandatory certification licensing system “is likely to create a ‘regulatory moat’ bolstering the position and power of large companies that are already established in the AI ecosystem, while making it hard for startups to contest their market share.”); Grabowicz et al. Comment at 1 (“Overregulation (e.g., mandatory licensing to develop AI technologies) would frustrate the development of trustworthy AI, since it would primarily inhibit smaller independent AI system manufacturers from participating in AI development.”); Generally Intelligent Comment at 4 (noting that requiring “at this stage” licensing of AI systems “will make it much harder for new entrants and smaller companies to develop AI systems, while its intended goals can be achieved with other policy approaches”). See also ICLE Comment at 18 (“The notion of licensing implies that companies would need to obtain permission prior to commercializing a particular piece of code. This could introduce undesirable latency into the process of bringing AI technologies to market (or, indeed, even of correcting errors in already-deployed products).”).