
Regulatory Enforcement

March 27, 2024
Earned Trust through AI System Assurance

Regulators are increasingly facing complex technical systems with varying degrees of autonomy whose “conduct” may be difficult to parse and predict. AI systems will often be integrated into a wide range of other technologies across critical infrastructure sectors, some of which (e.g., transportation safety) have well-developed regulatory regimes. Experts observe that regulatory tools and capacities have not kept pace with AI developments.309 Commenters discussed how regulation does or should intersect with AI systems, including the need for clarity and new regulatory tools or enforcement bodies.310 Opacity can make it difficult for regulators to enforce legal requirements for trustworthy AI, and several federal regulatory authorities have recently pointed to the “black box” nature of some automated systems as a problem in determining whether automated systems are fair and legally compliant.311

Again, without commenting on the precise structure of enforcement, we posit that regulators of all types will have an easier job enforcing laws and regulations if there is greater information flow around, and better evaluations of, AI systems. As these questions are considered in many arenas and regulators more forcefully tackle AI harms, the accountability inputs addressed in this Report can help build the records needed for sound administration and law enforcement. The same is true of the recommendations to build the accountability ecosystem, including by funding capacity within the federal government.

Accountability inputs help shine a light on practices that should be subject to regulatory oversight and equip regulators with the information and knowledge they need to apply their respective bodies of law.312 As with clarity on liability, clarity about regulatory enforcement can benefit parties along the value chain, including by helping everyone understand what is required for compliance and the broader achievement of trustworthy AI.

309 See, e.g., Alex Engler, “A Comprehensive and Distributed Approach to AI Regulation,” Brookings Institution (Aug. 31, 2023) (“Many agencies lack critical capacity regarding algorithmic oversight, including: the authority to require entities to retain data, code, models, and technical documentation; the authority to subpoena those same materials; the technical ability to audit [the systems]; and the legal authority to set rules for their use.”).

310 See supra Sec. 2.4. See also Anthropic Comment at 19 (“Clarity on antitrust regulation would help determine whether and how AI labs can coordinate on safety standards. Sensible coordination around consumer-friendly standards seems possible, but regulators’ guidance on the issue would be welcome.”) (internal emphasis omitted); Shaping the Future of Work Comment at 7 (“These issues and impacts [related to generative AI technology] do not require that our regulatory framework start from scratch, but instead require appropriate application of existing frameworks for robots, automation, internet, and other digital technologies.”).

311 See Joint Statement on Enforcement Efforts, supra note 11, at 3 (“Many automated systems are ‘black boxes’ whose internal workings are not clear to most people and, in some cases, even the developer of the tool. This lack of transparency often makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair.”).

312 See, e.g., Public Knowledge Comment at 3 (“Transparency could involve [. . .] enabling regulators to thoroughly examine models, even when trade secrets or intellectual property laws protect them.”); AI Impacts Comment at 2 (noting that “robust methods” for “evaluating AI systems and assessing risk . . . can help regulators verify safety and help AI developers build trust with other stakeholders.”); Global Partners Digital Comment at 17 (“[A] central element of any accountability regime should be addressing the information asymmetries in order to enhance the external stakeholder assessment and the authority oversight of the quality of the evaluation performed of the AI system.”). See also Mozilla OAT Comment at 7 (“Much of the regulatory requirements for internal auditors or professional audit actors is an enforcement of some degree of visibility or oversight on their internal assessment processes and outcomes, which currently remain relatively obscure to external stakeholders, including regulators and the public.”).