
Increase Federal Government Role

March 27, 2024
Earned Trust through AI System Assurance

A strong sentiment running through both institutional and individual comments was that there should be a significant federal government role in funding, incentivizing, or requiring accountability measures. Commenters recommended additional federal funding and support for more AI safety research, standards development, the release of standardized datasets for testing, and professional development for auditors.68 They recommended that the government consider providing a regulatory sandbox in which entities could, under certain conditions, experiment with responsible AI and compliance efforts free from regulatory risk.69 They urged federal procurement reform, as the National Artificial Intelligence Advisory Committee recommended,70 to drive trustworthy AI through rigorous documentation, disclosure, and evaluation requirements.71 As noted above, they argued for mandatory audits and other mandatory AI accountability measures, including a federal role in certifying auditors and setting audit benchmarks, as is customary in other regulatory domains.72

Federal regulatory involvement with accountability measures in other fields, while not directly applicable to AI, may be instructive. In this vein, commenters pointed to precedents such as the Food and Drug Administration (FDA) premarket review for medical devices, the National Highway Traffic Safety Administration (NHTSA) auto safety standards, FDA nutrition labels, the Environmental Protection Agency (EPA) ENERGY STAR® labels, the Federal Aviation Administration (FAA) accident examination and safety processes, and the Securities and Exchange Commission (SEC) audit requirements.73

An area of overwhelming agreement in the commentary was the importance of data protection and privacy to AI accountability, with commenters expressing the view that a federal privacy law is necessary, or at least important, to trustworthy and accountable AI.74

As elaborated in the Recommendations Section, we support accelerated and coordinated government action to determine the best federal regulatory and non-regulatory approaches to the documentation, disclosure, access, and evaluation functions of the AI accountability chain.


68 See, e.g., Protofect Comment at 9-10 (“Governments could fund the development of AI auditing standards and infrastructures. …[and] can create incentive programs for businesses to incorporate ethical and accountable practices in their AI systems. This could include tax breaks, grants, or recognition programs for businesses that demonstrate leadership in AI accountability”); Guardian Assembly Comment at 10-11 (focusing on incentives such as grants, public recognition, and staff training incentives); U.S. Chamber of Commerce Comment at 11 (recommending that NSF fund AI-related STEM education to increase public trust); Center for Security and Emerging Technology (CSET) Comment at 13 (“Alongside standards for the audit process itself, standards should include provisions on data access, confidentiality and ‘revolving door’ policies that prevent auditors from working in the industry for a number of years”); BigBear.ai Comment at 24 (“Government bodies can establish regulatory frameworks that promote transparency and require the provision of data necessary for accountability assessments”).

69 Future of Privacy Forum (FPF) Comment at 5 (“NTIA should support the creation of an AI Assessment & Accountability Sandbox to test, assess, and develop guidance for organizations seeking to apply existing rules to novel AI technologies and comply with emerging AI regulations.”); Credo AI Comment at 5 (“For commercial systems, Credo AI recommends creating an ‘assurance sandbox’ where commercial entities can use an iterative process for guideline development with limited indemnity to start. This ‘assurance sandbox’ would trial transparency and mitigation requirements (using voluntary guidelines) with non-financial consequences for violations - essentially a ‘safe harbor’ for sincere mitigation efforts.”); Centre for Information Policy Leadership Comment at 31-32 (“Regulatory sandboxes are important mechanisms for regulatory exploration and experimentation as they provide a test bed for applying laws to innovative products and services in the AI field.”); Stanford Institute for Human-Centered AI (Dr. Jennifer King) Comment at 3 (proposing “regulatory sandboxes for piloting many of these proposed mechanisms to ensure they provide measurable and meaningful results.”); Engine Advocacy Comment at 10-11 (“Regulatory sandboxes allow businesses and regulators to cooperate to create a safe testing ground for products or services. In simple terms, sandboxes allow real-life environment testing of innovative technologies, products, or services, which may not be fully compliant with the existing legal and regulatory framework.”); American Legislative Exchange Council (ALEC) Comment at 10 (“This sandbox framework, already adopted and successful in states like Arizona and Utah, offers a way for regulators to support domestic AI innovation by permitting experimentation of new technologies in controlled environments that would otherwise violate existing regulations.”); Chegg Comment at 4; Business Roundtable Comment at 12. See also Government of the United Kingdom, Department for Science, Innovation & Technology, Office for Artificial Intelligence, A Pro-Innovation Approach to AI Regulation (Command Paper Number 815), at Sec. 3.3.4 (August 3, 2023) (“Regulatory sandboxes and testbeds will play an important role in our proposed regulatory regime.”).

70 See, e.g., National Artificial Intelligence Advisory Committee, Report of the National Artificial Intelligence Advisory Committee (NAIAC), Year 1, at 16-17 (May 2023) (“OMB could guide agencies on the procurement process to ensure that contracting companies have adopted the AI RMF or a similar framework to govern their AI”); Governing AI, supra note 47, at 11.

71 See, e.g., AI Policy and Governance Working Group Comment at 6-7 (“A practical mechanism to consider broadly across the whole of the Federal government would be the uptake and application of a Department of Defense procurement vehicle for an independent evaluator to be procured simultaneously with a contract for an AI tool or system, thus building in a layer of accountability with the necessary infrastructure and funding”); AFL-CIO Comment at 7 (arguing that procurement policies should ensure that AI systems do not harm workers by maintaining good data governance practices and giving workers input on impact assessments); Copyright Clearance Center (CCC) Comment at 4 (“Public procurement should require that companies building and training AI systems maintain adequate records” including management of metadata); Governing AI, supra note 47, at 11 (supporting a requirement that “vendors of critical AI systems to the U.S. Government to self-attest that they are implementing NIST’s AI Risk Management Framework... the U.S. Government could insert requirements related to the AI Risk Management Framework into the Federal procurement process for AI systems”); CDT Comment at 36-37.

72 See, e.g., AI Policy and Governance Working Group Comment at 6 (advocating government “credentialling auditors”); Centre for Information Policy Leadership Comment at 26 (advocating requiring auditor certification for audits of high-risk applications); PwC Comment at A12 (opining that “[t]he lack of AI laws and regulations requiring adherence to specified standards, reporting, and audits is a further impediment to creation of an environment of true AI accountability” and contrasting this situation with federal involvement in financial auditing).

73 See, e.g., Barry Friedman et al., Policing Police Tech: A Soft Law Solution, 37 Berkeley Tech. L.J. 701, 742 (2022) (submitted as part of the Policing Project at New York University School of Law’s comment) (“And like an FDA drug label, tech certification labels could come with warnings about the potential risks of any non-certified uses”), and at 706 (“[A] certification scheme ([like] a ‘Rated R’ movie, ‘Fair Trade’ coffee, or an ‘Energy Star’ appliance) could perform a review of a technology’s efficacy and an ethical evaluation of its impact on civil rights, civil liberties, and racial justice”); Grabowicz et al. Comment at 1 (“[S]imilar mechanisms are used to enforce vehicle safety standards, which in turn encourage car manufacturers to offer better safety features”). See also, e.g., Mark Vickers Comment (individual comment advocating “Borrow[ing] Principles from the Food and Drug Administration”).

74 See, e.g., Data & Society Comment at 7; Google DeepMind Comment at 3; Global Partners Digital Comment at 15; Hitachi Comment at 10; TechNet Comment at 4; NCTA Comment at 4-5; Centre for Information Policy Leadership Comment at 1; Access Now Comment at 3-5; BSA | The Software Alliance Comment at 12; U.S. Chamber of Commerce Comment at 9 (noting the need for federal privacy protection); Business Roundtable Comment at 10 (supporting passage of a federal privacy/consumer data security law to align compliance efforts across the nation); CTIA Comment at 1, 4-7; Salesforce Comment at 9.