Earned Trust through AI System Assurance

March 27, 2024

Food and Drug Regulation

Another potentially useful accountability model suggested by commenters can be found in health-related regulatory frameworks such as the FDA’s.358 The FDA regulates some AI systems as medical devices. To help manufacturers developing AI-enabled devices, “[i]t publishes best practices for AI in medical devices, documents commercially available AI-enabled medical devices, and has promised to perform relevant pilots and advance regulatory science in its AI action plan.”359

Beyond that, commenters pointed to the FDA requirement that medical device manufacturers prepare premarket submissions for FDA review before marketing a device; the submission requirements generally depend on the level of risk associated with the device. Devices are classified into three categories (Class I, II, and III), with regulatory controls increasing from Class I to Class III. Most Class I devices are exempt from premarket review, most Class II devices require submission of a premarket notification (“510(k)”), and most Class III devices require premarket approval.360 One commenter suggested that AI policy follow an analogous risk classification, with the regulatory burdens of premarket controls and disclosure applying to the highest-risk products.361

A premarket notification model for AI systems, analogous to the FDA’s notification-and-review process for some Class I and most Class II medical devices, could prove instructive for limited-risk AI systems and deployments, and would allow for some degree of regulatory oversight and harm reduction. On the other hand, a premarket notification requirement would likely create regulatory burden, potentially slowing or even disincentivizing development.362

The FDA also maintains an adverse event database that could be instructive for AI system accountability.363 That system is similar to the Federal Aviation Administration’s Aviation Safety Reporting System: both collect reports of safety incidents to support transparency, review, and risk management of already deployed systems. In the AI context, a similar reporting structure would enable users and subjects of AI systems to recognize and report adverse incidents, as discussed in the AI System Disclosures section. One risk is over-reporting if reporting parameters are not carefully defined and the reporting platform is not well managed. Regulatory oversight or coordination would help to organize this kind of reporting function.

Additional accountability models overseen by the FDA include requirements for evidence-based drug testing and clinical trials, as well as disclosure of residual risk in the form of side effects.364 Finally, the FDA provides guidance on the labeling of AI systems deployed within its remit, and one commenter argued that requiring a form of marketing approval, along with similar measures, would support “a more transparent understanding of how these systems operate.”365 These oversight mechanisms, which combine premarket review with postmarket reporting, should be considered in the context of AI accountability, at least for high-risk systems, models, and uses.

358 See, e.g., Carnegie Mellon University Comment at 3; Unlearn.AI Comment at 2.

359 Alex Engler, The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment, Brookings Institution (Apr. 25, 2023) (citing FDA efforts). See also The Pew Charitable Trusts, How FDA Regulates Artificial Intelligence in Medical Products (Aug. 5, 2021).

360 Food and Drug Administration, How to Study and Market Your Device (Sept. 2023).

361 Grabowicz et al. Comment at 6. See also Andrew Tutt, An FDA for Algorithms, 69 Admin. L. Rev. 83 (2017) (presenting a general argument about the analogy between FDA regulation and algorithmic risk management).

362 See Grabowicz et al. Comment at 6.

363 See Raji et al., Outsider Oversight, supra note 253, at 561.

364 ForHumanity Comment at 4; Carnegie Mellon University Comment at 4.

365 Grabowicz et al. Comment at 4.