Liability Rules and Standards

March 27, 2024
Earned Trust through AI System Assurance

As a threshold matter, we note that a great deal of work is being done to understand how existing laws and legal standards apply to the development, offering for sale, and/or deployment of AI technologies.

Some federal agencies have taken positions within their respective jurisdictions. In a joint statement, for instance, the Federal Trade Commission, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau stated that “[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.” 286 For example, the FTC has taken action against companies that have engaged in allegedly deceptive advertising about the capabilities of algorithms.287 In some cases, the FTC has obtained relief including the destruction of algorithms developed using unlawfully obtained data.288 Moreover, the Consumer Financial Protection Bureau has made clear that the requirement to provide explanations for credit denials applies to algorithmic systems.289 The Equal Employment Opportunity Commission has issued technical assistance and provided additional resources intended to educate various stakeholders about compliance with federal civil rights laws when using algorithmic tools for employment-related decisions.290 Other agencies are examining AI-related legal issues, such as the work underway at the Copyright Office and USPTO concerning intellectual property and the Department of Labor concerning labor protections. The courts are also examining a broad range of issues, as are industry and civil society groups.

Nevertheless, the comments evinced a need for more clarity on the precise application of existing laws and the potential contours of new laws in the AI space to benefit everyone along the AI value chain, including consumers, customers, users, researchers, auditors, investors, creators, manufacturers, distributors, developers, and deployers.291 The National Cybersecurity Strategy attempted to do this in the cyber context by laying out, in broad strokes, a preferred allocation of liability and an agenda to incentivize better cybersecurity practices.292 How AI liability should operate is an issue largely beyond the scope of this Report, and will undoubtedly be worked out in courts, agencies, and legislatures over time.293 The European Commission, moreover, has proposed a bespoke AI liability regime;294 if adopted, this regime could affect risk mitigation and allocation outside of Europe as well.

The record and research surface needs for more clarity on AI-related liability, including on the following interrelated issues:

  • Who should be legally responsible for harms stemming from AI systems and how should such responsibility be shared among key players? What is the place of strict or fault-based liability for harms caused by AI? How should ex ante AI regulation or best practices interact with ex post liability? Should auditors be liable for faulty audits, not only as service providers to clients, but also as public fiduciaries? Should some AI actors bear a larger share of the responsibility than others based on their relative abilities to identify and mitigate risks flowing from AI models and/or systems?295
  • Are the various liability frameworks that already govern AI systems (e.g., in civil rights and consumer protection law, labor laws, intellectual property laws, contracts, etc.) sufficient to address harms, or are new laws needed to respond to any unique challenges?296 What influence and impact, if any, might external legal regimes—including the European Union’s AI Act and AI Liability Directive—have on state and federal liability systems?297
  • How should liability rules avoid stifling bona fide research, accountability efforts, or innovative uses of AI? What safeguards, safe harbors, or liability waivers for entities that undertake research and trustworthy AI practices, including adverse incident disclosure, should be considered?

AI accountability inputs can assist in the development of liability regimes governing AI by providing people and entities along the value chain with information and knowledge essential to assess legal risk and, as needed, exercise their rights.298 It can be difficult for those who have suffered AI-mediated employment discrimination, financial discrimination, or other AI system-related harms to bring a legal claim because proof, or even recognition, that an AI system led to harm can be hard to come by; thus, even if an affected party could, in theory, bring a case to remedy a harm, they may not do so because of information and knowledge barriers.299 Accountability inputs can help people harmed by AI understand causal connections and, therefore, determine whether to pursue legal or other remedies.300

As a comment from twenty-three state and territory attorneys general stated, “[b]y requiring appropriate disclosure of key elements of high-risk AI systems, individuals can be empowered to decide what systems are fair and adhere to critical due process norms.”301 AI accountability inputs can make it easier to bring cases and vindicate interests now or in the future.302 At the same time, entities that may be on the other end of litigation (e.g., AI developers and deployers alleged to have caused or contributed to harm) can also benefit from more information flow about defensible processes.303

The creation of safe harbors from liability is relevant to AI accountability, whether the party sheltered in that harbor is an AI actor or an independent researcher. The Administration's National Cybersecurity Strategy, for example, recommends the creation of safe harbors in connection with new liability rules for software.304 A small minority of commenters addressed safe harbor issues. Some expressed doubt that safe harbors for AI actors in connection with AI system-related harms would be appropriate.305 A number of commenters argued that researchers (or a defined class of them) and perhaps some auditors should enjoy a safe harbor from various kinds of liability in connection with bona fide efforts to evaluate AI systems.306 A related approach is to create regulatory sandboxes for high-risk AI systems so that AI actors and regulators can learn about AI system risks in a controlled environment for a limited period of time, without unduly exposing the public to AI risks or the AI actors to regulatory risks.307 The OECD has a workstream related to this topic.308 A safe harbor might also be considered to facilitate safety-related information-sharing among companies. These options should be thoroughly examined with input not only from direct safe harbor beneficiaries, but also from affected individuals and communities.



286 Joint Statement on Enforcement Efforts, supra note 11, at 1.

287 See, e.g., Complaint, FTC v. Lasarow et al. (2015) at 4, 9 (alleging deception, where defendants claimed to use one or more mathematical algorithms to measure specific characteristics of skin moles from digital images captured by a consumer's mobile device in order to detect melanoma). The FTC eventually reached a settlement with the defendants.  See Federal Trade Commission, “Melanoma Detection” App Sellers Barred from Making Deceptive Health Claims (August 13, 2015); Federal Trade Commission, FTC Cracks Down on Marketers of “Melanoma Detection” Apps (February 23, 2015). See also U.S. Department of Justice, Justice Department and Meta Platforms Inc. Reach Key Agreement as They Implement Groundbreaking Resolution to Address Discriminatory Delivery of Housing Advertisements (January 9, 2023) (Fair Housing Act settlement requiring Facebook to change its advertisement delivery system algorithm).

288 See, e.g., Final Order, In the Matter of Cambridge Analytica, LLC, FTC Docket No. 9383 (2019), at 4; Decision, In the Matter of Everalbum, FTC Docket No. C-4743 (2022), at 5. See also FTC v. Ring LLC, No. 1:23-cv-1549 (D.D.C. 2023) (proposed stipulated order).

289 See Consumer Financial Protection Bureau, Consumer Financial Protection Circular 2022-03 (May 26, 2022).

290 See Equal Employment Opportunity Commission, Artificial Intelligence and Algorithmic Fairness Initiative; Equal Employment Opportunity Commission, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023); Equal Employment Opportunity Commission, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022).

291 See, e.g., Google DeepMind Comment at 25 (stating that policymakers should “clarify[] liability for misuse/abuse of AI systems by various participants—researchers and authors, creators (including open-source creators) of general-purpose and specialized systems, implementers, and end users[.]”); Open MIC Comment at 8 (“The lack of clarity regarding liability for AI-related harms puts both investors and rights-holders at risk.”); Public Knowledge Comment at 2 (“We must address uncertainty about where liability lies for AI-driven harms to ensure that stakeholders at every phase of the AI lifecycle are contributing responsibly to the overall health of our AI ecosystem.”); Georgetown University Center for Security and Emerging Technology Comment at 15 (“[T]he liability of developers for harms caused by their AI models should be clarified to avoid entirely unregulated spaces.”); STM Comment at 3 (“At a minimum, clarity and transparency are required in the use of IP and copyright, and as part of any liability regime. AI systems can use huge volumes of copyright materials in the training process and as part of any commercial deployment, therefore transparency obligations will be necessary to enable rights holders to trace copyright infringements in content ingested by AI systems.”). Some commenters suggested that the Federal government should provide guidance on legal regimes, which could influence liability frameworks. See, e.g., US Telecom Comment at 3 (“Additionally, there is a role for the Federal government to address the emerging problem of inconsistent state laws [related to AI accountability] in an economically sensible manner.”); AFL-CIO Comment at 4 (“The Federal government should construct regulatory structures that preclude AI systems from being deployed if they have the potential to violate U.S. laws and regulations, undermine democratic values, violate people’s rights, including labor rights and employment law.”). Cf. HRPA Comment at 7 (“We believe that the Federal government should coordinate its efforts to promulgate guidelines and requirements on artificial intelligence in the employment context. Where possible, we encourage NTIA to look for ways to promote consistency between Federal and state efforts.”). Some commenters also raised more discrete topics that might also be appropriate to consider in the context of developing clearer liability rules for harms stemming from AI systems, such as who is responsible for contributing to remedies. See, e.g., Global Partners Digital Comment at 8 (“The liability regime established by the accountability regime should account for the way in which developers of foundational models and implementers should contribute to remedy in case of harm.”). One commenter suggested the adoption of specific statutes imposing criminal liability for the misuse of AI. Ellen S. Podgor Comment at 1.

292 The White House, National Cybersecurity Strategy (2023) at 21 (“The Administration will work with Congress and the private sector to develop legislation establishing liability for software products and services. Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios.”).

293 See, e.g., DLA Piper Comment at 10 (“Courts shape precedent around accountability for harm and influence developer behavior through risk of liability suits or fines for issues like injuries, discrimination, violations of due process, etc.”).

294 European Commission, Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (Reference COM(2022) 496), EUR-Lex (September 28, 2022), at 2.

295 See, e.g., Anthropic Comment at 7 (“Liability regimes that hold model developers solely responsible for all potential harms could hinder progress in AI.”); Campaign for AI Safety Comment at 3-4 (“Legislators should pass laws that clarify the joint legal culpability of AI labs, AI providers and parties that employ AI for AI harms” and analogizing to “polluter pays” and manufacturer liability for product safety defects); The Future Society Comment at 12 (“Transferring absolute liability to third-party auditors would erroneously presuppose their capability to audit for novel risks. . . . Shared liability between developers, deployers, and auditors encourages all involved parties to maintain high standards of diligence, enhances effective risk management, and fosters a culture of accountability in AI development and deployment.”); Global Partners Digital Comment at 3 (arguing that “[l]iability should be clearly and proportionately assigned to the level in which those different entities are best positioned to prevent or mitigate harm in the AI system performance.”); Cordell Institute for Policy in Medicine & Law Comment at 11 (“[P]olicymakers should consider vicarious liability and personal consequences for malfeasance by corporate executives”); ACT | The App Association Comment at 2 (“Providers, technology developers and vendors, and other stakeholders all benefit from understanding the distribution of risk and liability in building, testing, and using AI tools. . . . [T]hose in the value chain with the ability to minimize risks based on their knowledge and ability to mitigate should have appropriate incentives to do so”); Georgetown University Center for Security and Emerging Technology Comment at 1 (“Due to the large variety of actors in the AI ecosystem, we recommend designing mechanisms that place clear accountability on the actors who are most responsible for, or best positioned to, influence a certain step in the value chain”). See also Salesforce Comment at 8 (“AI developers like Salesforce often create general customizable AI tools, whose intended purpose is low-risk, and it is the customer’s responsibility (i.e., the AI deployer) to decide how these tools are employed. . . . It is the customer, and not Salesforce, that knows what has been disclosed to the affected individual, and what the risk of harm is to the affected individual.”).

296 See, e.g., Senator Dick Durbin Comment at 2 (“[W]e must also review and, where necessary, update our laws to ensure the mere adoption of automated AI systems does not allow users to skirt otherwise applicable laws (e.g., where the law requires ‘intent.’)”); ICLE Comment at 15 (“[T]he right approach to regulating AI is not the establishment of an overarching regulatory framework, but a careful examination of how AI technologies will variously interact with different parts of the existing legal system”); Open MIC Comment at 8 (“Legal experts are divided regarding how AI-related harms fit into existing liability regimes like product liability, defamation, intellectual property, and third-party-generated content.”); CDT Comment at 33 (“The greatest challenge in successfully enforcing a claim against AI harms under existing civil rights and consumer protection laws is that the entities developing and deploying AI are not always readily recognized as entities that traditionally have been covered under these laws. This ambiguity helps entities responsible for AI harms claim that existing laws do not apply to them.”); HRPA Comment at 5 (“The use of technology in the employment context is already subject to extensive regulation which should be taken into consideration when developing any additional protections. In the United States alone, Federal and state laws dealing with anti-discrimination, labor policy, data privacy, and AI-specific issues affect the use of AI in the employment context.”); Georgetown University Center for Security and Emerging Technology Comment at 10 (noting that “[p]roduct liability law provides inspiration for how accountability should be distributed between upstream companies, downstream companies and end users.”); Boston University and University of Chicago Researchers Comment at 1-2 (arguing that accountability mechanisms are important for “(a) new or modified legal and regulatory regimes designed to take into account assertions, evidence and similar information provided by AI developers relevant to intended or known users of their products, and (b) existing regimes such as product liability, consumer protection, and other laws designed to protect users and others against harm.”).

297 See, e.g., SaferAI Comment at 2 (“We believe that the article 28 of the EU AI Act parliament draft lays out useful foundations on which the US could draw upon in particular regarding the distribution of the liability along the value chain to make sure to not hamper innovation from SMEs, which is one of EU’s primary concerns.”); Association for Intelligent Information Management (AIIM) Comment at 3 (“This approach – classifying AI into different categories and establishing policy accordingly – aligns with the European Union’s AI Act, which is currently working its way through their legislative processes. While AIIM is not indicating its support for this legislation nor advocating for the U.S. government to adopt similar policy, the premise is commendable.”); Georgetown University Center for Security and Emerging Technology Comment at 6 (“Accountability mechanisms should make sure to clearly define what different actors in the value chain are accountable for, and what information sharing is necessary for that party to fulfill their responsibilities. For example, the EU parliament’s proposal for the AI Act requires upstream AI developers to share technical documentation and grant the necessary level of technical access to downstream AI providers such that the latter can assess the compliance of their product with standards required by the AI Act.”); ICLE Comment at 9-11 (criticizing the proposed EU AI Act’s “broad risk-based approach.”).

298 While accountability inputs can play an important role in the assigning of liability, we note that these inputs do not in themselves supplant appropriate liability rules. See, e.g., The Future Society Comment at 8 (“Third-party assessment and audits must not be perceived as silver bullets. . . . Furthermore, external audits, in particular, may be subject to liability-washing (companies seeking to conduct external audits with the ulterior motivation of evading liability).”); Cordell Institute for Policy in Medicine & Law Comment at 3 (“Governance of AI systems to foster trust and accountability requires avoiding the seductive appeal of ‘AI half-measures’—those regulatory tools and mechanisms like transparency requirements, checks for bias, and other procedural requirements that are necessary but not sufficient for true accountability.”); Boston University and University of Chicago Researchers Comment at 2 (“[A]ccountability and transparency mechanisms are a necessary but not sufficient aspect of AI regulation. . . . To be effective, a regulatory approach for AI systems must go beyond procedural protections to include substantive, non-negotiable obligations that limit how AI systems can be built and deployed.”). When AI transparency and system evaluations contribute additional information and knowledge that could be used to bring legal cases, the challenge may remain as to how to apply legal concepts to modern use situations involving AI, even when people agree a law may be applicable. See, e.g., Lorin Brennan, “AI Ethical Compliance is Undecidable,” 14 Hastings Sci. & Tech. L.J. 311, 323-332 (2023) (arguing that it is “unsettled how applicable law should be applied” in the context of AI ethical compliance).

299 See, e.g., CDT Comment at 34 (“Due to the lack of transparency in AI uses, the plaintiff may not have the information needed to even establish a prima facie case. They may not even know whether or how an AI system was used in making a decision, let alone have the information about training data, how a system works, or what role it plays in order to offer direct evidence of the AI user’s discriminatory intent or to discover what similarly situated people experienced due to the AI.”); Public Knowledge Comment at 12 (“Unfortunately, identifying the party responsible for introducing problems into the AI system can be challenging, even though the resulting harms may be evident. While much has been written on different legal regimes and their effectiveness in addressing AI-related harms, less attention has been given to determining the specific entities in the chain of development and use who bear responsibility.”).

300 See, e.g., OECD, Recommendation of the Council on Artificial Intelligence, Section 1.3 (2019). Cf. Danielle Citron, “Technological Due Process,” 85 Wash. U. L. Rev. 1249, 1253-54 (2008) (“Automation generates unforeseen problems for the adjudication of important individual rights. Some systems adjudicate in secret, while others lack recordkeeping audit trails, making review of the law and facts supporting a system’s decisions impossible. Inadequate notice will discourage some people from seeking hearings and severely reduce the value of hearings that are held.”).

301 Twenty-three Attorneys General Comment at 3. See also AI & Equality Comment at 2 (“[E]nabling AI-based systems with adequate transparency and explanation to affected people about their uses, capabilities, and limitations amounts to applying the due process safeguards derived from constitutional law in the analogue world to the digital world.”).

302 See, e.g., AI & Equality Comment at 2 (“[T]ransparency and explainability mechanisms play an important role in guaranteeing the information self-determination of individuals subjected to automated decision-making, enabling them to access and understand the output decision and its underlying elements, and thus providing pathways for those who wish to challenge and request a review of the decision.”) (emphasis added); CDT Comment at 22 (“[A] publicly released audit provides a measure of transparency, while transparency provides information necessary to determine whether liability should be imposed.”).

303 See, e.g., AIIM Comment at 3 (“[Organizations] are reluctant to implement new technology when they do not know their liabilities, don’t know if or how they will be audited or who will be auditing them, and are unclear about who may have access to their data, among other things. . . . For instance, insurance companies have had AI for years that can analyze images of crashes or other incidents to help make determinations about fault or awards, but companies have been afraid to use it out of fear of the potential liability if an AI-made decision is contested.”); Public Knowledge Comment at 11 (noting that understanding liability “is especially important to ensure that harms can be adequately addressed and also so that academic researchers, new market entrants, and users can engage with AI with clarity about their responsibilities and confidence surrounding their risk.”); DLA Piper Comment at 3 (“Undertaking accountability mechanisms reduces potential liabilities in the event of accidents or AI failures by showing diligent governance and responsibility were exercised.”); CDT Comment at 29 (“One of the key ways of ensuring accountability is the promulgation of laws and regulations that set standards for AI systems and impose potential liability for violations. Such liability both provides for redress for harms suffered by individuals and creates incentives for AI system developers and deployers to minimize the risk of those harms from occurring in the first place.”).

304 See The White House, supra note 292, at 20-21 (Strategic Objective 3.3).

305 See Senator Dick Durbin Comment at 2 (“And, perhaps most importantly, we must defend against efforts to exempt those who develop, deploy, and use AI systems from liability for the harms they cause.”); Global Partners Digital Comment at 10 (“Accountability needs to be embedded throughout the whole value chain, or more specifically, throughout the entire lifecycle of the AI system. . . . [L]iability waivers do not seem appropriate, and there is a clear need for a dynamic distribution of the legal liability in case of harm.”).

306 See, e.g., Engine Advocacy Comment at 4 (citing approvingly government safe harbor programs to encourage compliance, such as the FTC COPPA Safe Harbor Program and the HHS breach safe harbor program); Boston University and University of Chicago Researchers Comment at 8 (“[W]e encourage the enactment of legal protection for researchers seeking to study algorithms[.] . . .”); ACT-IAC Comment at 11 (supporting providing external auditors maximum system access, including through appropriate security clearances, coupled with “liability waivers and the ability to publish the review[s] externally—to the extent that clearance allows—to ensure transparency.”); Mozilla OAT Comment at 7 (“For [data] access, external auditors need safe harbors against retaliation for the publication of unfavorable results and custom tooling for data collection.”); AI Policy and Governance Working Group Comment at 3 (“The Federal government should consider the establishment of narrowly-scoped ‘safe harbor’ provisions for industry and researchers, designed to reasonably assure that entities participating in good faith auditing exercises are not subjected to undue liability risk or retaliation”).

307 See supra at note 69. See also Jon Truby, Rafael Dean Brown, Imad Antoine Ibrahim, and Oriol Caudevilla Parellada, “A Sandbox Approach to Regulating High-Risk Artificial Intelligence Applications,” European Journal of Risk Regulation, Vol. 13, No. 2, at 270–94 (2022) (arguing for a robust sandbox approach to regulating high-risk AI applications as a necessary complement to strict liability regulation); European Parliament, The Artificial Intelligence Act and Regulatory Sandboxes (“regulatory sandboxes generally refer to regulatory tools allowing businesses to test and experiment with new and innovative products, services or businesses under supervision of a regulator for a limited period of time. As such, regulatory sandboxes have a double role: 1) they foster business learning, i.e., the development and testing of innovations in a real-world environment; and 2) support regulatory learning, i.e., the formulation of experimental legal regimes to guide and support businesses in their innovation activities under the supervision of a regulatory authority. In practice, the approach aims to enable experimental innovation within a framework of controlled risks and supervision, and to improve regulators' understanding of new technologies.”) (internal emphasis omitted).

308 OECD, “Regulatory sandboxes in artificial intelligence,” OECD Digital Economy Papers, No. 356 (2023) (recommending that governments “consider using experimentation to provide a controlled environment in which AI systems can be tested and scaled up”). See also “Regulatory sandboxes can facilitate experimentation in artificial intelligence”.