
Recognize Potential Harms and Risks

March 27, 2024
Earned Trust through AI System Assurance

Many commenters, especially individual commenters, expressed serious concerns about the impact of AI. The potential harms and risks of AI systems have been well documented elsewhere.33 The following are representative examples, which also appeared in comments:

  • Inefficacy and inadequate functionality.
    • Inaccuracy, unreliability, ineffectiveness, insufficient robustness.
    • Unfitness for the use case.
  • Lowered information integrity.
    • Misleading or false outputs, sometimes coupled with coordinated campaigns.
    • Opacity around use.
    • Opacity around provenance of AI inputs.
    • Opacity around provenance of AI outputs.
  • Safety and security concerns.
    • Unsafe decisions or outputs that contribute to harmful outcomes.
    • Capacities falling into the hands of bad actors who intend harm.
    • Adversarial evasion or manipulation of AI.
    • Obstacles to reliable control by humans.
    • Harmful environmental impact.
  • Violation of human rights.
    • Discriminatory treatment, impact, or bias.
    • Improper disclosure of personal, sensitive, confidential, or proprietary data.
    • Lack of accessibility.
    • The generation of non-consensual intimate imagery of adults and of child sexual abuse material.
    • Labor abuses involved in the production of AI training data.
  • Impacts on privacy.
    • Exposure of non-public information through AI analytical insights.
    • Use of personal information in ways that are contrary to the contexts in which it was collected.
    • Overcollection of personal information to create training datasets or to unduly monitor individuals (such as workers and trade unions).
  • Potential negative impact to jobs and the economy.
    • Infringement of intellectual property rights.
    • Infringements on the ability to form and join unions. 
    • Job displacement, reduction, and/or degradation of working conditions, such as increased monitoring of workers and the potential mental and physical health impacts.
    • Undue concentration of power and economic benefits.

Individual commenters reflected the American public's broader misgivings about AI.34 Three major themes emerged from many of the individual comments:

  • The most significant theme by the numbers was concern about intellectual property. Nearly half of all individual commenters (approximately 47%) expressed alarm that generative AI35 was ingesting copyrighted works as training material without the copyright holders’ consent, compensation, and/or attribution. They also expressed worries that AI could supplant the jobs of creators and other workers. Some of these commenters supported new forms of AI regulation that would require copyright holders to opt in to AI system use of their works.36
  • Another significant concern was that malicious actors would exploit AI for destructive purposes and develop their own systems for those ends. A related concern was that AI systems would not be subject to sufficient controls and would be used to harm individuals and communities, including through unlawfully discriminatory impacts, privacy violations, fraud, and a wide array of safety and security breaches.
  • A final theme concerned the personnel building and deploying AI systems, and the personnel making AI policy. Individual commenters questioned the credibility of the responsible people and institutions and doubted whether they had sufficiently diverse experiences, backgrounds, and inclusive practices to foster appropriate decision-making.

Potential AI system risks and harms inform NTIA’s consideration of accountability measures. AI system developers and deployers should be responsible for managing the risks of their systems. As AI systems multiply and diffuse into society and the marketplace, customers, workers, consumers, and those affected by AI need assurance that these systems work as claimed and without causing harm. This is especially important for high-risk systems that are rights-impacting or safety-impacting.

33 Many of these risks are recognized in the AI EO, the AIBoR, and in the Office of Management and Budget, Proposed Memorandum for the Heads of Executive Departments and Agencies, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (Nov. 2023) at 24-25.

34 See Alec Tyson and Emma Kikuchi, Growing Public Concern About the Role of Artificial Intelligence in Daily Life, Pew Research Center (Aug. 28, 2023).

35 “The term 'generative AI' means the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.” AI EO at Sec. 3(p).

36 Stakeholders are deeply divided on some of these policy issues, such as the implications of “opt-in” or “opt-out” systems and compensation for authors, which are part of the U.S. Copyright Office’s inquiry and the USPTO’s ongoing work. This report recognizes the importance of these issues to the overall risk management and accountability framework without addressing their merits.