
Artificial Intelligence

Earned Trust through AI System Assurance

The emergence of artificial intelligence (AI) has brought about a new era of technology, with the potential to revolutionize industries, increase productivity, and improve the quality of life for people across the world.

AI can play a vital role in healthcare by enabling the early detection of diseases, providing personalized treatment options, and improving patient outcomes. In finance, AI can aid in detecting fraudulent activities, managing risks, and providing customized investment advice. In education, AI can personalize learning for individual students by assessing their strengths and weaknesses and adapting curricula to meet their needs. And in agriculture, it can help farmers make data-driven decisions to optimize crop yields, reduce carbon footprints, and increase efficiency.

While AI offers many potential advantages, it also poses risks of harm, and it is essential that these be accounted for both before and after the deployment of AI systems.

The automation capabilities of AI and machine learning have the potential to displace jobs and cause significant shifts in the labor market. AI systems can mirror and amplify existing biases and discrimination in society, leading to unfair and unjust outcomes. The vast quantities of data collected and processed by AI systems present high-profile targets for cyber attacks, data breaches, and misuse. AI systems can manipulate individuals and undermine democratic processes. Moreover, insufficient transparency and accountability around AI systems may hurt individuals, communities, and businesses. Finally, there are emergent risks that are not yet clear.

It is important that those who develop and deploy AI systems are accountable for their design and performance. That’s why NTIA is embarking on a new initiative to gather public comment and inform policymakers and other stakeholders about what steps might help to ensure these systems are safe, effective, responsible, and lawful.

The Department of Commerce’s National Telecommunications and Information Administration is the President’s principal advisor on information technology and telecommunications policy. In this role, the agency will help develop the policies necessary to verify that AI systems work as they claim – and without causing harm. Our initiative will help build an ecosystem of AI audits, assessments, certifications, and other policies to support AI system assurance and create earned trust.

President Biden has been clear that when it comes to AI, we must both support responsible innovation and ensure appropriate guardrails to protect Americans’ rights and safety. The White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights provides an important framework to guide the design, development, and deployment of AI and other automated systems. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework serves as a voluntary tool that organizations can use to manage risks posed by AI systems.


Related content

AI Accountability Policy Report

March 27, 2024
Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere. Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm.