Artificial Intelligence
The emergence of artificial intelligence (AI) has ushered in a new era of technology, with the potential to revolutionize industries, increase productivity, and improve the quality of life for people across the world.
AI can play a vital role in healthcare by enabling the early detection of diseases, providing personalized treatment options, and improving patient outcomes. In finance, AI can aid in detecting fraudulent activities, managing risks, and providing customized investment advice. In education, AI can personalize learning for individual students by assessing their strengths and weaknesses and adapting curricula to meet their needs. And in agriculture, it can help farmers make data-driven decisions to optimize crop yields, reduce carbon footprints, and increase efficiency.
While AI offers many potential advantages, it also poses risks of harm, and it is essential that these be accounted for both before and after AI systems are deployed.
The automation capabilities of AI and machine learning have the potential to displace jobs and cause significant shifts in the labor market. AI systems can mirror and amplify existing biases and discrimination in society, leading to unfair and unjust outcomes. The vast quantities of data collected and processed by AI systems present high-profile targets for cyberattacks, data breaches, and misuse. AI systems can be used to manipulate individuals and undermine democratic processes. Moreover, insufficient transparency and accountability around AI systems may harm individuals, communities, and businesses. Finally, there are emergent risks that are not yet fully understood.
It is important that those who develop and deploy AI systems are accountable for their design and performance. That’s why NTIA is embarking on a new initiative to gather public comments and inform policymakers and other stakeholders about what steps might help ensure these systems are safe, effective, responsible, and lawful.
The Department of Commerce’s National Telecommunications and Information Administration is the President’s principal advisor on information technology and telecommunications policy. In this role, the agency will help develop the policies necessary to verify that AI systems work as they claim – and without causing harm. Our initiative will help build an ecosystem of AI audits, assessments, certifications, and other policies to support AI system assurance and create earned trust.
President Biden has been clear that when it comes to AI, we must both support responsible innovation and ensure appropriate guardrails to protect Americans’ rights and safety. The White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights provides an important framework to guide the design, development, and deployment of AI and other automated systems. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework serves as a voluntary tool that organizations can use to manage risks posed by AI systems.