FACT SHEET: NTIA Urges Policy Changes to Boost Accountability and Trustworthiness in Artificial Intelligence Systems
Today, NTIA is issuing landmark recommendations to ensure that artificial intelligence (AI) systems do what they claim – without causing harm. The AI Accountability Policy Report calls for improved transparency into AI systems, independent evaluations, and consequences for imposing unacceptable risks or making unfounded claims.
As part of the Biden-Harris Administration’s work to seize the promise of AI while managing its risks, NTIA makes eight sets of policy recommendations to help support safe, secure, and trustworthy AI innovation.
Guidance
- Audits and auditors: The federal government should work with stakeholders to create guidelines for AI audits and auditors.
  - This includes refining guidance on audit design, setting evaluation standards for audits, and creating auditor certifications.
- Disclosure: The federal government should work with stakeholders to improve standard information disclosures.
  - Greater transparency is needed on AI system models, architecture, training data, input and output data, performance, limitations, appropriate use, and testing.
  - This could be achieved via model cards or a nutrition-label-style disclosure (see the illustrative sketch after this list).
- Liability standards: The federal government should work with stakeholders to make recommendations about applying existing liability rules and standards to AI systems, including who is held accountable for AI system harms.
  - As the application of existing liability standards gains clarity, the federal government should also explore supplementary liability standards where needed.
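For illustration only, here is a minimal sketch of what a structured, machine-readable model-card disclosure covering the fields listed above might look like. The schema, field names, and example values are hypothetical; the report does not prescribe any particular format.

```python
# Hypothetical model-card schema; the fields track the disclosure topics named
# in the report (architecture, training data, performance, limitations,
# appropriate use, testing) but are illustrative, not a prescribed format.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    architecture: str       # e.g., "gradient-boosted decision trees"
    training_data: str      # provenance and summary of training corpora
    performance: dict       # metric name -> score on a named test set
    limitations: list       # known failure modes and out-of-scope uses
    appropriate_use: list   # use cases the system was evaluated for
    testing: str            # evaluations performed, by whom, and when

# Example values are invented for illustration.
card = ModelCard(
    model_name="example-credit-screener-v1",
    architecture="gradient-boosted decision trees",
    training_data="de-identified 2019-2023 loan applications",
    performance={"accuracy": 0.91, "false_positive_rate": 0.04},
    limitations=["not evaluated on applicants under 21"],
    appropriate_use=["pre-screening with human review of all denials"],
    testing="independent fairness audit completed Q3 2023",
)

# Publish the disclosure as machine-readable JSON alongside the system.
print(json.dumps(asdict(card), indent=2))
```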
Support
- People and tools: The federal government should invest in the resources necessary to meet the national need for independent evaluations of AI systems, including by further supporting the U.S. AI Safety Institute and by establishing and funding a National AI Research Resource. Resources of particular importance include:
  - Datasets to test for equity, efficacy, and other attributes and objectives
  - Computing and cloud infrastructure required to conduct rigorous evaluations
  - Independent evaluation and red-teaming support, such as through prizes, bounties, and research support
  - Workforce development
- Research: Federal agencies should foster the creation of reliable and widely applicable tools to assess when AI systems are being used, on what materials they were trained, and what capabilities and limitations they have.
Regulations
- Audits and other independent evaluations: Federal agencies should require independent audits and regulatory inspections of high-risk AI model classes and systems – such as those that present a high risk of harming rights or safety.
  - For some models and systems, that process should take place both before release or deployment and on an ongoing basis.
- Boosting government capacity across sectors: The federal government should strengthen its capacity to address risks and practices related to AI across sectors of the economy. Tasks could include:
  - Maintaining registries of high-risk AI deployments, AI adverse incidents, and AI system audits.
  - Providing evaluation, certification, documentation, and disclosure oversight, as needed.
- Contracting: The federal government should require that government suppliers, contractors, and grantees adopt sound AI governance and assurance practices.
NTIA will work with private-sector stakeholders to build out accountability structures and with other parts of government to support the policy recommendations put forward in the report.