Building an Ecosystem for AI Accountability
AI Accountability Policy
Information Society Project at Yale Law School
Remarks of Alan Davidson
Assistant Secretary of Commerce for Communications and Information
National Telecommunications and Information Administration
New Haven, Conn.
March 27, 2024
As prepared for delivery
At NTIA, our goal is to make sure important technologies – from broadband to spectrum to emerging innovations like AI – are developed in the service of people and progress.
Today, there is no better example of that challenge than the conversation around machine learning and artificial intelligence.
Responsible AI innovation can – and will – bring enormous benefits to people. It is going to transform every corner of our economy, from advances in medicine to precision agriculture.
But we will only realize the promise of AI if we also address the serious risks it raises today. Those include concerns about safety, security, privacy, discrimination and bias. Risks from disinformation. Impact on the labor market. These are not hypothetical, long-term risks; these are risks that exist right now.
For these reasons and more, there is a strong sense of urgency across the Biden Administration – and among governments around the world – to engage on these issues.
At the federal level, President Biden's AI Executive Order is the most significant government action to date on AI. It brings the full capabilities of the U.S. government to bear in promoting innovation and trust in AI.
The Administration has also secured commitments from some of the most impactful AI developers. And we are working with our allies around the world to create a unified approach to addressing AI risk globally while promoting innovation at home.
The Commerce Department is playing a leading role in the Administration's AI work. The Department is focusing on safety, security, privacy, innovation, equity, and intellectual property concerns. Our colleagues at NIST are standing up a new AI Safety Institute to do the technical work required to address the full spectrum of AI-related risks. The Patent and Trademark Office is exploring intellectual property issues, including copyright.
At NTIA, we are doing our part as well.
We are deeply involved in the Commerce Department's AI policy work and international efforts, including the G7's Code of Conduct for AI developers.
The Executive Order also gave us an important assignment: Developing policy recommendations around widely available model weights for dual-use foundation models. Today, open models are some of the most important and consequential pieces of the AI ecosystem. And there are both risks and benefits associated with them. Our comment period closes today – there's still time to submit – and we will have a report to the president in July with our recommendations on how to both protect safety and promote innovation. Stay tuned.
But what brings us together today is an initiative we started over 18 months ago, before ChatGPT was a household name: AI Accountability.
We set out to answer an important question: If we want responsible innovation and trustworthy AI, how do we hold AI systems — and the entities and individuals that develop, deploy, and use them — accountable? How do we ensure that they are doing what they say? For example, if an AI system claims to keep data private, or operate securely, or avoid biased outcomes – how do we ensure those claims are true?
Today, I'm proud to announce our contribution to this debate: NTIA's AI Accountability Policy Report. It identifies policies and investments that will help create trust that AI systems work as claimed – and without causing harm.
The Report calls for improved transparency into AI systems, independent evaluations, and consequences for imposing risks.
One key recommendation: The government ought to require independent audits of the highest-risk AI systems – such as those that directly impact physical safety or health.
To do that, we will need to build an ecosystem around AI accountability and auditing that starts with greater transparency into how models and systems work, runs through reviews and audits of AI performance, and includes real consequences.
Think of the financial auditing system. We have a whole ecosystem around financial audits. We can tell if a company actually has the money it claims because we have a broadly accepted set of accounting and compliance practices. And we have a system of accrediting auditors and holding them responsible.
That is the kind of system we need to build for AI.
And if we can build it, AI accountability will unleash the potential of this technology by helping developers and deployers show that their systems work as intended. This will in turn boost public – and marketplace – confidence in these tools.
In conclusion, achieving safe and trustworthy AI innovation is an ambitious effort.
But this is about more than AI policy and machine learning. This is also an early test of our ability to govern AI systems and other new technologies like them.
The AI revolution is the kind of big technological change that will impact all parts of our society. But there will be more changes like it to come. And it will test whether we can develop the tools to manage those changes. Our ability to respond to new technology matters both to people's lives and to our society as a whole.
If we can't get this right, we risk exacerbating the long-standing inequities in our society, emboldening authoritarians, and creating greater concentrations of economic power.
But if we get this right, the AI revolution will be about creating economic opportunity and new jobs. Improving equity at home. Promoting human rights and fundamental freedoms around the world. Tackling the big challenges facing our planet.
We have the opportunity, right now, to make decisions that will ensure this technology benefits people and human progress.
This is our moment. The decisions we make now can lead us to a world where technology works in service of a more open, free, equitable and just society.
Together, I know we can build that better version of our future. Thank you.