Fund and Facilitate Growth of the Accountability Ecosystem
Commenters noted that the workforce available to conduct AI system evaluations is currently inadequate, particularly given the demands of sociotechnical inquiries, the varieties of expertise required, and supply constraints on that workforce.64 In addition, commenters cited inadequate access to data and compute (computing power, in the AI context), inadequate funding, and incomplete standardization as further barriers to developing accountability inputs.65 Commenters also expressed concern that auditors can be captured by the auditees who hire them.66
Recognizing possible deficiencies in the supply, resources, and independence of AI evaluators, NTIA favors more federal support for independent auditing and red-teaming.67 Such support could take the form of facilitating system access, funding education, conducting and funding research, sponsoring prizes and competitions, providing datasets and compute, and hiring into government. At the same time, the federal government should build capacity to conduct evaluations itself and serve as a backstop to ensure that independent auditors deliver adequate assurance. The sequencing and prioritization of these efforts are urgent questions for policymakers.
64 See, e.g., Anthropic Comment at 3 (“[R]ed teaming talent currently resides within private AI labs.”); International Association of Privacy Professionals (IAPP) Comment at 2 (“[S]ubstantial gap between the demand for experts to implement responsible AI practices and the professionals who are ready to do so”).
65 See infra Ecosystem Requirements Section.
66 See, e.g., Center for Democracy and Technology Comment at 28 (“auditing firms may be subject to capture by providers since providers may be reluctant to retain auditors that conduct truly independent and rigorous audits as compared to those who engage in more superficial exercises...”).
67 See OMB Draft Memo at 22.