Human Rights and ESG Assessments

March 27, 2024
Earned Trust through AI System Assurance

Financial accountability models and assurance methods are more mature than accountability mechanisms for human rights and ESG performance, which remain comparatively flexible. That flexibility is both an asset and a liability when considering these accountability regimes as models and vehicles for trustworthy AI evaluations.

Principal inputs for holding entities accountable for human rights harms are human rights impact assessments, “which are grounded in the [United Nations Guiding Principles on Business and Human Rights], a non-binding framework endorsed by the United Nations Human Rights Council in 2011.”347 Folding AI evaluations into human rights impact assessments is one way to ensure that AI evaluations take human rights into account and that human rights evaluations take AI into account.348 As one commenter put it, “there are benefits in using the same methodology and not burdening teams with performing several assessments in parallel.”349 A number of commenters suggested incorporating human rights assessment frameworks into standard review processes across the AI life cycle.350

In the United States, the SEC has adopted rules requiring climate-related disclosures for public companies.351 Companies are incorporating ESG disclosure models into their operations, using measurements from organizations modeled on financial accounting boards, such as the Sustainability Accounting Standards Board.352 Many other standards and methods are deployed in ESG evaluations.353 While ESG disclosure models are not currently designed to evaluate AI’s impact, commenters suggested incorporating AI and data practices more generally into the evaluation.354 For example, respect for individuals’ privacy rights is a human rights issue and, at the same time, a “social impact” issue within the bounds of the “S” in ESG.355

There is a risk, for ESG evaluations as well as for AI trustworthiness evaluations, that the goals and standards are too varied to yield meaningful results. One academic paper describes the problem as follows: “due to the ambiguity of what is being audited, ESG certifications risk becoming ‘cheap talk,’ rubber stamping practices without in fact promoting social responsibility.”356 Some questions may not be answerable. In the ESG context, this might be a question about supply chain responsibility; in the AI context, it might concern training data provenance and the labor conditions under which AI systems are trained. ESG evaluations have addressed this answerability problem by focusing on process rather than outcomes. In other words, auditees are expected to attest to their best efforts to obtain satisfactory outcomes, for example through their own supply chain audits and other measures. The design of AI evaluations might similarly look to appraise processes when outcomes escape measurement.357

The private sector continues to refine ESG evaluation frameworks and to seek their standardization. What the ESG assurance experience might teach is that multi-factored evaluations using a variety of standards may not immediately yield comparable or actionable results. However, the ESG auditing ecosystem has developed rapidly and become more standardized as stakeholders have demanded clarity around ESG performance and governments have required or incentivized better reporting.

347 European Center for Not-for-Profit Law and Data & Society, Recommendations for Assessing AI Impacts to Human Rights, Democracy, and the Rule of Law at 4 (2021).

348 See, e.g., The Investor Alliance for Human Rights Comment at 3 (advocating for the creation of “a robust and clear methodology for a human rights impact assessment process” with “specific criteria relevant to AI systems” and that “must be developed with the involvement of digital rights experts.”); AI & Equality Comment at 9 (Government should “consider the legal obligation to adhere to international human rights treaties and United States due process laws when creating AI accountability policy”); David Kaye, Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, United Nations (UN Document Symbol A/73/348) (August 29, 2018), at 20 (“When procuring or deploying AI systems or applications, States should ensure that public sector bodies act consistently with human rights principles. This includes, inter alia, conducting public consultations and undertaking human rights impact assessments or public agency algorithmic impact assessments prior to the procurement or deployment of AI systems.”).

349 Centre for Information Policy Leadership Comment at 7 (also noting that “there is no consensus on how to identify and assess human rights risks and harms and how to do this in an integrated way for all disciplines—AI, privacy, security, safety, children’s protection, etc.”).

350 Google DeepMind Comment at 18; The Investor Alliance for Human Rights Comment at 2; Global Partners Digital Comment at 4 (recommending codification of human rights standards for AI).

351 SEC, The Enhancement and Standardization of Climate-Related Disclosures for Investors (Final Rule) (Mar. 6, 2024) (requiring “registrants to provide certain climate-related information in their registration statements and annual reports” including “information about a registrant’s climate-related risks that have materially impacted, or are reasonably likely to have a material impact on, its business strategy, results of operations, or financial condition.”).

352 See generally, SASB Standards, SASB Standards and other ESG Frameworks: The Sustainability Reporting Ecosystem; SEC, supra note 351 (proposing a similar reporting format for public companies); Directive (EU) 2022/2464 of the European Parliament and of the Council of 14 December 2022 amending Regulation (EU) No 537/2014, Directive 2004/109/EC, Directive 2006/43/EC and Directive 2013/34/EU, as regards corporate sustainability reporting, OJ L 322 (Dec. 16, 2022) (adopting European Sustainability Reporting Standards that require ESG reporting for companies in the EU starting January 1, 2024).

353 See, e.g., Global Reporting Initiative (GRI), Carbon Disclosure Project (CDP), Task Force on Climate-Related Financial Disclosures (TCFD), and United Nations Sustainable Development Goals (SDG).

354 See, e.g., CAQ Comment at 7 (noting that AI safety standards are a predicate for evaluation as part of the ESG process).

355 See Centre for Information Policy Leadership Comment at 22.

356 Raji et al., Outsider Oversight, supra note 253, at 558. See also Open MIC Comment at 5 (“Without mandatory standards for AI audits and assessments, including those focused on measuring adverse impacts to human rights, there is an incentive for companies to ‘social wash’ their AI assessments; i.e. give investors and other stakeholders the impression that they are using AI responsibly without any meaningful efforts to ensure this.”).

357 See Grabowicz et al., Comment at 5 (“We propose AI accountability mechanisms based on explanations of decision-making processes; since explanations are automatically generated and highlight the true underlying model decision process”).