
Cybersecurity and Privacy Accountability Mechanisms

March 27, 2024
Earned Trust through AI System Assurance

With some exceptions, the current regulatory paradigms governing cybersecurity and data privacy lack uniformity at the federal level. Many extant federal laws concerning personal data and cybersecurity focus on select industries and subcategories of data.366 While NIST has developed voluntary risk management and cybersecurity frameworks that leave entities to determine the acceptable level of risk for achieving their organizational objectives,367 the implementation of these frameworks varies across organizations and industries.368 Privacy laws also vary from state to state.

One instrument of consistent federal law is the Federal Trade Commission Act’s application to data security and privacy. In the past twenty years, the FTC has brought dozens of law enforcement actions alleging that businesses had engaged in unfair or deceptive trade practices related to data security or privacy.369 Among other things, the FTC has alleged deception where it has had reason to believe that companies have not lived up to their own public statements about their data privacy or security practices (e.g., where the companies represented that they would take reasonable or industry-standard measures but failed to do so, or where companies shared information with third parties that they had claimed would not be shared). The FTC has alleged unfair data security and privacy practices where it determined that businesses’ practices were likely to cause data security or privacy harm to consumers, and harm was not outweighed by countervailing benefits. To remedy such violations, the FTC has obtained relief, including injunctions requiring businesses to develop and implement comprehensive data security and/or privacy programs. In many cases, it required businesses to undergo third-party audits for compliance with such injunctions.370 The FTC has also promulgated guidance distilling the facts from its enforcement cases into data security lessons for companies.371

We discerned in the comments three basic perspectives on what we can learn from cybersecurity and privacy assurance practices and governance regimes: Some commenters believed that those practices and regimes should not be a model for AI. Others thought they were capacious enough to include AI assurance. Still others believed they could be extended and replicated to advance AI assurance.

Commenters who thought cybersecurity frameworks adequate to handle AI assurance reasoned in part that cybersecurity practices are mature, having been tested and refined through years of legal interpretation and application, and therefore offer greater consistency and predictability.372 Indeed, existing laws and regulatory requirements that set cybersecurity standards for distinct industries already apply when AI deployments in those industries affect cybersecurity.373 In addition, there is an infrastructure for certifying cybersecurity and privacy auditors, and at least some of those certification programs are rolling out AI assurance certifications.374

Others thought that while existing cybersecurity and privacy practices are probably inadequate for AI accountability, those practices could be modified to accommodate new risks. For example, cybersecurity audits could be conducted on a regular basis to review conformity with existing standards, including the ISO/IEC 27001 information security standard and NIST’s cybersecurity framework.375 Other suggestions borrowed from the cybersecurity context included creating incentives for companies to facilitate “responsible disclosure”;376 developing red-teaming exercises;377 launching “Bug Bounty” programs to encourage disclosure and financially reward detection of AI vulnerabilities;378 and modeling AI vulnerability disclosures on the Common Vulnerabilities and Exposures (CVE) system, which provides a standardized naming scheme for cybersecurity vulnerabilities.379
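To make the CVE analogy concrete, the sketch below (in Python) illustrates what a standardized record for a disclosed AI vulnerability might look like if it borrowed CVE’s naming convention (CVE-YYYY-NNNN). The “AIVE” prefix, the field names, and the example entry are hypothetical illustrations under that assumption, not an existing scheme.

import re
from dataclasses import dataclass, field

# Hypothetical identifier format mirroring CVE's "CVE-YYYY-NNNN" convention.
AIVE_ID_PATTERN = re.compile(r"^AIVE-\d{4}-\d{4,}$")

@dataclass
class AIVulnerabilityRecord:
    """A CVE-style entry for a reported AI system vulnerability (illustrative only)."""
    aive_id: str              # standardized identifier, e.g., "AIVE-2024-0001"
    summary: str              # plain-language description of the flaw
    vulnerability_class: str  # e.g., "prompt injection", "training-data poisoning"
    affected_systems: list[str] = field(default_factory=list)
    references: list[str] = field(default_factory=list)  # advisories, papers, patches

    def __post_init__(self) -> None:
        # Enforce the standardized naming scheme, as CVE does for its identifiers.
        if not AIVE_ID_PATTERN.match(self.aive_id):
            raise ValueError(f"{self.aive_id!r} does not follow the AIVE-YYYY-NNNN format")

# Example: registering a hypothetical disclosure.
record = AIVulnerabilityRecord(
    aive_id="AIVE-2024-0001",
    summary="Crafted prompts bypass the deployed model's content filter.",
    vulnerability_class="prompt injection",
    affected_systems=["ExampleChat v2.1"],
)

As with CVE, the value of such a scheme would lie less in any particular record format than in the shared identifier, which lets researchers, vendors, and regulators refer unambiguously to the same vulnerability.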

These are all sound ideas that merit further consideration, especially a bounty program for AI vulnerability detection. Any federal government bodies tasked with horizontal regulation of AI should include capacity analogous to that of the Cybersecurity and Infrastructure Security Agency (CISA), which helps organizations improve their cybersecurity practices.380 Aspects of the National Cybersecurity Strategy could also be applied to AI, including harmonizing reporting requirements, adverse incident disclosures, and risk metrics throughout the Federal government.381 As in the cybersecurity context, law enforcement is an essential companion to self-regulation.

We recommend that future federal AI policymaking not lean entirely on purely voluntary best practices. Rather, some AI accountability measures should be required, pegged to risk.382 We are convinced that AI accountability policy can employ, adapt, and expand upon existing cybersecurity and privacy infrastructure, while adopting a risk-based framework. At the same time, AI accountability poses new challenges and requires new approaches. It is to some of those new recommended approaches that the Report now turns.



366 Stephen P. Mulligan and Chris D. Linebaugh, Data Protection Law: An Overview, Congressional Research Service, at 2 (March 25, 2019).

367 NIST, Framework for Improving Critical Infrastructure Cybersecurity Version 1.1 (April 16, 2018).

368 Id.

369 See, e.g., FTC v. Sandra L. Rennert, et al., Docket No. CV-S-00-0861-JBR (D. Nev. 2000); In the Matter of Eli Lilly and Company, FTC Docket No. C-4047 (2002); In the Matter of BJ's Wholesale Club, FTC Docket No. C-4148 (2005).

370 Fed. Trade Comm’n v. Wyndham Worldwide Corp., 799 F.3d 236, 257 (3d Cir. 2015). See also Federal Trade Commission, Start with Security: A Guide for Business (June 2015) (presenting “lessons” from “more than 50 law enforcement actions the FTC has announced so far” against businesses).

371 See id.

372 See, e.g., USTelecom Comment at 9.

373 For example, the FAA currently uses special conditions, as provided for in its regulations, to address novel or unusual design features not adequately addressed by existing airworthiness standards, including the cybersecurity of certain e-enabled aircraft. This approach would potentially be extensible to AI impacting cybersecurity. See 14 C.F.R. § 11.19. For issues that become apparent after an aircraft or other aeronautical product enters the marketplace, the FAA issues airworthiness directives in appropriate cases; specifically, the "FAA issues an airworthiness directive addressing a product when we find that: (a) An unsafe condition exists in the product; and (b) The condition is likely to exist or develop in other products of the same type design." See 14 C.F.R. § 39.5.

374 See, e.g., IAPP Comment at 5-6.

375 Rachel Clinton, Mira Guleri, and Helen He Comment at 2 (“Any AI system collecting any kind of data should be audited at least once a year to ensure compliance with the following: ISO (International Organization for Standardization) 27001 [and] NIST CSF (National Institute of Standards and Technology Cybersecurity Framework)”).

376 See, e.g., AI Policy and Governance Working Group Comment at 3.

377 See, e.g., Anthropic Comment at 9-10; Microsoft Comment at 6-7.

378 Google DeepMind Comment at 17. Relatedly, federal agencies and departments are standing up “bias bounty” programs to address bias in AI systems. See, e.g., Matthew Kuan Johnson, Funding Opportunity from my team to build and run a DoD-wide Bias Bounty Program.

379 See, e.g., The AI Risk and Vulnerability Alliance (ARVA) Comment at 1-2.

380 See, e.g., Center for AI Safety Comment, Appendix A, at 3.

381 See The White House, supra note 292.

382 See Rachel Clinton, Mira Guleri, and Helen He Comment at 2.