Uncertainty in Future Risks and Benefits

Many benefits and harms of foundation models are already occurring. However, some of these risks are as yet speculative or unforeseen, even as other risk/benefit areas can be identified. Potential future outcomes are so uncertain that effective, definitive, long-term AI strategy-setting is difficult. Indeed, some effects may be considered harms by some people and benefits by others.208 While this is true for many of the benefits and risks discussed in this report, the observation particularly complicates classifying uncertain futures as beneficial or harmful, since doing so can imply a level of confidence that does not exist.

Importantly, new effects arise not from foundation models alone, but from where and how technology interacts with people and systems, such as the economy or our social relationships.209 Human creativity is a major driver of these new effects, and openness allows more human creativity to access dual-use foundation models. The economic impacts of AI, and in particular of large foundation models, become far more powerful when any company can tap into them, diversifying the number of use cases of these models.210 As is the case with any new technology, societal risks can emerge. For example, the ability to use dual-use foundation models to create deepfakes is problematic when a foreign agent can use it to disrupt American elections,211 but the risk is magnified when high school children everywhere can make a fake video of their classmates.212 Bias encoded in foundation models becomes far more powerful when those models are used to determine lending,213 health care,214 and prison terms.215 AI has more effects when it is more capable and when it interacts with more people and systems. Technological testing and evaluation are useful for examining models based on technical capabilities, but cannot anticipate the many ways that a model might be used when set loose in society.

Openness tends to increase the number of new effects and uses of technologies, including foundation models. Thus, open foundation models will, generally speaking, likely increase both the benefits and the risks that foundation models pose. However, there is significant uncertainty around the harms and benefits of any specific use,216 and closed models carry their own unique benefits and risks.

Though dual-use foundation models are relatively new technologies, the challenge of adapting to unknown technological effects is not. As the World Wide Web became popular, proponents claimed it was a place for people to come together collaboratively.217 Those proponents did not anticipate that this connection could, ironically, also lead to loneliness.218 Advanced AI technologies are a particularly challenging policy problem because AI develops so quickly219 and because business goals, rather than the public interest, largely drive innovation.220 Future policymakers will need to monitor risks and adapt as technology and society change.

Methods for managing uncertain problems like advanced AI have been studied under a variety of frameworks.221 Approaches to deal with the deep uncertainty around AI specifically include broad stakeholder participation,222 explicit and repeated evaluation of values used for decision-making,223 a focus on identifying and understanding hidden or potentially emergent issues to inform policymakers,224 mapping out potential futures and scenarios,225 and setting up dynamic plans involving potential actions, evaluations, and timelines for reevaluation under changing conditions.226

Summary of Risks and Benefits

Wide availability of open model weights for dual-use foundation models could pose a range of marginal risks and benefits. But models are evolving too rapidly, and extrapolation based on current capabilities and limitations is too difficult, to conclude whether open foundation models pose, on balance, more marginal risks than benefits (or vice versa), or to resolve the isolated trade-offs examined in specific sections of this report. For instance, how much do open model weights lower the barrier to entry for the synthesis, dissemination, and use of CBRN material? Do open model weights propel safety research more than they introduce new misuse or control risks? Do they bolster offensive cyber attacks more than they propel cyber defense research? Do they enable more discrimination in downstream systems than they promote bias research? And how do we weigh these considerations against the introduction and dissemination of CSAM/NCII content? The following policy approaches and recommendations consider these uncertain factors and outline how the U.S. government can work to assess this evolving landscape.


208 Faverio, M. & Tyson, A. (2023). What the data says about Americans’ views of artificial intelligence. Pew Research Center.

209 Sartori, L. & Theodorou, A. (2022). A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Ethics and Information Technology, 24(4).

210 See, e.g., Uber Comment at 2 (“...open-source models create a more level playing field and lower barriers to entry for AI use, ensuring that more individuals and organizations can access and improve upon existing technology.”).

211 Verma, P. & Zakrzewski, C. (April 23, 2024). AI deepfakes threaten to upend global elections. No one can stop them. Washington Post.

212 Singer, N. (April 8, 2024). Teen Girls Confront an Epidemic of Deepfake Nudes in Schools. New York Times.

213 Sadok, H., Sakka, F. & El Maknouzi, M. (2022). Artificial intelligence and bank credit analysis: A review. Cogent Economics & Finance, 10(1).

214 Juhn, Y. et al. (2022). Assessing socioeconomic bias in machine learning algorithms in health care: a case study of the HOUSES index. Journal of the American Medical Informatics Association, 29(7), 1142-1151.

215 Polonski, V. (2018). AI is convicting criminals and determining jail time, but is it fair? World Economic Forum: Emerging Technologies.

216 See, e.g., Hugging Face Comment at 10 (“In most cases, the risks associated with open-weight models are broadly similar to any other part of a software system (with or without AI components), and are similarly context-dependent.”) (citations and emphasis omitted); OpenAI Comment at 3 (“As AI models become even more powerful and the benefits and risks of their deployment or release become greater, it is also important that we be increasingly sophisticated in deciding whether and how to deploy a model. This is particularly true if AI capabilities come to have significant implications for public safety or national security. The future presence of such ‘catastrophic’ risks from more advanced AI systems is inherently uncertain, and there is scholarly disagreement on how likely and how soon such risks will arise.”) (quotation marks in original); Mozilla Comment at 11 (“There is so much unknown about the benefits of AI, and policymakers must not ignore this.”); Microsoft Comment at 15 (“Moreover, even when model and application developers take all reasonable precautions to assess and mitigate risks, mitigations will fail, unmitigated risks will be realized, and unknown risks will emerge. These risks could range from generating harmful content in response to a malicious prompt to the intentional exfiltration of an advanced AI model by a nation state actor.”). Cf. JHU Comment at 9 (“Given the uncertain nature of current and future open model capabilities, and the importance of open software, we are not suggesting that the DOC should impose export controls on dual-use foundation models today. Rather, the risks posed by open, biologically capable dual-use foundation models are grave enough for the US government to prepare such policy options so they can be deployed when and if they become relevant.”).

217 Web’s inventor gets a knighthood. (December 31, 2003). BBC News.

218 Cai, Z. et al. (2023). Associations Between Problematic Internet Use and Mental Health Outcomes of Students: A Meta-analytic Review. Quantitative Review 8(45); Katz, L. (June 18, 2020). How tech and social media are making us feel lonelier than ever. CNET.

219 Maslej, N. et al. (April 2024). Artificial Intelligence Index Report 2024. [NEED URL] at 14 (“Between 2010 and 2022, the total number of AI publications nearly tripled, rising from approximately 88,000 in 2010 to more than 240,000 in 2022.”).

220 Maslej, N. et al. (April 2024). Artificial Intelligence Index Report 2024. [NEED URL] at 46 (“Until 2014, academia led in the release of machine learning models. Since then, industry has taken the lead.”).

221 Marchant, G. (2020). Governance of Emerging Technologies as a Wicked Problem. Vanderbilt Law Review 73(6); Holtel, S. (2016). Artificial Intelligence Creates a Wicked Problem for the Enterprise. Procedia Computer Science 99, 171-180; see also generally Head, B. & Alford, J. (2015). Wicked Problems: Implications for Public Policy and Management. Administration & Society 47(6); Walker, W., Marchau, V. & Swanson, D. (2010). Addressing deep uncertainty using adaptive policies: Introduction to section 2. Technological Forecasting and Social Change 77(6), 917-923.

222 The National Artificial Intelligence Advisory Committee (NAIAC) (2023). RECOMMENDATIONS: Generative AI Away from the Frontier; Kwakkel, J., Walker, W., & Haasnoot, M. (2016). Coping with the Wickedness of Public Policy Problems: Approaches for Decision Making under Deep Uncertainty. Journal of Water Resources Planning and Management 142(3).

223 Holtel, S. (2016). Artificial Intelligence Creates a Wicked Problem for the Enterprise. Procedia Computer Science 99, 171-180; Berente, N., Kormylo, C., & Rosenkranz, C. (2024). Test-Driven Ethics for Machine Learning. Communications of the ACM 67(5), 45-49.

224 Liu, H. & Maas, M. (2021). ‘Solving for X?’ Towards a Problem-Finding Framework to Ground Long-Term Governance Strategies for Artificial Intelligence. Futures 126(22).

225 Maier, H.R. et al. (2016). An uncertain future, deep uncertainty, scenarios, robustness and adaptation: How do they fit together? Environmental Modelling & Software 81, 154-164; RAND. Robust Decision Making. Water Planning for the Uncertain Future.

226 Haasnoot, M., Kwakkel, J. H., Walker, W. E., & ter Maat, J. (2013). Dynamic adaptive policy pathways: A method for crafting robust decisions for a deeply uncertain world. Global Environmental Change, 23(2), 485–498.