
Geopolitical Considerations


This section highlights the marginal risks and benefits related to the intersection of open foundation models and geopolitics. The availability of model weights could allow countries of concern to develop more robust advanced AI ecosystems, which, given the dual-use nature of foundation models, could pose risks to national security and public safety, and undercut the aims of U.S. chip controls. These countries could take U.S.-developed models and use them to enhance, perhaps substantially, their military and intelligence capabilities. There are also risks that may arise from the imposition of international standards that are not in line with U.S. values and commercial interests. On the benefits side, encouraging the availability of model weights could bolster cooperation with allies and deepen new relationships with developing partners.

Geopolitical Risks of Widely Available Model Weights

Implications for Global Digital Ecosystem

The wide availability of, or restrictions on, U.S.-origin model weights could have unpredictable consequences for the global digital ecosystem, including by prompting some states to restrict open-source systems and causing further fragmentation of the Internet based on levels of AI openness (i.e., a “splinter-net” scenario).

More restrictive jurisdictions may also be better placed to set the terms of AI regulation and standards more broadly, even if this comes at the cost of innovation. States deciding how to regulate AI generally, including open models, may naturally imitate or adapt existing regulatory approaches. Regulatory requirements would, in turn, breed demand for standards. In this scenario, the United States would pay a penalty in its ability to shape international AI standards, even if it remained at a net advantage in standard setting.

Inconsistencies in approaches to model openness may also divide the Internet into digital silos, causing a “splinter-net” scenario. If one state prohibits open model weights but others, such as the United States, do not, the restrictive nation must somehow prevent its citizens from accessing models published elsewhere. Since developers usually publish open model weights online, countries that implement stricter measures will have to block certain websites, because some countries’ websites would host open models while others would not.

Accelerate Dual-Use AI Innovation in Countries of Concern

Countries of concern can leverage U.S. model weights for their own AI research, development, and innovation, which, given the dual-use nature of foundation models, may introduce risk to U.S. national security and public safety.

GENERAL AI INNOVATION

Many U.S. open foundation models are more advanced than most closed models developed in other nations, including countries of concern.82 Developers in these countries may choose to use U.S. open models instead of expending the resources or time necessary to build their own.

This development bolsters dual-use AI innovation in these countries and could help them create advanced technological ecosystems that they (1) may not have been able to build otherwise and (2) do not have to expend significant resources or time to create. Individual companies can allocate the funds they would have spent on up-front training to downstream product enhancements, improved dissemination tactics (e.g., marketing), and new products. Governments that would have subsidized AI training could also apply saved resources to other initiatives, such as talent cultivation, other technologies, and other sectors that provide economic and security benefits.

Actors in these nations can also glean insights from open models about model architecture and training techniques, bolstering their long-term dual-use AI innovation.83 Leading labs have already developed models based on Llama 2’s architecture and training process with similar capabilities to Llama 2.84

This dual-use foundation model R&D could then accelerate global competition to build powerful AI-enabled national security applications and undercut the aim of U.S. export controls on semiconductors and related materials.

DUAL-USE IMPLICATIONS

Countries of concern could incorporate open foundation models into their military and intelligence planning, capabilities, and deployments. Because models with widely available weights are lower cost and can operate in network-constrained environments, it is easier for countries of concern to integrate dual-use foundation models into military and intelligence applications. Actors could experiment with foundation models to advance R&D for myriad military and intelligence applications,85 including signal detection, target recognition, data processing, strategic decision making, combat simulation, transportation, signal jamming, weapon coordination systems, and drone swarms. Open models could potentially further these research initiatives, allowing foreign actors to innovate on U.S. models and discover crucial technical knowledge for building dual-use models.

Some foreign actors are already experimenting with this type of AI military and intelligence research and application. Since actors can run inference on open model weights locally, U.S. developers cannot determine when and how countries of concern are using their model weights to research military and intelligence innovations. As open models improve over time, countries of concern could potentially use them to create even more dangerous systems, such as adaptive agent systems and coordinated weapon systems. Making model weights widely available may also allow countries of concern to coordinate their AI development, letting them build specialized models suited to their needs and diversify spending. For example, a country of concern could invest in a specialized model to assist in creating CBRN weapons, or create models focused on misinformation and disinformation.

The U.S. government has already recognized the national security threat of countries of concern gaining access to powerful, dual-use leading models and worked to mitigate it through actions such as restrictions on exports of advanced computing chips and semiconductor manufacturing equipment to certain nations. These controls attempt to limit adversaries’ access to computing power, which slows their development of advanced AI models related to military applications. Widely available model weights could allow strategic countries of concern to circumvent U.S. policy instruments designed to hinder their computational capabilities: with model weights widely available, they would not need to make the same investments in semiconductors and other hardware to produce similarly high-performing AI models.86 Thus, the open release of some foundation models may speed up dual-use AI innovation and military application of AI by countries of concern, directly propelling the industry and national security capabilities that export controls and other U.S. actions aim to restrict.

The open release of some specific model weights could undermine U.S. technological leadership by distributing access to cutting-edge models—and the associated technological power—to foreign adversaries.

Some have argued that models accessible only through APIs pose fewer national security risks.87 OpenAI commented that it had already taken steps to disrupt certain malicious nation-state affiliated actors who were using ChatGPT for cyber operations, which was enabled in part by its use of a closed model distributed via an API.88 Notably, the fact that these operations were conducted using closed models suggests that adversaries might use closed models even when open foundation models are available.

FURTHER RESEARCH

While open model weights could bolster AI military innovations in an untraceable, unregulated manner, the marginal risk open models pose to overall military and intelligence R&D, as well as the degree to which U.S. open models support country-of-concern AI development for military and civil technologies, is unknown. The trade-off between country-of-concern usage of open models and their own domestic national security research also remains uncertain and depends on the country.

Geopolitical Benefits of Widely Available Model Weights

Increase Global Use of U.S. Open Models

Widely available model weights could increase global adoption of U.S.-origin models, thereby promoting the development of a global technology ecosystem around U.S. open models rather than competitors’ open models. A burgeoning foundation model ecosystem does not appear overnight; creating the necessary infrastructure (such as datacenters and software ecosystems), talent cultivation systems (such as AI education and training), and even a creative, innovative start-up culture requires significant time, funding, and consideration. If the option to use open U.S. models exists, foreign states and non-state actors may not want to invest the time, money, and energy necessary to create their own foundation model markets. This incentive structure could increase foreign adoption of U.S.-origin open models, which are currently less powerful than proprietary models. Additionally, widespread use of U.S. open models would promote the United States’ ability to set global norms for AI, bolstering its ability to foster global AI tools that promote the enjoyment of human rights.

Foster Positive Relationships across the Globe

Nurturing the open model ecosystem could foster positive relationships between the United States, its allies, and other countries that benefit from an open model ecosystem, such as developing partners, which may lead to enhanced international cooperation as well as the promotion of human rights through technological cooperation. Countries that have more open AI model ecosystems can more easily participate in international technological cooperation and crucial research exchanges while, in turn, supporting their own open AI model ecosystems.

The safe, secure, and trustworthy deployment of AI requires coordination with allies and partners. Many U.S. allies have expressed an interest in maintaining an open AI ecosystem. Supporting an AI ecosystem that includes foundation models with widely available model weights could also bolster existing U.S. alliances, as well as potentially enhance partnerships that continue to mature. For example, economies that actively support their open model ecosystems include the Republic of Korea, Taiwan, the Philippines, France, and Poland. These like-minded partners are critical in the current geopolitical landscape, and further cooperation with them in the realm of foundation models with widely available model weights would only serve to deepen ties and reinforce these mutually beneficial relationships.

By building relationships with other like-minded partners, U.S. allies can promote models with widely available model weights to third countries. This will become increasingly important as AI is adopted in global majority nations. The more the United States and its like-minded partners coordinate on the message that U.S. models with widely available model weights can spur innovation and promote competition, the more developing countries will adopt U.S. AI models. Coordination with like-minded partners will also be important for managing the risks that arise from AI generally.89 Moreover, if developing partners have access to U.S. open models, they may use open models from countries of concern less, if at all.

Promote Democratic Values in the Global AI Ecosystem

U.S. open model weights could steer the technological frontier toward AI that aligns with democratic values. If the United States spearheads frontier AI development, and other nations build models on the foundation of U.S. open models, it may be more likely that cutting-edge AI technologies, along with the associated training techniques, safety and security protections, and deployment strategies, are built to uphold democratic values. However, it is not a given that AI applications built on open model weights will preserve democratic values.

 


 


82 Nestor Maslej et al. (2024, April). Artificial Intelligence Index Report 2024.

83 Markus Anderljung et al. (2023, July 6). Frontier AI Regulation: Managing Emerging Risks to Public Safety. arXiv.

84 Center for a New American Security Comment at 9 (“[L]eading Chinese labs have applied the architecture and training process of Meta’s Llama models to train their own models with similar levels of performance.”).

85 István Szabadföldi. (2021). Artificial Intelligence in Military Application – Opportunities and Challenges. Land Forces Academy Review, 26, 157–165. doi:10.2478/raft-2021-0022.

86 Paul Mozur et al. (2024, February 21). China’s Rush to Dominate A.I. Comes With a Twist: It Depends on U.S. Technology. The New York Times.

87 See Transformative Futures Institute comment at 2 (“To maximize the benefit of foundation models, developers can offer structured access to models (e.g., through APIs) to a wide range of users while maintaining the capability to prevent harmful misuse, block or filter dangerous content, and conduct continual safety evaluations.”).

88 See OpenAI Comment at 2 (“For example, we recently partnered with Microsoft to detect, study, and disrupt the operations of a number of nation-state cyber threat actors who were abusing our GPT-3.5-Turbo and GPT-4 models to assist in cyber-offensive operations. Disrupting these threat actors would not have been possible if the weights of these at-the-time leading models had been released widely, as the same cyber threat actors could have hosted the model on their own hardware, never interacting with the original developer.”).

89 See RAND comment at 4 (“Coordination with foreign partners will be critical to manage risks from open foundation models. Attempts to control any open foundation models should consider which foreign actors could re-create equally capable models, as those operating in”); see Connected Health Initiative at 4 (“Support international harmonization. We urge NTIA to maintain a priority for supporting risk-based approaches to AI governance in markets abroad and through bilateral and multilateral agreements.”).