Introduction
South Korea convened an international summit in Seoul on 9 September 2024 with the aim of establishing a framework for the responsible use of artificial intelligence (AI) in military applications. The two-day summit, which drew representatives from more than 90 countries, including the United States and China, highlighted the pressing need to address the ethical and operational concerns surrounding AI in warfare. Although a formal, legally binding agreement was not anticipated, the summit sought to outline guidelines and principles to ensure that military AI technologies are used responsibly.
The Growing Concern Over AI in Military Applications
AI is rapidly transforming many aspects of military operations around the world. It offers unprecedented capabilities in terms of operational efficiency, decision-making, and intelligence gathering. However, these advancements come with significant risks, particularly in autonomous systems, where machines could potentially make life-and-death decisions without human oversight.
South Korean Defense Minister Kim Yong-hyun emphasized these dual aspects of AI in the military in his opening address. Citing Ukraine’s use of AI-enabled drones in the ongoing Russia-Ukraine war, Kim described how AI has provided Ukrainian forces with a technological advantage, particularly in overcoming signal jamming and coordinating unmanned aerial vehicles (UAVs). He compared the technology to “David’s slingshot,” highlighting how AI can be used to level the playing field in asymmetric conflicts.
“AI dramatically enhances the military’s operational capabilities, but it is like a double-edged sword,” Kim warned. “The same technology can cause significant damage if abused.”
The First Summit and Modest Global Progress
The Seoul summit is the second international gathering focused on the military use of AI. The first, held in The Hague, Netherlands, in February 2023, saw the United States, China, and a number of other countries endorse a "call to action" for responsible AI use in military applications. That endorsement, however, carried no legally binding commitments, reflecting the difficulty of securing enforceable international agreements on such a complex and evolving issue.
The call to action from The Hague summit laid the groundwork for ongoing dialogue but stopped short of addressing the full spectrum of ethical and legal challenges posed by AI in warfare. The modest outcomes of that summit have left many stakeholders seeking more concrete measures to prevent the misuse of AI, particularly in the development of lethal autonomous weapons systems (LAWS) and other potentially harmful military applications.
Key Focus Areas: Legal Compliance and Human Oversight
One of the central topics at the Seoul summit was how to ensure that military uses of AI comply with international law. South Korean Foreign Minister Cho Tae-yul noted that participants were addressing the legal ramifications of AI in warfare and exploring ways to prevent autonomous weapons systems from making critical decisions without human input, a crucial issue given that such decisions could have devastating humanitarian consequences.
“Ensuring that autonomous weapons are not making life-and-death decisions without appropriate human oversight is a key area of concern,” Cho said. This aligns with ongoing discussions at the United Nations, particularly among states parties to the Convention on Certain Conventional Weapons (CCW), which entered into force in 1983. The CCW is currently considering restrictions on LAWS to ensure that such technologies comply with international humanitarian law.
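The oversight principle Cho describes is often framed in engineering terms as a human-in-the-loop control gate: automated screening may veto a proposed action on its own, but only a human operator may authorize it. The Python sketch below is purely illustrative and is not drawn from any system discussed at the summit; the names ActionRequest and human_in_the_loop_gate, and the thresholds used, are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    APPROVED = auto()
    REJECTED = auto()


@dataclass
class ActionRequest:
    """A hypothetical high-consequence action proposed by an AI system."""
    action_id: str
    model_confidence: float  # system's confidence in its recommendation (0.0-1.0)
    risk_score: float        # estimated risk of harm to civilians (0.0-1.0)


def human_in_the_loop_gate(request: ActionRequest) -> Decision:
    """Illustrative oversight gate: automation may reject, never approve."""
    # One-way automated screening: a low-confidence or high-risk
    # recommendation is vetoed without ever reaching an operator.
    if request.model_confidence < 0.95 or request.risk_score > 0.1:
        return Decision.REJECTED

    # Final authority rests with a human operator in every case.
    answer = input(f"Authorize action {request.action_id}? [y/N] ")
    return Decision.APPROVED if answer.strip().lower() == "y" else Decision.REJECTED
```

The design point is the asymmetry: no path through the code reaches APPROVED without explicit human input, which is the property negotiators mean when they insist that machines must not make life-and-death decisions alone.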
AI and the Ethics of Autonomous Weapons
The rise of autonomous weapons systems is one of the most contentious issues in AI and military discussions. While AI can improve efficiency and effectiveness in combat, the deployment of fully autonomous weapons raises profound ethical concerns. The central question is whether machines should be allowed to make decisions about life and death, and if so, under what circumstances and with what safeguards.
In response to these concerns, several countries have called for a moratorium on the development of fully autonomous weapons. Human Rights Watch and other advocacy groups have urged the international community to ban such weapons outright, citing the inability of machines to reliably distinguish between combatants and civilians in complex battle environments.
The United States has taken steps to address some of these concerns through its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, launched in 2023. The declaration goes beyond weapons systems to address broader military applications of AI, from logistics to surveillance. As of August 2024, 55 countries had endorsed it, signaling growing international support for standards on military AI use, though still without legally binding commitments.
The Role of the Private Sector in Military AI
A distinctive aspect of the military AI debate is the significant role played by the private sector in developing these technologies. Many of the most advanced AI systems are built by private companies rather than government agencies, which raises additional concerns about accountability and oversight, particularly when commercial innovations are adapted for military use.
The Seoul summit emphasized the importance of ensuring that discussions about military AI include a wide range of stakeholders, including representatives from the private sector and academia. Approximately 2,000 people registered to participate in the summit, representing international organizations, private companies, and academic institutions.
The co-hosting of the summit by the Netherlands, Singapore, Kenya, and the United Kingdom underscores the need for a diverse, multi-stakeholder approach to the issue. By involving private sector actors in these discussions, the summit aimed to address the ethical and operational challenges of military AI comprehensively.
Toward a Blueprint for Responsible AI in the Military
The primary objective of the Seoul summit was to draft a blueprint for responsible AI use in the military, outlining the minimum standards and principles that nations should follow. According to a senior South Korean official, the blueprint was expected to reflect principles laid out by NATO, the United States, and other countries that have developed their own guidelines on AI use in military operations.
However, it remained unclear how many nations would ultimately endorse the blueprint by the conclusion of the summit. Although the document was expected to provide more detailed guidance than the “call to action” from The Hague, it was still unlikely to result in any legally binding commitments.
The difficulty in securing binding agreements on AI in the military highlights the tension between national security interests and the need for international cooperation. Countries like the United States and China, which are at the forefront of AI development, are unlikely to commit to binding agreements that could limit their technological advantages in military contexts. At the same time, smaller nations and advocacy groups are calling for stronger safeguards to prevent the misuse of AI in warfare.
The Ongoing Global Dialogue on Military AI
The Seoul summit is part of a broader, ongoing international dialogue about the role of AI in military applications. In addition to the CCW discussions at the United Nations, there have been numerous other efforts to address the ethical and legal challenges posed by military AI. These include initiatives from NATO, the European Union, and various international non-governmental organizations.
The rapid pace of AI development means that these discussions are more urgent than ever. As AI systems become more sophisticated, the potential for misuse increases, and the need for international guidelines becomes more pressing. While the Seoul summit may not result in binding agreements, it is an important step toward ensuring that the use of AI in the military is governed by ethical principles and legal standards.
Conclusion
The international summit hosted by South Korea in Seoul is a significant milestone in the global effort to establish responsible guidelines for AI use in military applications. While the blueprint for action discussed at the summit is unlikely to carry legally binding force, it reflects growing international recognition of the need to address the risks posed by AI in warfare. As technological development continues to outpace regulation, ongoing dialogue and cooperation among nations, the private sector, and other stakeholders will be crucial to ensuring that AI is used in ways that enhance security without compromising ethical standards or human rights.