Introduction

The concept of “deterrence” had not been used in international relations before the arrival of nuclear weapons. But if the concept is broadened to include, for example, disciplining children or preventing criminal behavior, the practice arguably dates back to before recorded history. In this broader sense, however, deterrence has only ever been exercised in the context of “humans deterring other humans.”

Today, this common understanding is changing. Developments in artificial intelligence (AI) and unmanned technologies are leading to the emergence of autonomous weapons systems. The world is now entering an era in which not only major powers, but also small- and medium-sized countries seek to possess these systems, and superiority on the future battlefield will depend on how effectively they are used. At the same time, autonomous “thinking machines” will have a significant impact on deterrence and escalation control, because the subjects of deterrence will no longer be only human beings. It will be necessary to assume that “humans will deter machines,” “machines will deter humans,” and “machines will deter each other.”

This article examines the impact that autonomous systems equipped with AI, the “thinking machines” on the battlefield, will have on deterrence and escalation control. It will first consider the heightened risks of deterrence failure and escalation that come with the introduction of “thinking machines,” and what their appearance on the future battlefield may look like. Then, it will consider a wargame conducted by the RAND Corporation and its implications.

1. The growing risks of deterrence failure and escalation that come with “thinking machines”

Deterrence is the act, process or result of persuading an actor not to take a specific action based on a calculation of cost or risk[1]. For deterrence to function, three basic requirements must be met: the opposing actor must be rational, deterrence must be backed by the capability (the deterrent) necessary to achieve it and intentions must be credibly signaled[2]. The fact that deterrence has often failed in the past indicates that it is quite difficult to meet these conditions.

Deterrence is hard to achieve because there is an inherent difficulty in understanding others. The opposing actor may be rational, but their rationality may differ from one’s own. The success or failure of deterrence depends on differences in strategic culture, the limits of the unitary rational-actor assumption of the state[3] and subtle differences in context. What is considered sufficient deterrent capability and how to credibly signal intentions also vary greatly depending on the opposing actor and the context. That is why deterrence needs to be “tailored” to fit the other actor and the specific context[4].

But deterrence becomes more difficult to achieve as it becomes more “tailored,” because it depends more heavily on understanding others. What do adversaries value most, what capabilities do they fear, and what signals are needed to effectively communicate intentions? These questions are not easy to answer without years of deep familiarity. If the answers are unknown, both parties risk crises and escalation arising from misperception and miscalculation. Understanding others is difficult, even between individual humans.

What will happen when “thinking machines” enter the battlefield? The defining characteristic of autonomous systems is that the machine thinks and acts on its own. The signals of deterrence are generally subtle and context-specific, and “thinking machines” may not be able to understand them. This may lead to unpredictable consequences for both parties. The issue is not just that machines may fail to understand humans; it may also be difficult for humans to understand the decisions machines reach through deep learning. Humans may not understand the operational principles of their own machines, let alone the machines of others. In other words, the relatively simple structure of “humans deterring humans” that has existed until now will become significantly more complex.

In this context, how to keep “humans in the loop”[5] of machines’ final operating decisions, rather than leaving those decisions to the machines, becomes increasingly important. However, humans may not be able to fully comprehend and control machines, because wars involving autonomous systems are fought at machine speed and human reaction times cannot keep up. Even when a machine does not need to decide quickly, if it cannot explain the basis for its decisions in a way humans can understand, the humans involved can still only follow the machine’s choices. How machines can “explain” the basis of their decisions to humans is therefore a critical issue in AI operations.

As a result of all of this, the introduction of “thinking machines” may increase the risk of unintended crises and escalation on the future battlefield. Deterrence, which has been difficult to achieve even in human interactions, may become increasingly difficult as “thinking machines” become more common.

Furthermore, it is inevitable that “thinking machines” will come to the battlefield. In future wars, the side that deploys a large number of autonomous systems and completes its decision-making cycle (the so-called “OODA loop”[6]) faster than its rival (i.e., at machine speed) will have an overwhelming advantage. It will be extremely difficult to stop countries from making use of these technologies.
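The advantage of a faster decision cycle can be made concrete with a purely illustrative sketch: the cycle times and time window below are assumed values chosen only to make the contrast visible, not figures from the article or from Boyd.

```python
# Purely illustrative: over the same engagement window, the side with the
# shorter OODA cycle completes far more observe-orient-decide-act iterations.
# All time values are assumptions made for this sketch.
def completed_cycles(window_ms: int, cycle_ms: int) -> int:
    """Number of full OODA cycles completed within the time window."""
    return window_ms // cycle_ms

WINDOW_MS = 60_000        # one minute of engagement (illustrative)
HUMAN_CYCLE_MS = 10_000   # assumed human-led decision cycle: 10 seconds
MACHINE_CYCLE_MS = 100    # assumed machine-speed decision cycle: 0.1 seconds

print(completed_cycles(WINDOW_MS, HUMAN_CYCLE_MS))    # -> 6
print(completed_cycles(WINDOW_MS, MACHINE_CYCLE_MS))  # -> 600
```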

The issue of regulating Lethal Autonomous Weapons Systems (LAWS) is currently being discussed within the framework of the Convention on Certain Conventional Weapons (CCW), but LAWS are only a small part of the “thinking machines” that will be deployed to the battlefield in the future[7], and it is not clear at this point whether an effective framework for regulation can be established. Furthermore, because autonomous systems often employ civilian technology, and many are software or algorithms without physical form, arms control and non-proliferation efforts will be much more difficult than in the case of nuclear weapons.

We are therefore entering an era in which autonomous systems will be widely used in battle, and their impact on deterrence and escalation control must be addressed.

2. The form of “thinking machines” on the battlefield

What is a “thinking machine,” and what specifically is meant by their introduction to the battlefield?
Put simply, a “thinking machine” is an autonomous system equipped with AI. There are two types of such systems. The first is the “autonomous at rest” system, which has no physical body and instead exists as software. These systems primarily serve in information aggregation and decision-making support, including planning and providing expert advice. The second type is the “autonomous in motion” system, which has a physical body, such as a drone. Many imagine that “thinking machines” on the battlefield will be only these physical devices, but in reality both types will be introduced.

What exactly will the battlefield introduction of “thinking machines” look like? This may be easier to visualize if we picture two axes: one that distinguishes “autonomy at rest” from “autonomy in motion,” and one that charts whether the decision cycle runs at machine speed or slowly enough to allow cooperation with humans. These two axes create four quadrants[8].

An autonomous system that is “autonomous at rest” and whose decision-making cycle is “machine speed” is best represented by an automatic program to counteract cyber or electronic warfare. In cyber and electronic warfare, the attacker will employ autonomous systems that function at machine speeds, meaning the defender too must inevitably let machines make decisions and react autonomously. In such a system, human involvement in decision-making would create critical delays, so humans would be “out of the loop.” This is unavoidable if defense is to be successful, but it can also create situations that increase the risk of crises and escalation.

Next, there are systems that are “autonomous at rest” but whose decision-making cycles are slow enough to allow cooperation with humans. These are typified by battlefield decision support systems. Examples include command and control systems that fuse information and data to present an overall picture of the battlefield and support command decision-making. Another example, as noted by former U.S. Deputy Secretary of Defense Robert O. Work, is the “Centaur” (i.e., a system that enables cooperation between humans and machines)[9]. The F-35 stealth fighter’s system, which displays information gathered by sensors outside the aircraft in an integrated, easy-to-understand way on the visor of the pilot’s helmet, is an example of this. These systems will always have humans “in the loop” of decisions, meaning the risk of escalation is limited. However, the limitation remains that humans may be “in the loop” without understanding the basis of an AI’s decisions.

There are also autonomous systems that are “autonomous in motion” and whose decision-making cycle runs at machine speed. The best example is perhaps the Active Protection System (APS) for ground vehicles, which automatically attempts to intercept incoming anti-tank rockets. Missile defense in general falls into this category, including the Aegis Weapon System (AWS) on board Aegis ships and the interceptors intended to counter hypersonic weapons. These systems carry a potential risk of escalation: because they have parts that “move” and must react at machine speed, humans tend to be “out of the loop” in their decisions. The concern is less prominent in passive systems such as air defense (although there is a risk of misidentification, which could lead to the shooting down of a civilian aircraft, for example). It is possible to put such machines in an automated response mode that leaves them essentially fully autonomous, but it is also possible to keep humans “on the loop,” able to intervene and make decisions if necessary. In such cases, the risk of escalation is relatively low. This may not hold for offensive systems, however, and each system needs to be evaluated case by case.

Finally, there are machines that are “autonomous in motion” but whose decision-making cycles are slow enough to allow for cooperation with humans. This category includes a variety of weapons, such as smart decoys, loitering weapons and strategic swarms[10], as well as autonomous military logistics systems. These systems contain elements that “move” and can be used as offensive weapons, so there is a risk of escalation. The relatively slow decision-making speed, however, makes it possible to keep humans “in the loop,” so the risk of escalation is likely lower than for systems that operate at machine speed. These kinds of autonomous systems will likely become the most common on future battlefields.

In sum, systems with faster decision-making cycles generally pose higher risks to deterrence and escalation control than those with slower cycles. However, this trend is not necessarily universal. Systems with “autonomy in motion” also appear to pose a higher risk of escalation, but this is also not always the case. Cyber or electronic attacks (or counterattacks) can lead to serious escalation depending on their form. Much is still unknown about what impact “thinking machines” will have on the battlefield.
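As a recap of the framework above, the following minimal sketch restates the two axes and the four quadrants in code. The example systems and human roles simply paraphrase the discussion in this section; the class and variable names are my own illustrative choices, not terms from RAND’s report.

```python
# A minimal sketch of the two-axis framework discussed above. The example
# systems and "human role" labels restate this section's text and are
# illustrative only, not a formal taxonomy.
from enum import Enum

class Autonomy(Enum):
    AT_REST = "autonomous at rest"      # software-only systems
    IN_MOTION = "autonomous in motion"  # systems with a physical body

class DecisionSpeed(Enum):
    MACHINE = "machine speed"           # too fast for human intervention
    HUMAN = "human-compatible speed"    # allows cooperation with humans

# (autonomy, decision speed) -> (examples from the text, typical human role)
QUADRANTS = {
    (Autonomy.AT_REST, DecisionSpeed.MACHINE):
        ("automated cyber/electronic-warfare response", "out of the loop"),
    (Autonomy.AT_REST, DecisionSpeed.HUMAN):
        ("battlefield decision support, 'Centaur' human-machine teaming", "in the loop"),
    (Autonomy.IN_MOTION, DecisionSpeed.MACHINE):
        ("Active Protection Systems, missile defense", "on the loop at best"),
    (Autonomy.IN_MOTION, DecisionSpeed.HUMAN):
        ("smart decoys, loitering weapons, swarms, autonomous logistics", "in the loop"),
}

for (autonomy, speed), (examples, human_role) in QUADRANTS.items():
    print(f"{autonomy.value} / {speed.value}: {examples} (humans {human_role})")
```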

3. The impact of “thinking machines” in military conflicts with China

I will now examine the impact of “thinking machines” on deterrence and escalation control. This consideration will focus primarily on conflict scenarios with China in East Asia and examine the results of a wargame conducted by the RAND Corporation in the U.S. It will also consider how the presence of “thinking machines” in such conflicts complicates deterrence and escalation control.

Countries around the world are bringing “thinking machines” to the battlefield, and China in particular is investing in this endeavor. In July 2017, China unveiled its “New Generation Artificial Intelligence Development Plan,” which aims to put the country in a leading position in “all areas of AI theory, technology and application” by 2030[11]. This will naturally include the field of national defense, and China will develop “thinking machines” for the battlefield as part of the goal set at the 19th Party Congress of “basically” completing military modernization by 2035[12]. The U.S. has also been developing AI technology for the defense sector since the release of its “Third Offset Strategy”[13] in November 2014, and the Defense Department laid out its “AI Strategy”[14] in February 2019. At this point, however, neither country has deployed combat-ready “thinking machines” forward at large scale, leaving the future uncertain.

There is a real possibility that both countries will need to consider conflict scenarios in East Asia as they forward-deploy their “thinking machines” at large scale. In such a situation, how should policymakers think about the risks to deterrence and escalation control between China on one side and the U.S. and its regional allies and partners, including Japan, on the other? The RAND Corporation conducted an interesting wargame on precisely this topic[15]. The exercise involved future conflict scenarios in East Asia in which China and the U.S., as well as U.S. allies like Japan and South Korea, introduced large numbers of “thinking machines” to the battlefield. It unfolded as follows:

  • China declared that it would impose its will on the region. In response, the U.S. launched a cyberattack on a Chinese aircraft carrier, disconnecting it from “Laoshi,” China’s centralized AI decision-support system. However, due to the low level of modernization in Chinese aircraft carriers, the impact was limited.
  • Japan and the U.S. held large-scale joint exercises around the Senkaku Islands to demonstrate their defensive capabilities. During these exercises, the missile defense systems on the deployed warships were set to be fully autonomous, but China reacted by simply observing the maneuvers.
  • China asymmetrically countered Japan and the U.S. by implementing a limited blockade of Japan. Its AI directed a single destroyer to blockade only one Japanese port. The U.S. was bewildered and saw no point to the action. The blockade ended in failure.
  • Following the failure of the blockade, China adopted a more hardline position and undertook unrestricted submarine operations. China used AI and ship-based unmanned aerial vehicles (UAV) to prevent Japanese civilian ships from leaving port. The action resulted in an unmanned Japanese cargo ship being sunk.
  • Japan and the U.S. launched anti-submarine warfare (ASW) assets to counter China’s blockade. China responded by shooting down Japan-U.S. unmanned autonomous ASW aircraft. At this point Japan and the U.S. finally decided that China’s behavior was unacceptable, and they sank a manned Chinese submarine. The first loss of human life occurred.
  • In retaliation, China launched a full-scale missile attack on the Japanese and U.S. fleets around the Senkaku Islands. Japan and the U.S. responded with missile defense capabilities, but the fleet suffered some damage and casualties. Japan and the U.S. pulled their fleets out of China’s missile range, and China declared victory.
  • The wargame ended, but the U.S. said that if it had continued, the U.S. would have considered attacking Chinese aircraft carriers in retaliation.

In this wargame, both China and the U.S. used autonomous weapons systems as “thinking machines,” resulting in developments neither side predicted, as well as the eventual collapse of deterrence and escalation control.

Notably, the perception gap between China and the U.S. regarding China’s blockade and unrestricted submarine operations led to escalation in some areas. China conducted a single-ship blockade based on AI instructions, but the U.S. did not understand the implications of this action. Additionally, although China may have regarded its unrestricted submarine operations as low-risk because the targets were unmanned cargo ships and ASW aircraft (meaning there were no human casualties), those operations triggered the activation of Japanese and U.S. ASW assets, leading to escalation and the sinking of a manned Chinese submarine. In retaliation, China launched an all-out missile attack on the Japanese and U.S. fleets, leading the U.S. to consider further retaliation. The lack of clarity in the “thinking machines’” behavioral principles and the low psychological threshold for attacking unmanned assets were factors in these developments.

Wargames depend on the participants, so it is impossible to say with certainty that the above developments will occur between China and the U.S. However, there are important implications here for future conflicts. For example, the RAND Corporation noted that changes in the dynamics of escalation and the costs of miscalculation are expected based on factors such as whether decisions are made primarily by humans or machines, and whether the physical presence on the battlefield is mostly humans or machines.

RAND said that escalation is less likely and the cost of miscalculation is lower when the decision-making system is primarily human and the physical presence on the battlefield is primarily machine. Control is less likely to break down if humans are “in the loop” of operations, and if the physical presence is primarily machines, then losses to enemy attacks are less painful than if human lives were lost. This reduces incentives to retaliate. In the wargame, these features were mainly seen in the actions of Japan and the U.S.

In contrast, if the decision-making system is machine-centric and the physical presence on the battlefield is primarily human (due to slow modernization), the exact opposite is true: escalation becomes extremely likely and the cost of miscalculation increases. Decisions made at machine speed are more prone to unforeseen events because they do not allow for human intervention. Moreover, when the physical presence on the battlefield is primarily human, losses to an enemy attack are greater and the incentive to retaliate is stronger. These characteristics were mainly seen in China’s actions in the wargame.
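As a compact restatement of the two cases RAND describes, the sketch below encodes the relationship between who decides (human or machine) and what is physically present on the battlefield (humans or machines). Only the two combinations discussed above are encoded; the mixed cases are deliberately left undefined rather than guessed at, and the function name and labels are illustrative choices of this sketch.

```python
# A minimal sketch of the escalation dynamic described above. Only the two
# combinations covered in the text are encoded; other cases return None.
from typing import Optional

def escalation_outlook(decision_maker: str, physical_presence: str) -> Optional[str]:
    """Return the rough outlook described in the text, or None if not covered."""
    key = (decision_maker, physical_presence)
    if key == ("human", "machine"):
        # Humans in the loop, losses are materiel rather than lives:
        # control holds more easily and incentives to retaliate are weaker.
        return "escalation less likely, cost of miscalculation lower"
    if key == ("machine", "human"):
        # Machine-speed decisions plus human losses:
        # unforeseen events are more likely and retaliation pressure is stronger.
        return "escalation more likely, cost of miscalculation higher"
    return None  # combination not discussed in the wargame summary

print(escalation_outlook("human", "machine"))   # the Japan-U.S. posture in the game
print(escalation_outlook("machine", "human"))   # the Chinese posture in the game
```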

We can therefore think of the escalation in the wargame between China on one side and Japan and the U.S. on the other as the result of their opposite operational compositions of “thinking machines.” China entrusted many of its battlefield decisions to AI, which confused the U.S. and led to escalatory developments, such as the sinking of civilian ships and the downing of autonomous ASW aircraft. Additionally, because China deployed a manned submarine as its physical presence, the counterattack by Japan and the U.S. led to the loss of life, and China was forced to retaliate further.

Japan and the U.S., in contrast, attempted to limit the possibility of escalation through human decision-making and a machine-centric physical presence, but this resulted in ceding the initiative in the conflict to China. The unavoidable conclusion of RAND’s wargame is that Japan and the U.S. took a less escalatory approach in their battlefield operation of “thinking machines,” but this put them in a passive position relative to China. China, which took the opposite approach, was able to lead in determining when to escalate. According to RAND, this will be important for Japan and the U.S. to consider when operating “thinking machines” and putting humans “in the loop” of final decisions.

Conclusion

The world is approaching an era in which military superiority will depend on how effectively countries use “thinking machines” on the battlefield. Developments in AI and unmanned technologies are driving this transformation. Despite efforts like the attempt to regulate LAWS, this trend seems inevitable. However, introducing “thinking machines” to the battlefield makes it difficult to operate with humans “in the loop” of decision-making, increasing the risk of a breakdown of deterrence or escalation control. The construct of traditional deterrence was “humans deterring other humans,” but even that was difficult to achieve. Now a complex structure of “humans deterring machines,” “machines deterring humans” and “machines deterring each other” is set to emerge. It may become increasingly difficult to maintain deterrence and avoid escalation. As “thinking machines” become more common on the battlefield, we need to improve our understanding of what they mean for deterrence.

(2020/12/22)

Notes

  1. Alexander L. George and Richard Smoke, Deterrence in American Foreign Policy: Theory and Practice, Columbia University Press, 1974, p. 11.
  2. Junichi Fukuda, “‘Complex,’ ‘Full-Spectrum’ and ‘Cross-Domain’ Deterrence,” Air Power Studies, Air Command and Staff College, vol. 5, December 2018, pp. 50-53.
  3. This way of thinking sets aside the interests and preferences of individuals and organizations within a country and assumes that the state is a single rational actor.
  4. “Tailored deterrence” is a concept that came into use in the 2006 U.S. National Security Strategy. The White House, The National Security Strategy of the United States of America, March 2006, p. 43.
  5. In general, there are three ways autonomous systems operate: “human-in-the-loop,” in which humans are the final decision makers; “human-out-of-the-loop,” in which decision-making is left entirely to the machine; and “human-on-the-loop,” in which machines act on their own with human supervision and intervention as necessary. Paul Scharre, Army of None: Autonomous Weapons and the Future of War, Hayakawa Publishing (Japanese), 2019, chapter 3.
  6. The OODA loop is a model of decision-making and action proposed by John R. Boyd, a colonel in the U.S. Air Force. The main idea is that the quicker the cycle of Observe, Orient, Decide and Act, the greater the advantage in battle. John R. Boyd, “Destruction and Creation,” September 1976.
  7. For example, the U.S. Department of Defense defines an Autonomous Weapon System (AWS) as “a weapon system that, once activated, can select and engage a target without further intervention by a human operator.” U.S. Department of Defense, “Directive 3000.09 on Autonomy in Weapon Systems,” November 21, 2012 (Incorporating Change 1, May 8, 2017).
  8. Yuna Huh Wong, et al., Deterrence in the Age of Thinking Machines, RAND Corporation, 2020, p. 78, Figure 8.2.
  9. Sydney J. Freedberg Jr., “Centaur Army: Bob Work, Robotics, & The Third Offset Strategy,” Breaking Defense, November 9, 2015.
  10. Smart decoys are decoys that can make autonomous decisions; loitering weapons are autonomous weapons systems that can autonomously search for and attack targets; and strategic swarms are autonomous weapons systems that can achieve a strategic effect similar to that of weapons of mass destruction through the concentrated operation of massive numbers of drones.
  11. Yoichi Taya, “China Aims to Become an Artificial Intelligence (AI) Powerhouse,” Japan Research Institute, RIM Pacific Business and Industries, vol. 18, no. 69, 2018, pp. 113-117. (Japanese)
  12. Xi Jinping, “X. Staying Committed to the Chinese Path of Building Strong Armed Forces and Fully Advancing the Modernization of National Defense and Military,” Report at the 19th CPC National Congress, China Daily, October 18, 2017.
  13. U.S. Secretary of Defense, “Memorandum: Defense Innovation Initiative,” November 15, 2014.
  14. U.S. Department of Defense, “Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity,” February 12, 2019.
  15. Yuna Huh Wong, et al., Deterrence in the Age of Thinking Machines, pp. 39-58.