As militaries around the world seek to gain a strategic edge over their adversaries by integrating artificial intelligence (AI) innovations into their arsenals, how can members of the international community effectively reduce the unforeseen risks of this technological competition? We argue that pursuing confidence-building measures (CBMs), a class of information-sharing and transparency-enhancing arrangements that states began using during the Cold War to enhance strategic stability, could offer one model for managing AI-related risk today. Analyzing the conditions that led to early CBMs, however, suggests that such measures are unlikely to succeed today unless they are adapted to current conditions. This article uses historical analogies to illustrate how, in the absence of combat experience involving novel military technology, it is difficult for states to be certain how these innovations change the implicit rules of warfare. Pursuing international dialogue, in ways that borrow from the Cold War CBM toolkit, may help accelerate learning about the implications of military applications of AI and thereby reduce the risk that states' uncertainty about changes in military technology undermines international security and stability.
Associated Minerva-funded project:
The Disruptive Effects of Autonomy: Ethics, Trust, and Organizational Decision-making