The Owl in the Olive Tree | Aug. 28, 2020

Trust, Confidence, and Organizational Decisions about AI Adoption: The Impact for US Defense

By Michael C. Horowitz

Potentially rapid advances in autonomous systems and artificial intelligence (AI) raise important questions about how technology affects human behavior inside and outside the military domain. As ever, the effective adoption and use of emerging technologies depend much more on people and organizations than on the technology itself. The most important AI challenge facing the United States military is not technology development, but avoiding technology hype and aligning human and organizational incentives to ensure the reliable, effective, and safe adoption of algorithms and autonomous systems in areas where they will improve US national security.

Emerging technologies often present challenges and opportunities to rising powers, established powers, and non-state actors alike. The challenges, as well as the opportunities, are magnified when, as in the case of autonomous systems, an emerging technology broadly affects both the military and commercial arenas. Machine autonomy in the middle of the 21st century could be akin to the combustion engine at the dawn of the 20th century: a technology with broad effects on how both companies and militaries operate. Incorporating emerging technologies is organizationally challenging, however, when doing so disrupts status hierarchies and raises questions about how to use new technologies safely and responsibly. Moreover, there is always a risk that hype about a technology will outstrip its actual performance. There are risks both in investing too little in AI and in investing the right amounts of money in the wrong ways. Yet countries and corporations around the world are investing billions and accelerating their development and use of algorithms and autonomous systems. The challenge they all face is appreciating the key issues that affect the adoption of AI at the tactical, operational, and strategic levels.

Trust, Confidence, and AI Adoption
At the operational level, ethics, trust, and perceptions of risk will influence commanders’ use of autonomy in developing campaign plans and leading operations in combat. When are commanders willing to deploy weapon systems with different degrees of autonomy? Are they willing to integrate them into campaign planning? Answering these questions requires understanding the gap between the actual effectiveness of algorithms and perceptions of their effectiveness.

Early in the development of technologies, there is often a period of hype in which perceptions of effectiveness far exceed the actual capabilities of the technology. This is even more likely with general purpose technologies such as electricity or the combustion engine, and AI has certainly been the subject of a great deal of hype. Moreover, as a general purpose technology, AI is not simply a widget. Algorithms and autonomous systems (which may or may not incorporate artificial intelligence, depending on the specific implementation) have many potential uses, from logistics planning that improves the efficiency of military supply chains, to autonomous vehicles, to algorithms designed to assist in identifying and tracking potential targets.

For any given application of AI, as the quality of the algorithm improves, there is the potential for the opposite of technology hype: a trust gap. Trust gaps occur when technologies are more effective than people believe they are. For example, Google Maps first launched in 2005 and became a mobile app in 2008. Initially, some were skeptical, wondering how the system could possibly know the shortcuts people take to get from home to work and the other choices they make behind the wheel. But it turned out that the Google Maps algorithm worked very well, especially in combination with frequent updates and refinement by human programmers.

Over time, as technologies prove themselves useful, trust gaps may disappear, but they can be replaced by something potentially dangerous: overconfidence in technology. Technological overconfidence occurs when people believe so much in a technology that they stop using their own judgment to evaluate the behavior of a machine. One example is people who trust Google Maps so much that they drive their car off a bridge, following guidance from the algorithm.

In the context of AI, research on automation bias demonstrates how the risk of accidents and other mistakes can increase as the capabilities of algorithms improve. This seems like a paradox: why would better algorithms make errors more likely? The reason is the psychological tendency of humans to outsource cognition to an algorithm once it proves itself sufficiently effective. The problem with this cognitive outsourcing is that even if an algorithm is correct 98 or 99 percent of the time, the remaining 1 to 2 percent may be exactly the kinds of situations that cause military incidents, such as the Patriot missile fratricides in 2003, when one US and one British aircraft were shot down by Patriot batteries. The autonomous and near-autonomous features of the Patriot system contributed to those accidents.
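To make the arithmetic behind this concern concrete, the short Python sketch below (not from the original post; the usage count and accuracy figures are hypothetical) estimates how many wrong calls a highly accurate decision aid would still produce if operators accepted every output it generated.

```python
# A minimal sketch, not from the original post: the usage count and accuracy
# figures below are hypothetical, chosen only to illustrate the scale of the
# problem described above.

def expected_errors(num_decisions: int, accuracy: float) -> float:
    """Expected number of incorrect outputs over num_decisions uses of the aid."""
    return num_decisions * (1.0 - accuracy)

# Hypothetical: a target-identification aid consulted 10,000 times in a campaign.
for accuracy in (0.98, 0.99):
    errors = expected_errors(10_000, accuracy)
    print(f"accuracy {accuracy:.0%}: roughly {errors:.0f} wrong calls")
    # If automation bias leads operators to accept every output, each of these
    # wrong calls passes through without a human check.
```

Even at 99 percent accuracy, that is on the order of a hundred wrong calls under these assumed figures, and automation bias means none of them gets a second look.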

Application to Nuclear Weapons
Recognizing the challenges presented by the trust gap and the danger of automation bias can help us understand how applications of autonomous systems and AI could shape the nuclear domain. The potential effects arise in three areas: early warning and command and control; uninhabited platforms carrying nuclear weapons; and the impact of conventional applications of AI on strategic stability.

Automation bias is a particularly important issue for early warning and command and control. In 1983, Soviet military officer Stanislav Petrov received a computer readout suggesting a US nuclear first strike against the Soviet Union. Rather than reporting an incoming US nuclear attack to his superiors, which could have triggered a nuclear war, he reported a computer error. The challenge for the future is twofold. First, there must always be a human in the loop for tasks such as early warning, as a check against algorithmic errors that could risk escalation. Second, the Petrovs of the future will need a basic education in AI if they are expected to process outputs from algorithms and judge whether those outputs are correct. Expanding AI literacy in the defense world is essential to successful AI adoption.
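A back-of-the-envelope calculation illustrates why that human check matters. The sketch below (again, the prior, sensitivity, and false-alarm rate are hypothetical and not drawn from the original post) applies Bayes' rule to an early-warning system: when genuine attacks are extremely rare, even a very accurate system generates alerts that are overwhelmingly false alarms.

```python
# A minimal sketch, not from the original post: the prior, sensitivity, and
# false-alarm rate below are hypothetical, chosen only to show the base-rate
# logic behind treating any single alert with caution.

def prob_attack_given_alert(prior: float, sensitivity: float, false_alarm_rate: float) -> float:
    """Bayes' rule: probability an attack is real, given that the system alerted."""
    p_alert = sensitivity * prior + false_alarm_rate * (1.0 - prior)
    return (sensitivity * prior) / p_alert

# Hypothetical values: a one-in-a-million chance of a genuine attack on a given
# day, a system that detects 99.9% of real attacks, and a 0.1% daily false-alarm rate.
p = prob_attack_given_alert(prior=1e-6, sensitivity=0.999, false_alarm_rate=0.001)
print(f"P(real attack | alert) = {p:.4%}")  # about 0.1%: roughly 1 alert in 1,000 is real
```

Under these assumed figures, roughly one alert in a thousand would correspond to a real attack, which is why judgment like Petrov's remains decisive even when the warning system itself is highly accurate.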

A critical task for the American military is turning rhetoric about the importance of artificial intelligence for the future of war into reality. Success is most likely with a realistic vision of what AI can help the military accomplish, one that separates hype from reality, and with an appropriate emphasis on safety and training to reduce the risk that trust gaps and automation bias will undermine the effective adoption and use of AI.

Associated Reading
Michael C. Horowitz and Lauren Kahn. 2020. The AI Literacy Gap Hobbling American Officialdom. War on the Rocks. January 14.
Michael C. Horowitz, Paul Scharre, and Alex Velez-Green. 2020. A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence.
Michael C. Horowitz and Casey Mahoney. 2018. Artificial Intelligence and the Military: Technology is Only Half the Battle. War on the Rocks. December 25.
Various Authors. 2020. Penn Input to National Security Commission on Artificial Intelligence. Perry World House. February 10.
Various Authors. 2020. Roundtable on Artificial Intelligence and International Security. Texas National Security Review. June 2.

Biography
Michael C. Horowitz is Richard Perry Professor and Director of Perry World House, University of Pennsylvania.

Associated Minerva Project
The Disruptive Effects of Autonomy: Ethics, Trust, and Organizational Decision-making

Supporting Service Agency
Air Force Office of Scientific Research

Nota Bene
Content appearing from Minerva-funded researchers—be it the sharing of their scientific findings or the Owl in the Olive Tree blog posts—does not constitute Department of Defense policy or endorsement by the Department of Defense.