
Algorithmic Decision-Making in National Security

Neil Shortland, University of Massachusetts Lowell

Machine-Moderated Moral Injury: Exploring the Double-Edged Sword of Algorithmic Decision-Making in National Security

Principal Investigator: Neil Shortland, University of Massachusetts Lowell

Years of award: 2024-2027

Managing Service Agency: Army Research Office

Project Description:
This project investigates how the use of emerging forms of Artificial Intelligence (AI) within national security can create traumatic outcomes for both the national security workforce and society at large. At the forefront of this technological innovation is the desire to use technology to assist with decision-making. Here, emerging technologies offer great promise, diminishing the role of the human in the decision-making cycle with a vision of increasingly autonomous AI, even in morally laden decisions. While the moral and ethical implications of using new technologies in national security have been extensively debated, the moral and ethical implications for the individuals who work with and alongside these technologies are often forgotten. For example, although the first armed use of a Predator drone occurred in 2001, it was not until much later that the distinct trauma experienced by drone pilots (e.g., Saini et al., 2021) and by affected populations (Holz, 2023) was identified and studied. The goal of this project is to prevent unintended strategic consequences of integrating emerging autonomous AI into national security decision-making.