Randomized Controlled Trials to Examine the Impact of Generative AI
Principal Investigator: Alexander Volfovsky, Duke University
Co-Principal Investigators: Christopher Bail, Duke University, and D. Sunshine Hillygus, Duke University
Years of Award: 2024-2027
Managing Service Agency: Army Research Office
Project Description:
Generative Artificial Intelligence (AI) is rapidly changing how we live, work, and interact with each other. Yet this powerful new technology also presents critical risks to U.S. national security. Experts warn that adversarial nation-states or non-state actors will leverage generative AI to create fake images, videos, or texts designed to undermine democracy. We propose a four-year project that will conduct a series of randomized controlled trials designed to understand the potential impact of social media influence campaigns powered by this new technology. We focus on the following questions:
1) Can social media users distinguish texts, images, and videos created by generative AI from those created by real human users? Which users are best able to do so, and under what conditions?
2) How does the presence of bots or users employing generative AI on social media affect trust in other users and in the information they share? Is information created by generative AI ever preferred to information created by humans?
3) What is the impact of generative AI on social cohesion (i.e., shared values and identities)? Can these new tools counter malicious influence campaigns by strengthening social bonds or by producing counter-messaging designed to discredit such campaigns?
Our project thus addresses fundamental questions about the social impact of technological change that are relevant to national security (MINERVA Topic Area #4), drawing on insights from multiple social science disciplines: sociology, communications, social psychology, and political science.
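To make the experimental approach concrete, the sketch below shows one way a single trial could randomly assign participants to conditions such as human-authored versus AI-generated content. The condition names, the assignment helper, and the fixed seed are illustrative assumptions for exposition, not the project's actual protocol.

```python
import random

# Illustrative sketch only: hypothetical experimental arms for a trial comparing
# reactions to human-authored vs. AI-generated social media content.
CONDITIONS = ["human_authored", "ai_generated", "ai_generated_labeled"]

def assign_conditions(participant_ids, conditions=CONDITIONS, seed=42):
    """Randomly assign each participant to one experimental condition.

    Uses simple complete randomization; a real trial might instead use
    block or stratified randomization to balance covariates across arms.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {pid: rng.choice(conditions) for pid in participant_ids}

if __name__ == "__main__":
    participants = [f"p{i:03d}" for i in range(12)]
    for pid, condition in assign_conditions(participants).items():
        print(pid, condition)
```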