AI and Political Communication
The goal of our project is to improve the current understanding of how AI and other autonomous systems reshape political communication. We interpret both AI and political communication in a rather broad sense. The former may include everything from political social media bots to the recommender systems that determine what you see on platforms like YouTube. The latter includes any form of communicative practice that is political in nature: not just that of established political parties, but also more informal ways of exercising power in the information society. This breadth lends itself to a variety of sub-projects. However, our main focus is currently on emerging forms of visual communication, such as deepfakes, which offer novel opportunities for political satire but may also reinforce existing power asymmetries and potentially even pose a threat to democratic discourse as such. Our approach is both empirical and theoretical, drawing on quantitative content analysis of deepfakes and other visual digital artefacts, as well as on resources from information ethics and classical political theory.
Our hope is that, by the end of the project, we will have a much deeper understanding, both empirically and theoretically, of the ways in which political communication is automated and how it is shaped by the technologies that mediate it. An example illustrates the point. Both academic and popular discourse on political bots (an instance of AI in political communication) is largely predicated on a clear-cut distinction between “authentic” and “inauthentic” political actors (i.e., bots). Yet we increasingly make observations that do not fall neatly into either category. Consider the 2017 UK election, when a group of Labour Party activists called on young left-leaning users of the dating app Tinder to donate their accounts to the activists during the final week leading up to the vote. The team would then connect each donated account to a chatbot that swiped right (indicating attraction) on every other user it encountered. When it matched with others, the bot would ask for their political affiliation and begin persuading those endorsing Conservative candidates to reconsider. The bot is believed to have sent around 30,000–40,000 messages in the Dudley North battleground constituency alone. Though a remarkable case, this is likely a prototype of future applications of AI in political communication. Is this authentic communication? The users made an authentic choice to donate their accounts, but the texts sent from those accounts were not literally sent by the users themselves. We hope our project will help assess the extent to which similar initiatives are likely elsewhere. How willing are people to donate their online personas, and to what ends? Moreover, how are we to understand and measure political participation if political content posted by real and verifiable social media accounts cannot be trusted? And how is this situation any different from other forms of political participation?
These are the types of questions we hope to be able to answer by the end of the project.
Principal Investigator(s)

Carl Öhman
Assistant Professor, Uppsala University