
AI and Politics Roundtable: Epistemic and Ethical Articulations Remain a Key Challenge

Published: November 1, 2022

Pasko Kisic Merino, WASP-HS PhD student at the Department of Political Science, Karlstad University, shares his experience from the WASP-HS Community Reference Meeting AI and Media and the roundtable AI and Politics.

On the 13th of October, a group of experts, researchers, and PhD candidates gathered at Umeå University to discuss the intractable interstice between politics and AI. The challenges in studying this space reside not only in the topical peculiarities of the fields – superficially preoccupied with traditional approaches to the political and the technological – but also in the epistemic, ethical, and ontic dimensions of their connection, and its real-life consequences. Simon Lindgren’s opening for the highly multidisciplinary panel – including scholars from political science, sociology, mathematics, and computer science, as well as experts in game design and industrial AI development – summarised the key technopolitical concerns: “How can we recombine critical analysis with a more industrial approach to the study of AI?”, “Should this recombination occur in the first place?”, and “How can research approach these issues critically without outright condemning AI?”. While AI is analysed from several perspectives in the field of political studies – including automation and labour replacement, methodological development, public systems and services, evaluation and standardisation systems, and electoral campaigns – our discussion focused largely on one of the most salient manifestations of these technologies: social media. On these platforms, AI technologies are used in several ways, from simplifying user interfaces to – most critically – deploying algorithms that curate feeds, track engagement, and artificially produce actants (e.g., bots) and content that ultimately allow for the manipulation of subjects and collectives.

The challenge presented by Simon involves not only the jagged “distance” between critical scholars and AI developers, but also the painful limitations of the fields that traverse said interstice – for instance, computer science, law, communications, and ethical AI. The extensive work in these fields has highlighted (more than solved) key questions about the fickle location of the relationship between political phenomena (for instance, the 2021 US Capitol storming) and the development of AI (e.g., the Facebook, Twitter, and Gab feeds that maximised users’ engagement with and emotional consumption of far-right conspiratorial and insurrectionist content). In the context of a nebulous “front” of techno-pessimists on the side of critical studies (including yours truly), Simon posed some questions that encapsulated and grounded the aforementioned epistemic, ethical, and ontic challenges: “Are AI developers supposed to do nothing, then? Should AI development simply stop given its effects on the sphere of politics?” The panel generally agreed that, despite unanimous concern about the development of AI and its relationship with politics (locating most participants on the “critical” side of the spectrum), AI and its development should not simply be stopped or avoided. For better or worse, just like your conspiracy-theorist uncle at Christmas, these technologies are constitutive of our larger and increasingly intermingled patchwork imaginaries – our techno-body politic.

To animate a more thorough discussion, Simon proposed that the relationship between politics and AI, as real-world phenomena, can be understood in two ways. First, we can see politics driving AI, which triggers broader questions about how, and which, discursive articulations frame technological development and our psychosocial interpretations of it. As I will develop in more detail later, an example of “politics as driver” is how Silicon Valley libertarian ideas (and ideals) of free speech, free markets, and techno-solutionism congregated to frame the development and deployment of AI in social media in ways that favour the distribution and consumption of highly polarising and controversial content online. Second, politics can be understood as within AI, where the latter serves as a vehicle for ideology in a subtle, even internalised fashion. This occurs, for instance, when platforms replicate white supremacist and nationalist content out of “fairness to all sides”, or when minority-targeting AI surveillance technologies are justified as “neutral”. Given the oft-baffling techno-solutionist “ethical safeguards” checklists involved in AI development and deployment, the need for integrating critical studies into AI becomes evident regardless of the specific location and role of politics in both processes. Several comments further problematised and expanded the “driver vs. within” conundrum.

Some of these ideas focused on how the “driver” politics of AI are harmful because they develop in a vacuum of confirmation biases rather than in practice within a framework of democratic principles. A practical case of this “driving” issue is that of Silicon Valley and the Zuckerberg/Dorsey/Andreessen libertarian “tech-bro” culture, now being turbocharged by Elon Musk’s “free speech absolutism” discourse. According to the participants, these cultures generate vacuums that obscure a clear view of the effects of, for example, Silicon Valley politics on society. This issue tends to be reinforced by two dynamics. The first is that of the AI developer as an “innocent bystander” who complies with ethical guidelines yet interacts with them only as “guidelines” or “justification checklists” to be completed to appease scrutinisers. Here we can see how techno-solutionism is not only present in the development of technology per se but also deeply entrenched in its co-constitutive ethical and social dimensions. Second, Asad Sayeed pointed to the still-atomised conceptualisation of AI itself and its many different types and applications. The inherent complexity of AI as an umbrella concept, and the politics surrounding it, points to how AI can be understood not only as a term in constant flux but also, as Simon Lindgren posited, as a type of empty signifier. In discourse theory, an empty signifier is one whose meaning is unfixed, wrestled over, and populated by contesting social actors attempting to establish hegemony (for instance, Silicon Valley executives, GDPR scholars, or the European Data Protection Board each prioritising and rhetorically treating the term “AI” in a particular way). This latter issue was also tackled by Sophie Mainz, who argued that the “driver vs. within” conundrum could greatly benefit from problematising the communicative mismatch between AI developers and regulators within the framework of techno-social governance (a mismatch which, critically, follows from issues of conceptual interpretation). Emelie Karlsson built on this line, arguing that while critical scholars and institutions can achieve some modicum of change or awareness regarding AI technologies in politics, the sheer speed and volume at which AI is developed and implemented in every facet of public life makes it a daunting, multi-layered problem – one that only increases opacity and thus hampers our ability to relate to technologies and to each other in a healthier fashion.

The discussion of politics as a “driver” of, or “within”, AI was further abstracted by Malin Rönnblom. In line with previous criticism of Silicon Valley politics and some predominant scholarship on ethics and AI, she argued that there is a deeper difference in how we address the effects of technology on society if we loosen the yoke of individual-focused ethics in favour of socially focused frameworks. The question of where collective interest and well-being sit within the framework of ethical AI is still sorely missing from much of the (well-intended) literature that tries to apprehend how politics drives or is embedded in AI. Echoing Sophie Mainz, Malin argued that this collective dimension can be thought of from an institutional and regulatory perspective – so a third co-constitutive dimension of politics in AI would be that of democratic regulation. In addition, she argued that we must also locate politics in the realm of beliefs and emotions – especially those linked to notions of “success” and “efficiency” in public policy. Under this framework, the relationship between politics and AI transcends the “driver vs. within” dichotomy: it jousts for an (enjoyable) headspace in our collective policymaking imaginaries while also integrating deeper layers, such as emotional governance.

While it was not possible to incorporate all contributions to this fantastic roundtable in this brief post – including takes on technological neutrality, behavioural influence, AI in popular culture, and relationships with older media – the questions and motivations around the study of AI from a critical (political) perspective should perhaps be discussed with a focus on relational locations of contestation. Answering the main question of the roundtable – how do we merge critical and industrial approaches to the study of AI? – requires us to explore not only the locations in which different actors compete to fix the meaning of “politics in AI”, but also the articulations between the four types of relationships: driving, embeddedness, regulation, and emotions. Approaching these locations and articulations will in turn require substantially improving the existing solutionist-dominated standards for reflecting on the role of multi- and interdisciplinarity in the study of technology and AI in politics and society, while simultaneously keeping (our) techno-pessimism in check.
