
Sweden needs to take a lead in humanistic and social scientific perspectives on AI

Published: March 8, 2020

Amanda Lagerkvist, Senior Lecturer at Uppsala University, is project leader of the WASP-HS project BioMe: Existential Challenges and Ethical Imperatives of Biometric AI in Everyday Lifeworlds.

Debunking myths about the technological sublime is old news for anyone in media and communication studies. Qualified problematization of media constituted the very birthright of our discipline one hundred years ago, and sits at the heart of our training as media scholars.

Indeed, such a knee-jerk reaction – to always politicize, sociologize, historicize and contextualize ‘new’ media technologies and their consequences – is both welcome and much needed in the face of the current hype and hysteria around AI. And yet, moving beyond myth is only the beginning, and it is not enough. We also need to provide perspective and bring visions to the table for a critical and creative engagement with these developments. This is precisely what we in the WASP-HS project I am heading, BioMe: Existential Challenges and Ethical Imperatives of Biometric Artificial Intelligence in Everyday Lifeworlds, hope to deliver.

In my previous role as Wallenberg Academy Fellow, I inaugurated a new international field of study about what it means to be human in the digital age and in a time of increased automation: existential media studies. The field revisits classic existential themes in the timely context of increased digitalization and automation of the human lifeworld, raising concerns about digital-human vulnerability and responsibility (Lagerkvist 2016, ed. 2019). The approach also involves crossing boundaries in research, stepping out of comfort zones and integrating sectors in society. This is not the time to remain idly specialized and secluded in different traditions of science and scholarship. The impending transformations – both material and symbolic – are massive, have global implications and cut across the board. They call on us as sincere scholars and scientists to start talking to each other about what we want AI to be, what we as citizens and as a species need it for, and whether there are, or should be, any no-go zones for automation.

For me, WASP-HS, with its broad and integrative ambition, is therefore in many ways a dream come true. For quite some time I have been arguing that Sweden needs to step up and take a lead in providing humanistic and social scientific perspectives on these developments, and in assessing risks and opportunities by harnessing the rich and critical lenses provided by our domains of expertise. In other words, we need to examine what I have called the ethos, experiences and ethics of living with automation in today’s world (see Hub for Digital Existence) through daring and far-sighted interdisciplinary research approaches that include the humanities and social sciences.

Firmly rooted in media and communication studies – and with the objective of taking existential media studies in new and urgent directions – our new project on the existential consequences of face and voice recognition and sensory data capture aims to do so through interdisciplinary prowess, intersecting expertise in law, information systems, philosophy and artistic research as well as media archeology, digital anthropology, disability studies and queer theory.

Furthermore, I maintain that we cannot outsource, or be content with having delegated, issues of the ethics and consequences of AI to excellent institutes for the future of life and humanity abroad (for example Future of Life). We need a conversation here and now. Consequently, we are now setting up the Human Observatory for Digital Existence, in a continuing partnership with the Sigtuna Foundation. The Observatory will be a place for such interactions across a range of domains of society – from, for example, the business world to the public sector, the civil sector, and the art and museum world. It will also be one of the key sites for our collaborative research agenda and fieldwork. Other stakeholders include our partner Chalmers Artificial Intelligence Research Centre (CHAIR). The cutting-edge research that is so important in this area will benefit from these exchanges with both society and those who build the systems.

Importantly, collaborative research does not preclude the crucial independence of our research in relation to forces that argue that these developments are inevitable. The idea that AI will inexorably evolve, take over and outsmart us, as if by a law of nature, is of course not true. Like any technology, it is built by humans, for someone’s benefit and with particular human goals in sight. But this is also why engineering cannot and must not proceed without its core foundation in the humanities! And why we need people in the humanities and social sciences engaging with these matters.

It is heartening that WASP-HS (the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society) has chosen to emphasize the humanities in its name. This is both crucial and radical. The academic world has rarely seen the humanities centered, spotlighted and invited in this way, especially in STEM-dominated contexts and times (STEM = science, technology, engineering, and mathematics). It is a unique endeavor indeed.

As scholars in the humanities, our task and assignment from society is to always ask uncomfortable questions, to go against the grain, and to provide context and deep insight. This is who we are, and now we’re invited to the party! But today this also means that the humanities must change and become generative (Ekström & Sörlin, 2012): that is, it is incumbent on us to take part, together with AI researchers and before the fact, in formulating the stakes at hand.

Are the concerns of WASP-HS already a field? Certainly. Within the disciplines and fields that specialize in and are specifically conversant with media technological change – such as media and communication studies, human-computer interaction, information systems, internet research, critical data studies, the philosophy of technology, and science and technology studies – humanities and social science approaches to AI have existed for some time, often in intersecting subfields.

What WASP-HS makes possible is for those of us who are committed to this as our main vocation to go deeper still. It also means being given the time needed to make an impact on the international research debate, on how AI builders think, and in turn on how society engages with and legislates around these significant developments. This is what any funding from the Wallenberg Foundations requires and enables: excellence in research, boldness and ingenuity in pursuit, and a strong focus and vision. As Göran Sandberg has put it: “Do good science, nothing else” (The Royal Swedish Academy, WAF Mentorship Program, September 11, 2014).

The ethics debate is urgent, but it is also fraught with complexity and vested interests. It has recently been argued, for instance, that “ethical AI” is a project formulated by corporate interests to ward off governmental intervention through policy (read more at The Intercept: The Invention of “Ethical AI”). By contrast, the emphasis on “responsible AI” (Dignum, 2019) and “human-centric AI” (the European Commission, 2018) is more in line with our project’s intentions. As Charles M. Ess argues (2020), ethics must also stem from and be for “the rest of us” in a demotic sense, and not be solely about principles formulated by philosophers and specialists.

We are all ethical and existential beings, and a future in which we become human with technology is ours to take on, forge and critique – together.
