Amanda Lagerkvist, Principal Investigator of the WASP-HS project BioMe: Existential Challenges and Ethical Imperatives of Biometric AI in Everyday Lifeworlds, reports from the research project and on the challenges posed by the pandemic.
“BioMe: Existential Challenges and Ethical Imperatives of Biometric Artificial Intelligence in Everyday Lifeworlds” is a project in the Department of Informatics and Media at Uppsala University, headed by Professor Amanda Lagerkvist. It is part of the research environment Uppsala Informatics and Media Hub for Digital Existence. Apart from Amanda Lagerkvist, the team consists of Dr. Jenny Eriksson Lundström (co-PI and researcher), Dr. Matilda Tudor (postdoc and research administrator), Dr. Jacek Smolicki (affiliated postdoc), Maria Rogg (WASP-HS PhD student in Media and Communication Studies), and Professor Emeritus Charles M. Ess (ethics advisor). This fall we are also happy to include Styliani Theocharidou from Maastricht University, who is doing her research internship as an assistant within BioMe.
The twofold purpose of BioMe is:
- To investigate how people live with automation, focusing on both the existential possibilities and the risks of increased digital-human vulnerability, as our embodied existence and everyday lifeworld become ever more entangled with biometrics.
- To identify ethical imperatives (related also to diversity issues) in, for example, smart household assistants (voice recognition), pre-emptive policing and contactless security points (face recognition), health apps, touch screens, and biohacking (sensory data capture).
To do so, the project zooms in on different sites within the human lifeworld where the challenges and imperatives of biometric automation are felt in lived experience, in intimate as well as public domains, and among professionals, activists, artists, and particularly vulnerable groups.
Our project builds on and furthers the field of existential media studies (EMS), initiated and successfully established by PI Amanda Lagerkvist as Wallenberg Academy Fellow (2014-2018). EMS stems from her related research projects, also funded by the Wallenberg Foundations, which resulted in several international conferences and landmark publications. (Find out more about key publications in the field here: https://www.im.uu.se/research/hub-for-digtal-existence/output-and-publications-in-existential-media-studies/ ). In a nutshell, EMS revisits “what it means to be human” in all our diversity and in our common humanity and shared vulnerability in the digital age. It remaps media/digital culture and AI in light of existential philosophy’s key themes and concepts, while upgrading these to our contemporary technologized culture. It reconceives media as an existential terrain that needs to be navigated, and ‘media users’ as coexisting beings: ‘coexisters.’ It thus refigures media as existential media and introduces their suggested key properties and forms (see Lagerkvist, forthcoming 2022).
Like so many others, the BioMe project needed to reorient itself due to the restrictions and the limited possibilities for on-site empirical studies during the pandemic. The planned fieldwork, for example with travelers and the border police at airports (which were emptied of people) or with children at Tekniska Museet (which was closed), and the booked activities with partners and stakeholders had to be postponed. We have instead been able to put all of our energy into the theoretical foundations for our studies of biometric culture. Besides thorough work on the ethical framework of the project, we have had the unique opportunity to establish a solid philosophical platform shared among all team members before entering the field. As will be further detailed below, this has been done through numerous joint literature seminars, distinguished lectures by world-leading experts in the field, discursive case studies, co-authored academic publications including an ethics manifesto, an open seminar series soon to be launched, and active participation in the public debate. Hence, the pandemic reversed the order of the project plan. As will be illustrated, ethics now lays the foundation for the fieldwork ahead in a deepened sense, and we feel very much on track to contribute new empirical insights to this urgent area of knowledge.
The Fundamental Question of Ethics
Ethics is at the heart of BioMe. Not only are we working with sensitive topics and vulnerable groups (such as immigrant minorities and people with disabilities), but the project as such is strongly motivated by ethical imperatives. The fundamental question of ethics has taken us in two directions of prioritized engagement so far.
- Ethical Review
First, the project and all its sub-studies were subject to ethical review. We submitted a highly elaborate application to Etikprövningsmyndigheten in the fall of 2020, without having to compromise on any of the vital areas we had set out to investigate for a truly human-centered AI ethics. The process resulted in 80 pages of detailed study designs, along with numerous consent and information documents and interview guides for each sub-study. The application was approved in December 2020. While this was a time-consuming and challenging process – especially under the given circumstances – the experience ultimately turned out to be very rewarding and productive for the team, consolidating a shared basis for ethical conduct and research ethics among us.
- Continuing work and output on ethics
The Hyper Human Installation
With this as our solid base, we are equipped to start exploring the actual ethics of biometrics. An initial step was to set up an installation for the exhibition Hyper Human https://www.tekniskamuseet.se/en/discover/exhibitions/hyper-human/ at our partner organization Tekniska Museet. We were granted permission to borrow technology from the computer vision firm Visage Technologies https://visagetechnologies.com/ which produces face recognition AI. Through the installation, visitors are confronted with face recognition technology and with the assumptions the computer makes about them in terms of age, gender, and mood. A number of ethical concerns are raised for the visitor: “Do you trust the computer to know your true self?” “How much can your face tell about you?” In the upcoming fieldwork, now that the museum is open again, the installation will serve as a starting point for workshops arranged by the project members. The museum is our field floor, and we are researchers in residence for the coming four years.
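To give a concrete sense of the kind of pipeline behind such an exhibit, the minimal sketch below detects a face in a camera frame and confronts the "visitor" with the machine's guesses. This is an illustrative assumption only, not the actual Visage Technologies integration used at Tekniska Museet: it relies on OpenCV's standard face detector, and `estimate_attributes` is a hypothetical stand-in for a proprietary attribute model.

```python
# Minimal sketch (not the installation's real code) of a biometric prompt:
# detect a face, estimate coarse attributes, and reflect the guess back.
import cv2  # pip install opencv-python

# OpenCV ships a pretrained Haar cascade for frontal face detection.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def estimate_attributes(face_crop):
    """Hypothetical placeholder for a vendor attribute model
    (age, gender, mood). Returns dummy values here."""
    return {"age": 34, "gender": "female", "mood": "neutral"}

def analyse_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        attrs = estimate_attributes(frame[y:y + h, x:x + w])
        # The installation turns the machine's guess into an ethical prompt.
        print(f"The computer thinks you are ~{attrs['age']}, "
              f"{attrs['gender']}, and {attrs['mood']}.")
        print("Do you trust the computer to know your true self?")

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # default webcam
    ok, frame = cap.read()
    if ok:
        analyse_frame(frame)
    cap.release()
```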
The BioMe Ethics Manifesto
Inspired by what we learned through the process of obtaining ethical clearance, we also started working on an ethics manifesto for biometric automation, contemplating our sites and cases of investigation. Here we argue for a human-centered ethics beyond the liberal humanist subject and beyond AI solutionism, identifying the stakes raised by contemporary human/machine entanglements. These stakes pertain to the increased objectification of the embodied, situated, singular human being, and involve, for example, infringements of bodily integrity and the loss of human judgment (the gut feeling) in decision making. We further argue for safeguarding the existential need for obscurity in the face of the current ideology of full transparency. The manifesto is currently under review at an academic journal in the field.
Ethics Outreach: Talks, Workshops, and Seminar Series
Building on this, Amanda Lagerkvist will give a talk at the half-day Workshop on Ethics, AI, Technology and Society at Världskulturmuseet in Gothenburg on October 13, organized by our partner CHAIR at Chalmers. Our dialogue with developers and AI researchers will continue through joint activities in the near future: https://www.varldskulturmuseet.se/kalendarium/program/ethics-ai-technology-and-society/ Another concrete output of our ethical work is the seminar series Undisciplined AI Ethics, which we have organized together with Teresa Cerratto Pargman and her team within the framework of WASP-HS. Four seminars are planned for the fall of 2021, in which all BioMe project members are involved in different ways. The series will run throughout the semester, starting in October: https://umu.zoom.us/meeting/register/u5wuduCprz4vHtAHma7J5dTOvbt_U4n6mBOr
Theoretical Foundation and Discursive Work
Besides consolidating our ethics, we have during the year been heavily involved in theory development. Our key framework, existential media studies, has been enriched by investigations into automation in general and biometrics in particular, resulting in a number of publications.
Publications
Amanda Lagerkvist’s forthcoming monograph Existential Media: A Media Theory of the Limit Situation (New York: Oxford University Press, 2022) integrates findings and insights from work on death and digitalization with new thinking on biometrics, automation, and AI. The book introduces the field of existential media studies by revisiting existential philosophy and assessing its lineage, focusing specifically on a reappreciation of Karl Jaspers’ philosophy and his concept of the limit situation. Theorizing limits for media studies, one key argument of the book is that the present age of deep techno-cultural saturation, and of escalating calamitous and interrelated crises, is in fact a digital limit situation, with profound stakes that heighten existential uncertainty and vulnerability as well as potential fecundity. Existential Media also calls for a different ethos that powerfully challenges the ideals of limitlessness, quantification, and speed present in AI imaginaries, and seeks out alternative intellectual and ethical coordinates, imagining a responsible future with existential media.
Another key publication from the project is Amanda Lagerkvist’s theoretical article “Digital Limit Situations: Anticipatory Media Beyond ‘The New AI Era,’” published in November 2020 in the Journal of Digital Social Research, 2(3), 16–41 (https://doi.org/10.33621/jdsr.v2i3.55), as part of a special issue entitled “Unpacking the Algorithm: Social Science Perspectives.” Here Lagerkvist scrutinizes the ‘inevitability myth’ of AI-driven futures and argues for the necessity of imagining a more inclusive and open future of existential and ecological sustainability for AI design.
Matilda Tudor’s article “Queering Digital Media Spatiality: A Phenomenology of Bodies Being Stopped,” forthcoming in Feminist Media Studies, specifically serves to diversify and bring other margins to the table for future work in EMS. By calling attention to the disorientation experienced by minority groups in the face of stigma, and to how it can be understood as a particular kind of digital thrownness, she argues that perspectives from the margins must show the way for the ethics of automation.
Another discursive case study is our essay, “Sonorous Surfaces, Biased Backends: Existential Stakes of Gendered AI – the Case of the Voice,” written by Amanda Lagerkvist, Matilda Tudor, and Jacek Smolicki. The essay is part of the collection Media Backends: Digital Infrastructures and the Politics of Knowing (contracted with University of Illinois Press and edited by Lisa Parks, Sander de Ridder, and Julia Velkova), and offers a historical and philosophical analysis of the gendering of voice assistants. Questioning what it means to put female voices at the frontends of our automated companions, the essay argues that because our humanized machines can never be fully human, they may serve as privileged entry points into fundamental existential concerns. Through the female machine voice, ideas that belong to an ideological backend about gender, race, and sexuality – which are intimately entwined with imaginaries about what it means to be a real human – are actively reproduced.
Other publications from BioMe – published, forthcoming, under review, and in progress – can be found at our webspace: Uppsala Informatics and Media Hub for Digital Existence.
BioMe Seminars and Guests
From the very outset of the project, we have organized a series of BioMe text seminars, during which we have read and discussed theoretical, philosophical, and empirical work related to existential media studies and biometrics. Seven very rewarding Zoom sessions covering different areas took place over the year, sometimes with invited speakers:
#1 Quantification
#2 The Voice
#3 Critical Data Studies I: Key texts
#4 Limits (DIGMEX Lecture with invited speaker Dr. Mahmoud Keshavarz, UU, presenting his book on the design politics of the passport; a joint venture between the BioMe project, the DIGMEX network, and the IM Research Seminar)
#5 Critical Data Studies II: Diversity
#6 Ethics and Beyond (Workshop with Charles M. Ess, University of Oslo)
#7 What is Biometrics? (Seminar with invited guest speaker Btihaj Ajana, King’s College London)
Media and Outreach
Throughout the year, we have participated in public debates on a number of occasions.
- In the article “Ansiktsigenkänning – vem ser dig?” (“Facial recognition – who sees you?”) in the magazine Forskning och Framsteg (2020), Amanda Lagerkvist commented on the introduction of face recognition technology in Swedish police investigations.
https://fof.se/tidning/2020/9/artikel/ansiktsigenkanning-vem-ser-dig
- Jenny Eriksson Lundström was interviewed for Swedish Radio’s (SR) program Ekonomiekot Extra about Amazon’s establishment in Sweden and its use of biometric AI: https://sverigesradio.se/avsnitt/1574852
- Matilda Tudor participated in a workshop on artificial intelligence (AI) and gender equality arranged by Vinnova. The workshop drew together expertise from academia and industry with the central aim of mapping out how AI innovation can help realize the Swedish Government’s gender equality policy.
- Jenny Eriksson Lundström has been interviewed in Alecta’s magazine on AI in workplaces.
Post-Covid Prospects
As we now enter a new, post-Covid (we hope) phase of the project, activity within all branches of the BioMe tree is thriving. Fieldwork has been initiated within all sub-projects, and we will continue recruiting throughout the fall. We are planning a project activity in March 2022 drawing together our partners at CHAIR, the Human Observatory for Digital Existence (consisting of stakeholders and experts from outside academia), and Tekniska Museet. Parts of the event will be open to the general public.
On May 31–June 1, 2022, we will arrange the third international Digital Existence conference, Digital Existence III: Living with Automation, at Sigtunastiftelsen. Day 1 of the event will be open mainly to faculty, other interested parties, and network members. Attendees will have to register through BioMe (with the caveat that spaces might fill up). Day 2 will be limited to project members, invited speakers, special guests, and our Advisory Board, for focused discussions on the project. (Confirmed keynote speakers include N. Katherine Hayles, Benjamin Peters, Sun-ha Hong, Joanna Zylinska, Sarah Pink, Kelly Gates, and Zach Blas.)
Overall, the pandemic has no doubt brought our concerns as existential media researchers to the forefront of the collective consciousness, as it has exposed our shared (if unequally distributed) vulnerability as human beings and our reliance on technology. Much more work is certainly needed to make sense of what we have all experienced while cut off from near and dear ones, sharing the most intimate and profound moments over screens and wires. Stay tuned: the BioMe team will continue to make headway and to champion and lead the development of a sophisticated existential media analysis of these matters.