Pedro Sanches, Assistant Professor in Human-Centered Artificial Intelligence, Department of Informatics, Umeå University, shares his speech from the Mapping of the Grand Challenges of AI session during the WASP-HS AI for Humanity and Society Conference 2022.
Artificial Intelligence is the dream of understanding, recognizing, and reproducing human intelligence in artificial bodies. Like a dream, it is subconsciously influenced by our desires and anxieties, and, like a dream, it can deeply impact our daily lives.
This dream is not new, but it keeps reinventing itself. In the early days, AI researchers invented algorithms that plan paths, databases that encode knowledge from human experts, and linguistic rules for how we communicate with one another. Those ideas spawned route planners and driving instructions. They gave us web searches, spam filters, product, film, and music recommendations. The list goes on. Besides those algorithms, hidden in products, invisible and embedded in our daily lives, the AI dream also gave us machines with physical bodies. Industrial manufacturing plants, robots that can vacuum clean and mow lawns, or even some of the toys we give to our children.
Nowadays, many people, especially outside of academia, do not recognize those things as AI. They are mundane. Static. But we mustn’t forget that just a few years ago they were called “universal automation” or “superior intelligence”. This may not be true within academia, but in the general discourse we tend to reserve the term AI for things that are new. Things that keep the dream alive. And new things are made all the time. We made deep neural networks that can recognize and reproduce patterns in data (as long as there’s a lot of it) with a flexibility we’ve never seen before. In awe of these, many started looking at which human tasks are about recognizing and reproducing patterns. Turns out, a lot of them. Managing, diagnosing, manufacturing, even creating, writing, and designing: all of these involve a great deal of seeing patterns and reproducing them in clever ways.
Of course, dreams can also be nightmares.
Depending on your perspective, it is perfectly legitimate to be concerned about being replaced, about having your everyday life even more affected by surveillance, or about bureaucracies becoming even more obscure, or, as we usually say when there’s AI involved, black-boxed.
Depending on your perspective, you may also consider those nightmares to be overinflated. Some in Computer Science may scoff at the idea that robots will take over jobs, much less the world, when they struggle with some of the simplest tasks outside of perfectly specified parameters.
Depending on your perspective, you might also say that true intelligence is far from data reproducing itself through clever mathematics.
Different perspectives can all be true.
Perspective is exactly what is missing from the general discourse on AI. Its absence is a crucial oversimplification. Because even if the dream is for AI to serve humanity, different humans have different levels of access to the AI dream. The humans in “human-centered AI”: who are they, really? Are they only the humans interacting with the decision-support interface, who need a transparent system that can be trusted? Or should we also care for the humans in the mechanical turk labeling the data, those who care for the servers, those who appear in the datasets, or those who are invisible?
In the general discourse, we’re missing the nuanced understanding that technologies, the social world, and dreams co-shape one another, and that not everyone is able to participate in the AI dream.
It is of course a matter of power, figuratively and literally. The power to aggregate large amounts of data, at a scale that has not been seen before. The power to mobilize people to design and implement technologies that serve specific aims. The power to decide what counts as data, and what remains invisible to the system. And the electrical power that all of this requires.
As academics, it is our job to permeate the public discourse with descriptions of AI that are richly embedded within the network of actors that dream them and the material and human environments that influence them. We also need new stories and technologies that mobilize alternative material, cultural, and relational dimensions, to allow a wider variety of perspectives and livelihoods to seep into the AI dream.
In WASP-HS, many of us are academics working across disciplines, and we often talk about different things when we talk about AI. The WASP-HS network on “Somatic and existential ethical engagements with AI” served as a forum for some of these discussions, and a place to start thinking of ways to communicate knowledge across academic and societal language barriers. We figured out, for example, that critical storytelling and future-making should be part of our toolbox as researchers if we want impact beyond our own studies. We should engage more with the notion of AI in the general discourse, with how it gets portrayed in media, but also with how AI is conceptualized across disciplines.
In this blog post, I can only speak from my own perspective, but to move forward, we need to engage multiple perspectives and allow their stories to be told. The goal should not be to find a consensus on what AI is and what kind of future it should bring, but rather to celebrate dissensus, and hold space for multiple, possibly conflicting, futures to co-exist.
A lot of the things that we call AI today: what will become of them in just a few years, when the dust settles? What will they be called when they disappear, integrated into our everyday lives? A spreadsheet automation button, a shape-changing garment, a prompt-based image search? Probably not. Probably something else entirely. But it is important to notice how they get adopted and domesticated, because for them to be mundane in a future world, everything around them will have changed.