“I decided to take up the cause, trying to understand how AI could impact society.” In this blog post, Pontus Strimling at The Institute for Futures Studies writes about how he is thrilled to see researchers sharing the responsibility of trying to understand AI impact.
It was in May 2016 that my journey into AI impact research started. I was visiting my friend Tim in London. I first met Tim when he was a PhD student in neuroscience, but he was now in London, developing AI for DeepMind. I thought it was just a friendly visit, but it soon became apparent that Tim had an ulterior motive for inviting me.
AI is going to change the world
When I arrived, he sat me down and brought forth a stack of papers. He gave them to me one by one, explaining breakthrough after breakthrough that they had achieved. They got AI to identify objects’ material and position, to handle soft materials, to find its way through labyrinths, to beat almost every game that existed; the list seemed endless. He pummeled me with breakthroughs until he was sure that I realized that AI was going to change the world.
Shaken by this insight, I started to look around for social science researchers dedicated to understanding this imminent change. Researchers who had begun to describe the change or, even better, to investigate what could be done to ensure that the possibilities in this technology were harnessed and the pitfalls avoided.
However, to my dismay, I did not find much. The little there was tended to focus on labor market research. When it came to impact on soft values such as social relationships, happiness, and trust, I found close to nothing.
How AI could impact society
Therefore, I decided to take up the cause, to shift my focus from my research on mechanisms of cultural change towards trying to understand how AI could impact society in the coming decade. The start was lonely, and I felt desperate for more social scientists to turn their attention towards this.
But slowly, I started seeing positive signs. Initiatives such as the MIT Computer Science & Artificial Intelligence Lab, DeepMind Ethics and Society, and The Stanford Institute for Human-Centered Artificial Intelligence showed me that many were thinking about these issues and were creating infrastructures where they could be addressed. But these initiatives were still far away from Sweden, where I, due to family reasons, was bound to stay for the foreseeable future. Until WASP-HS.
Thrilled to be a part of it
WASP-HS is, to my knowledge, the largest Swedish research investment ever made within the humanities and social sciences. It gives researchers the means to switch from their previous fields to focus on how AI will affect society. And I am thrilled to be a part of it. While I don’t think WASP-HS alone will be able to build the new field of AI impact research, I think it could form the foundation on which the research area can grow.
I am thrilled to see more researchers sharing the responsibility of trying to understand AI impact, and I believe that if we all draw on our own expertise, we can tackle this together. As for myself, I am drawing on my previous research on cultural change to investigate which AI implementations are likely to spread fast and which will come slowly, if at all. Understanding the spread of AI implementations is crucial to understanding their impact and ethical consequences.
It can guide researchers within and outside WASP-HS to focus on the impact of the more imminent AI, and it can help society choose which AI implementations we want to restrict and which we want to boost, so that together we can ensure a smooth transition into a future filled with only the AI we choose.
PhD position related to the project: Psychological and social processes that influence the diffusion of AI