
The Swedish AI Commission’s Strategic Roadmap Dodges Question Zero

Published: December 6, 2024

Last week, the AI Commission in Sweden submitted its report, Färdplan för Sverige, detailing a national roadmap for artificial intelligence (AI) engagement. We were happy to see nuance in bits of the reasoning, as well as some promising suggestions, not least in the area of research and education. Question Zero is, however, noticeably absent in the report.

We currently see three main narratives in the public discourse on AI: first, the speed-blinded perception that technology is the primary force shaping society and culture, overshadowing human agency and social structures; second, the belief that any social area or problem, no matter how complex, can be fixed through technology and innovation; and third, an increasing reliance on tech elites, doom prophets, and utopian evangelists when it comes to navigating the future of our societies. Together, these views reinforce the idea that technological advances, once set in motion, follow an inevitable trajectory beyond the scope of human influence or control.

It is time we zoom out and see that technology is not an uncontrollable force like the weather; it does not happen to us—we create it, shape its course, and have the power to make choices about its direction and impact. We must pose Question Zero: What are we trying to accomplish with AI, and why?

The Risks of an AI-First Mindset

The report echoes an accelerationist fear that Sweden will fall behind if it does not act immediately: the nation, it suggests, risks becoming a mere bystander as global forces wield this technology to reshape Swedish society in ways we cannot control. This dazed prioritization of swift action at the cost of broadened perspectives, and without learning from history, is eerily familiar; societal crises typically give rise to feverish discourses and heightened emotions. Historically, as with the rise of the internet or the introduction of nuclear power, initial responses were often driven by fear and urgency rather than careful deliberation. The potential problem with this kind of de-dramatizing reasoning, however, is that we never know for sure, when we are in the midst of a transformation, what the right level of urgency or fear is, if any. We will only know that in hindsight.

Regardless, it is easy even now to see the dangers of thinking in terms of AI-first. That mindset, and the strategy it implies, are rooted in the narrative that technology shapes society and will help us fix it. It promotes the notion that complex social problems can be 'fixed' by merely adjusting certain parameters. Such approaches obscure deeper-running unfairness in society, with potentially devastating consequences. There are always multiple ways of defining, describing, and approaching social problems, and the fact that they are urgent does not mean that new, comprehensive, and seemingly efficient technological solutions automatically become legitimate. Not everything that can be rationalized should be; that is not always the human condition. We move down a very dangerous path if we see technology as something that can 'fix the bugs of humanity'. When we start from the idea that AI is the solution, we do not sufficiently investigate the real problem. Without understanding the problem, any AI solution is highly unlikely to succeed and may well cause more harm than good.

The Real Costs of Tech Imperatives 

The importance of asking Question Zero becomes even more evident in concrete real-world scenarios, such as the use of AI in the public sector. If we take health care as an example, the Commission's report can be read as a justification for the current industrial policy on AI: public-sector investments that reduce the risks private actors face in developing the technology, in order to strengthen Sweden's global competitiveness in various AI applications. That is not to say that there won't be significant benefits for specific areas of public health if these technologies come to fruition. However, debating if, how, and in what areas public funds should be invested in AI is a democratic imperative.

Public participation, starting with whether we need AI at all in each given context or scenario, also serves to normalise transparency in the use of AI systems and to mainstream certain values into the technology. The Försäkringskassan debacle is a recent example of why this kind of critical thinking is so desperately needed. Försäkringskassan has been using an AI system to detect fraud in some child benefits. Investigative journalists found that the system discriminates against women, people with an immigrant background, and those with lower educational attainment. This has had real-life impacts on people, whose benefits have been delayed following unwarranted investigations into their claims for assistance. While this in and of itself is unacceptable, at a larger scale Försäkringskassan's response, or lack thereof, has been deeply worrying. When confronted with years of requests for transparency in its use of the system, the agency refused to provide information on the costs, purpose, design, or outcomes of the AI system. This is very troubling and will likely undermine public trust in Försäkringskassan. Furthermore, its vehement defence of its AI solution leaves no space to discuss whether the technology was the best solution to the problem to begin with, and how public funds have been allocated in pursuing that solutionist strategy.

Reframing the Roadmap: Centering Critical Thinking in AI Strategy

To be clear, we are not rejecting the report's findings and recommendations as a whole. As stated initially, we appreciate its relatively balanced view on risks and its strategic thinking around research and education. Still, all of this is presented in a way that dodges and obscures Question Zero as a crucial first step. No matter how big the hype, and no matter how strong the pressure from global capitalist corporations, politics must always first fully define the problems to be addressed, and only then consider what the adequate tool set is. Only sometimes, not always, is AI the answer. An excellent way of summarizing all of this can be borrowed from an unlikely source, namely avant-garde artist Laurie Anderson's meditation teacher: "If you think technology will solve your problems, you don't understand technology and you don't understand your problems."

Authors

Simon Lindgren, Jason Tucker, Virginia Dignum
