
AI and Trust—Observations from the Swedish Welfare State

Published: December 3, 2024

As a WASP-HS PhD student, it was an easy decision for me to join the study visit to Stockholm, Sweden, on 14–15 November this year. This was my first trip to Sweden’s capital city, and although I had limited time for sightseeing, being immersed in Stockholm’s atmosphere, with its close connection to Sweden’s government, was a fitting backdrop for my research. My research focuses on how the use of artificial intelligence (AI) is affecting and shaping Sweden’s public administration in terms of vulnerability and trust, including the relationship between government and citizens.

AI Disrupts Trust in the Government

Sweden has a strong tradition of trust between its government and citizens, and this is reflected in its public administration. Yet the introduction of AI systems complicates this relationship. These tools are expected to enhance efficiency and consistency, but they often lack transparency and fairness, and they risk depersonalizing processes, which in turn can create new vulnerabilities.

Live Cases of AI Tools in Public Administration

Throughout the study visit we engaged with Swedish public administration agencies that are using, or are planning to use, AI systems. These included Skatteverket (the Swedish Tax Agency), Ekonomistyrningsverket (Swedish National Financial Management Authority), Riksrevisionen (Swedish National Audit Office), and Utbetalningsmyndigheten (National Payments Authority).

The visit to Utbetalningsmyndigheten was particularly relevant to my work. As a newly established government authority, its primary task is to oversee welfare and identify fraudulent activity. Its formation and operational approach provided insight into how public administration must change and adapt when integrating AI systems.   

For example, we discussed how AI supplements the work of agencies like Försäkringskassan, which historically has been responsible for evaluating social insurance claims, determining appropriate payments, adjusting them when necessary, and recovering overpayments in cases of error. Utbetalningsmyndigheten now conducts AI-assisted evaluations to identify payment recipients or transactions at high risk of being erroneous or fraudulent. However, Utbetalningsmyndigheten does not have the authority to stop or recover payments itself. Instead, it forwards its findings to agencies like Försäkringskassan, which must then act to recover overpayments and correct errors.

Utbetalningsmyndigheten acknowledged that this process adds to the workload of agencies like Försäkringskassan and may make Utbetalningsmyndigheten unpopular, not only with the public but also with the other agencies that receive the error reports. They emphasized their awareness of the need to avoid overloading these agencies with excessive reports, and stated that their goal is to ensure that flagged payments are clearly recoverable or demonstrably made in error.
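To make this division of responsibility concrete, here is a minimal sketch of such a flag-and-refer pattern. It is purely illustrative, not a description of Utbetalningsmyndigheten’s actual systems: the names (Payment, score_payment, REFERRAL_THRESHOLD) and the scoring rule are my own assumptions.

```python
from dataclasses import dataclass

# Purely illustrative sketch; all names, scores, and thresholds are
# hypothetical, not Utbetalningsmyndigheten's actual systems.

@dataclass
class Payment:
    payment_id: str
    recipient_id: str
    amount_sek: float
    risk_score: float = 0.0  # filled in by the (hypothetical) risk model

def score_payment(payment: Payment) -> float:
    """Stand-in for an AI risk model: returns a probability-like score
    that the payment is erroneous or fraudulent."""
    # A real system would use a trained model; this trivial rule exists
    # only so the example runs.
    return 0.9 if payment.amount_sek > 100_000 else 0.1

# A deliberately high threshold: only clearly problematic payments are
# referred, so the receiving agency is not flooded with weak leads.
REFERRAL_THRESHOLD = 0.85

def refer_suspicious(payments: list[Payment]) -> list[Payment]:
    """Flag high-risk payments for referral to the responsible agency
    (e.g. Försäkringskassan). The flagging authority only refers; it
    never stops or recovers payments itself."""
    referrals = []
    for p in payments:
        p.risk_score = score_payment(p)
        if p.risk_score >= REFERRAL_THRESHOLD:
            referrals.append(p)
    return referrals

if __name__ == "__main__":
    batch = [
        Payment("P1", "R1", 12_000.0),
        Payment("P2", "R2", 250_000.0),
    ]
    for p in refer_suspicious(batch):
        print(f"Refer {p.payment_id} (score {p.risk_score:.2f}) to Försäkringskassan")
```

The conservative threshold in this sketch mirrors the stated goal above: forwarding only payments that are clearly recoverable or demonstrably made in error, rather than every weak signal the model produces.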

This example illustrates how the integration of AI systems forces adaptations in public administration structures and operations, often creating new complexities. It also highlights how such adaptations can complicate interagency trust, as roles and responsibilities are redistributed. The new structure affects trust between the government and citizens as well: because Utbetalningsmyndigheten cannot stop payments or provide detailed information, individuals with delayed or halted payments can be left stuck in a bureaucratic loop, redirected between agencies without clear answers.

“A One-Size-Fits-All Approach Makes AI Difficult to Regulate”

Throughout the study visit, I was once again reminded of the importance of viewing issues related to AI from different perspectives. Conversations with the more technically minded PhD students in our group made it even clearer to me that AI systems are often narrowly tailored to one specific context. Yet the same systems are frequently reused in other contexts, each adding new complexities that were not considered in the original design. A one-size-fits-all approach makes AI difficult to regulate. AI development for public administration needs to be grounded in, and designed for, the context in which the AI system is to function.

AI systems tailored to their context are perhaps particularly important in the case of public administration. Sweden’s tillitsmodell (trust model) highlights the importance of trust in leadership. Tillitsmodellen fosters clarity and mutual commitment while maintaining flexibility in organizational structures. By grounding public administration in trust, it ensures adaptability and responsiveness to challenges without stifling innovation and progress.

Complexity of Trusting Systems that Lack Trust

A major takeaway from this study visit is this: both public administrations and AI systems are expected to follow protocols and to be transparent, fair, and accountable. However, humans bring personal traits like honesty and sincerity, which AI lacks. AI can only simulate transparency and explainability, making it harder to build trust in these systems. I find it interesting to think about how, or if, we can trust AI when it does not have the personal qualities that usually help build trust.

Author

Timothy York
PhD student at the Department of Sociology of Law, Lund University
