
Mind the Gap: AI as Equalizer or Oppressor in Healthcare?

Published: December 7, 2022

Jason Tucker, Assistant Professor in the AI and the Everyday Political Economy of Global Health project at the Department of Global Political Studies, Malmö University. This blog post is adapted from a speech delivered during the ‘AI for Humanity and Society Conference 2022’.

Is AI an equalizer or oppressor in healthcare? This was the rather mammoth, though deeply pertinent, question that I was asked to discuss during the WASP-HS AI for Humanity and Society Conference 2022. To unpack it, I think we need to focus on two areas. The first is to examine where this question is being asked in AI health. By whom? And what are the common themes, disagreements, nuances, and sites of friction? This line of inquiry is very enlightening, and at times disturbing; Abeba Birhane’s fantastic keynote at the conference, on racial bias in datasets, was a harrowing example of the latter.

But I want to focus on where this question is not being asked. Where are borders being drawn around spaces for participation in decision making on AI health? Where are we failing to identify decisions on AI health that impact us, our health, lives, and societies? I believe considerable vigilance is required here, because the narratives or imaginaries of AI health matter, a lot. They not only set us on certain paths, but also have a bordering function, limiting the space for participation and political scrutiny by crowding out alternative pathways.

A concrete example of this was the widespread adoption of AI health during the COVID-19 pandemic. The initial hype around AI and its potential to tackle the pandemic soon evaporated; its clinical applications were oversold and found wanting. However, the crisis of the pandemic allowed for a “state of exception” in which AI was deployed across a broad range of applications to manage the public (health) and healthcare systems. These applications, introduced with little oversight, have in many cases remained in place even after states moved beyond the crisis narrative and returned to a state of “normality”.

This is not to say that AI health offers no significant benefits. AI applications did help mitigate some negative aspects of the pandemic, and have proved beneficial in other areas of healthcare. To put my cards on the table, I am very much a cautious optimist about AI health. Yet I am deeply concerned that the space for participation and political scrutiny over the type and purpose of AI health has been narrowed considerably during this “state of exception”.

One could be tempted to pass this off as a knee-jerk reaction. It was a crisis after all; decisions had to be made fast, and there wasn’t time for much, or any, deliberation. However, there is a longer-term trend in which the spaces to discuss AI health have been shrinking. For example, the Nordic national artificial intelligence strategies, a consolidated point of public policy decision making on AI, share similarly narrow socio-technical imaginaries of AI health. Ever-improving efficiency, reduced costs, more individualised treatment, greater privatisation: these are all entrenched and common narratives in these policy documents. These narratives, which are shared by key private sector actors, are largely supported by both sides of the political spectrum in the Nordics. As for discussions around other future imaginaries of AI health, these spaces are closed or shrinking fast. This means that raising counter-hegemonic narratives in these political spaces could become increasingly difficult.

What about a future where all innovation in AI health is led by the public sector? What if reducing inequality in health were the driving force in the adoption of AI health? How about a future with no AI applications in health at all? The current bordering function of the narratives of AI health makes these ideas seem better suited to science fiction than to the floor of parliaments. But it is imperative that we keep spaces for participation and a range of narratives alive if we are to scrutinise the decision making behind AI health. AI is neither good nor bad, but a reflection of culture, society, and power. Vigilance is required, as health is not only central to how we as individuals interact with society, but healthcare is also a fundamental aspect of how democracies are organised.

To work towards AI being more of an equalizing than an oppressive force, spaces for dialogue and political scrutiny about AI health need to be identified and opened, and access to them facilitated for meaningful participation. This is by no means an easy feat, but failure to do so may see the gap between the public interest and the dominant narratives of AI health widen.
