Will AI be an Equalizer or Oppressor?

Published: December 19, 2022

Ericka Johnson, Professor at the Department of Thematic Studies (TEMA), Linköping University, shares her presentation at the WASP-HS conference AI for Humanity and Society, 2022, in written form.

This is a concern many of us have, and working towards ‘equalizer’ is a goal many of us share. However, I have a problem with the framing of the question: it engages technical determinism, the idea that a technology – an artefact, invention or software – will on its own shape and redirect the path of society. The classic example of technical determinism is Lynn White, Jr.’s book Medieval Technology and Social Change (1962)[i], which posited that the stirrup produced European feudalism: it made mounted combat possible, but the armor, weapons and horses this required were so expensive that knights had to be supported by grants of land, an arrangement that hardened into a feudal social order. This thesis was robustly critiqued, even at the time.[ii] But the argument against technical determinism goes much deeper than debating the rise (and fall) of a particular social structure in Europe’s history. It hinges on the fact that technical determinism encourages us to believe that a technology (like AI) will be a social actor (an equalizer or an oppressor) because of an inherent quality of and in the technology.

Such a simplified belief misses the complexity of technology’s entanglements with social values, norms, and structures, and with the context and people engaged in contingent practices of technology’s use and misuse. Historical and sociological analyses of technology have long tried to find more nuanced ways to speak about the way technology is entangled in our social worlds. The theoretical tools (terms and concepts) they engage can help us understand and even predict the outcomes of AI technology’s integration into our life-worlds, as well.

Some of these terms are already quite old. That doesn’t mean they are no longer useful.

Let me start with delegation[iii], a term used to describe how we delegate work to a technology so that we don’t have to do the work ourselves. For example, every time we use a door on hinges, we delegate to the hinges the work of knocking a hole in the wall and patching it up again. A simple example of this from the world of AI is the way new image recognition software has been trained to recognize cancer tumors in scans: part of the work of radiologists has been delegated to an AI.

Translation[iv] is another term that can be useful when thinking about the entanglement of AI and society. It refers to the way technology can get a job done, but in doing so sometimes changes (translates) the motivations of users. For example, a speed bump can translate the desire to slow down automobile traffic into a fear of damaging one’s shock absorbers. The result is the same (slower traffic) but the underlying motivation (because slow traffic is good) has been translated into another concern (not damaging the car). Again, we can see this happening with AI technologies. A friend of mine has a self-driving car. One important condition of the self-driving technology is the presence of an alert ‘driver’ who is not driving. Thus, the car asks my friend to touch the steering wheel every few minutes, just so the car knows she is there, awake, and paying attention. The car has translated the value of ‘ensuring an alert driver’ into a different action (making the body in the driver’s seat touch the steering wheel). It probably has the same effect, just as the speed bump slows down traffic, but the underlying impetus has been translated into something else.

Another tool to think with when predicting our AI-entangled futures is the concept of technical obduracy: the politics of technological artifacts[v] are long-lasting, and this is equally relevant for digital artifacts. A classic example is the way our transportation infrastructure benefits some places (urban areas) and technologies (cars and trucks) over others. In conversations about AI, we’ve already seen that AIs learning from historical data also learn outdated values and norms we have tried to change. Because of technical obduracy, a CV evaluation AI will, for example, remember that earlier hiring practices discriminated against women and suggest hiring only men.[vi] Or new facial recognition technologies will be trained on limited datasets and therefore not be able to recognize faces with non-white skin color as easily as white faces.[vii] These examples would lead us to believe that AI will be an ‘oppressor’, if we return to the original question. But such ‘oppression’ is not because of the AI itself; rather, it comes from how the AI is entangled with our histories, our ways of knowing the world, and the questions we think the AI should be helping us answer.
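To make that point concrete in code: below is a minimal, purely hypothetical sketch in Python (using synthetic data and the scikit-learn library; none of the names or numbers come from the studies cited above) of how a CV-screening model trained on historically biased hiring decisions carries that bias forward.

```python
# A purely hypothetical, minimal sketch: synthetic data only, not taken from
# the studies cited above. It illustrates how a model trained on historically
# biased hiring decisions reproduces that bias for new candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features per candidate: years of experience, and gender (0 = man, 1 = woman).
experience = rng.uniform(0, 10, n)
gender = rng.integers(0, 2, n)

# Simulated historical decisions: experience helps, but past recruiters also
# systematically penalised women (the -2.0 term encodes that old norm).
logits = 0.8 * experience - 2.0 * gender - 3.0
hired = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logits))).astype(int)

# The model dutifully learns the old pattern from the old data.
model = LogisticRegression().fit(np.column_stack([experience, gender]), hired)

# Two equally experienced candidates who differ only in gender:
candidates = np.array([[6.0, 0.0], [6.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # the woman receives a lower score
```

The point of the sketch is simply that the discrimination lives in the historical data the model inherits, not in any intention of the model itself.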

One last theoretical point, though. All of the above assumes that technology is stable. However, another lesson we learn from STS (science and technology studies) and new materialities is that technology can change ontologically when it changes context – what a technology ‘is’ is determined by context, use, and relational interactions with other people and things.[viii] So while it is useful to talk about technological affordances in nuanced ways, we should do this with an eye to the contexts, uses and users entangled with the technology.

I started this ramble with a poke at technical determinism. I’d like to end it with a warning against technical exceptionalism. AI is a part of the mess out there, the mess in here, our nows, tomorrows, and yesterdays. And just like everything else, there are many people and organizations and institutions (formal and informal) which will use it to advance their own goals. We have to be aware of that, and of them, and stop focusing on the AI as if it were an ontologically discrete object with agency. Rather, I suggest recognizing the relational agency, for it and for us, that is produced in particular, contingent uses[ix], and – like most of the work WASP-HS researchers are doing – paying attention to how AI is embedded, entangled, engaged and enrolled in the power structures we, ourselves, are embedded, entangled, engaged and enrolled in.

So, rather than asking if AI will be an oppressor or an equalizer, I suggest we ask how we propose to continue to use AI, with an emphasis on the ‘we’ and the ‘use’, i.e. that we place the responsibility for our futures on us, the humans, and focus our analysis on the ways we engage technology. In the end, we are the ones who will determine whether the ways we use AI increase social equality or oppression.

References

[i] White, Jr., L. (1962) Medieval Technology and Social Change. Oxford University Press.

[ii] Sawyer, P.H. & R.H. Hilton (1963) Technical Determinism: The Stirrup and the Plough. Review of Medieval Technology and Social Change by Lynn White. Past & Present, No. 24 (April 1963), pp. 90-100. (https://www.jstor.org/stable/649846)

[iii] Delegation is discussed by Latour in Johnson, J./Latour, B. (1988) Mixing Humans and Nonhumans Together: The Sociology of a Door-Closer. Social Problems 35(3), Special Issue: The Sociology of Science and Technology, pp. 298-310. (https://www.jstor.org/stable/800624)

[iv] Ibid.

[v] Winner, L. (1980) Do Artifacts Have Politics? Daedalus 109(1):121-136.

[vi] Wachter-Boettcher, S. (2017) Technically Wrong. Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. W. W. Norton and Company.

[vii] Buolamwini, J. & T. Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81:1-15.

[viii] Mol, A. (2002) The Body Multiple. Duke University Press.

[ix] Suchman, L. (2007) Human-Machine Reconfigurations. Cambridge University Press.
