
Reflections on “Ethics in AI-driven educational practices”

Published: June 7, 2021

I’m Clàudia Figueras, a doctoral student at Stockholm University, and I’m affiliated with WASP-HS. My supervisors are Harko Verhagen and Tessy Cerratto Pargman. I’m interested in studying how AI systems affect people, especially those who are already disadvantaged in society.

My research focuses on the societal impacts of AI systems rather than the intentions behind them. Specifically, I’m interested in AI applied in the public sector, since all citizens are potentially affected by systems funded by their own taxes. In what follows, I’ll reflect on the roundtable “What do we talk about when we talk about ethics in AI-driven educational practices?”, which I attended last week.

The roundtable brought together diverse stakeholders from the private and public sectors to discuss how AI might impact higher education practices. Higher education is of particular interest nowadays: during the current pandemic, students and teachers have had to change the way they interact radically, and technology has played a crucial role in those interactions. Even though Sweden is, on the whole, very well prepared to teach online, issues arose during this period, and we should reflect on what went wrong and what went well in order to learn from it. Some participants voiced the need to specify which AI systems we are talking about when we talk about AI ethics, since every AI system presents its own ethical challenges. Given the manifold definitions of AI, I find it valuable to clarify what we mean by AI-driven practices when discussing them in relation to ethics.

We sometimes talk about ethics when we should be talking about power. Along those lines, Ria Kalluri’s statement resonated deeply with me: “Don’t ask if artificial intelligence is good or fair, ask how it shifts power” (Kalluri, 2020). Power imbalances are found in almost any human interaction, and AI systems designed to benefit the people who already hold power will only perpetuate those imbalances or make them worse. We must find ways to amplify the voices of those who have historically been most disadvantaged and enable them to contest or influence the AI systems that affect them. Higher education is a key sector in which to engage with issues of power: there is an implicit power imbalance between educational institutions and students.

It is also striking that, in discussions about ethics and AI, it is generally taken for granted that AI should be designed and implemented; the possibility of not designing AI in the first place is rarely even considered. This possibility is discussed in the article “When the implication is not to design (technology)” (Baumer and Silberman, 2011). In this vein, my question is: what is the problem that AI is intended to solve in the public sector, and, most importantly, AI for whom and by whom?

References

Baumer, E. P. S. and Silberman, M. S. (2011) ‘When the implication is not to design (technology)’, in Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (CHI ’11). Vancouver, BC, Canada: ACM Press, p. 2271. DOI: 10.1145/1978942.1979275.

Kalluri, P. (2020) ‘Don’t ask if artificial intelligence is good or fair, ask how it shifts power’, Nature, 583(7815), p. 169. DOI: 10.1038/d41586-020-02003-2.
