Making Fairness Concrete for AI in Education

Published: October 3, 2022

Post-doctoral researcher Marie Utterberg Modén and Associate Professor Marisa Ponti of the University of Gothenburg argue for making fairness “concrete” and examine the implications for the use of AI in Swedish education.

What is “fairness” anyway?
There are many different definitions of fairness, and what is considered fair has varied across time and place, making the concept challenging to define. Fairness is often described in terms that are related but not equivalent, such as equity, justice, and absence of bias. Each of these terms is equally difficult to define, even though we readily accept them. As a result of this ambiguity, our own value systems determine what we label fair, and deciding what is fair can be challenging when we do not share the same values. For this reason, we argue for a contextualized view of fairness by situating it in the context of Swedish education.

Borenstein and Howard (2021) argue that vague concepts of fairness and bias, separated from the context in which they apply, or detached from the understanding that people are more than just data, do not help resolve the ethical issues emerging from the use of AI in education. As they put it, addressing complex ethical challenges cannot be done in a vacuum but must be done by “starting with the root of the problem (i.e., people)” (p. 62). This observation resonates with the view that fairness is not a property of the algorithm but a property of the social system.

However, designers and researchers of fair machine learning (ML) tend to abstract away the socio-technical contexts in which ML systems will be used (Selbst et al., 2019). By abstracting away this context, fair-ML designers fail to capture the broader information necessary to create fairer outcomes or to understand fairness as a concept (Selbst et al., 2019).

An algorithm is therefore designed as a statistical model of reality to predict potential outcomes, but even a complex model is inevitably a simplification. In educational contexts, teaching and learning activities are characterized by a complex web of relationships and interactions (Selwyn, 2017). It is crucial to consider how groups are represented and which individuals belong to which groups, and not to treat diverse groups as a single entity. Baker and Hawn (2021) found that unfairness results when algorithmic models are applied and group differences are ignored or not accounted for. For example, if an AIEd system is designed to identify students at risk of falling behind and to alert teachers, how that group is defined becomes a critical issue.
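To make this risk concrete, here is a minimal Python sketch (our illustration, not taken from Baker and Hawn) of how pooling across groups can hide unfairness: an at-risk classifier with a seemingly moderate overall error rate misses far more at-risk students in one group than in another. The group labels and data are invented for the example.

```python
from collections import defaultdict

def false_negative_rates(groups, y_true, y_pred):
    """Per-group false negative rate: the share of truly at-risk students the model missed."""
    missed = defaultdict(int)   # truly at risk but not flagged
    at_risk = defaultdict(int)  # truly at risk
    for g, t, p in zip(groups, y_true, y_pred):
        if t == 1:              # student is truly at risk
            at_risk[g] += 1
            if p == 0:          # ...but the model did not flag them
                missed[g] += 1
    return {g: missed[g] / at_risk[g] for g in at_risk}

# Toy data: 1 = at risk, 0 = not at risk.
groups = ["urban"] * 8 + ["rural"] * 8
y_true = [1, 1, 1, 1, 0, 0, 0, 0] * 2
y_pred = [1, 1, 1, 0, 0, 0, 0, 0] + [1, 0, 0, 0, 0, 0, 0, 0]

print(false_negative_rates(groups, y_true, y_pred))
# {'urban': 0.25, 'rural': 0.75} -- the pooled rate, 4/8 = 0.5, hides the gap.
```

Whether “urban” and “rural”, or some other grouping, is the right lens is precisely the definitional question raised above.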

Against this backdrop, we began to analyze conceptualizations of fairness in Swedish education and how they are expressed in a sample of policy documents and reports by EdTech companies. In our research, we seek to answer the following questions:

  • How is the notion of fairness conceptualized and contextualized in Swedish education by policymakers, authorities, and other relevant stakeholders?
  • What potential benefits are described regarding the use of AI in education, and for whom?
  • When it comes to AI, what risks are perceived for fairness in education?

We used rhetorical analysis, a method useful in critical policy analysis for understanding the role policies play in perpetuating inequality (Winton, 2013), to explore the rhetorical situations of relevant social groups (RSGs) (Pinch and Bijker, 1984). We then examined the rhetoric of each social group’s interpretation of fairness, as well as their interpretations of artificial intelligence.

Efficiency as the overall value of AI
We now share some initial results of our ongoing work in relation to the first two questions. We identified three RSGs, each of whose members share similar values regarding AI.

The first group includes the Government, the Agency for Digital Government, and the Swedish Association of Local Authorities and Regions. For this group, the overall value of AI is related to efficiency and is interpreted in economic terms. The second group includes EdTech companies, who interpret the overall value of AI as pedagogical. The third group includes government agencies, which value AI as a way to sustain and facilitate access to public welfare for certain groups of individuals. All three RSGs value the individual right to equal access to high-quality public services.

Framed in the context of the Swedish welfare system, education is considered part of the public sector. Fairness is seen as connected to equality, in terms of providing similar opportunities to all (fairer access to, and equal quality of, education). AI is not mentioned in national digital strategies for education, and specific educational needs are thus not addressed. Hence, the conditions under which the use of AI can lead to unfairness are not understood.

The main pedagogical value of AI is perceived as residing in the opportunity to personalize teaching to meet the needs of students. Additionally, AI is thought to free up teachers’ time from certain classroom tasks, allowing them to spend more time with their students. However, schools need to allocate resources to train teachers in AI. A growing disparity in student achievement between independent and municipal schools, and between urban and rural areas, poses a problem for fairness.

Currently, schools with high-performing students are better able to attract competent teachers, and the most competent teachers do not work in schools catering for disadvantaged students in low socioeconomic status (SES) communities in Sweden. We argue that students in low-performing schools run a high risk of falling behind, that these learners will be less able to thrive in an AI-saturated future, and that their teachers will not be trained to use AI competently.

Some provisional conclusions
There seems to be a tendency for the RSGs to follow the “AI hype” without questioning its use. Algorithms are seen as improving efficiency, reducing costs, and enhancing the personalization of teaching. We contend that the rhetorical arguments rest on a narrow notion of fairness, articulated as formal access to education and detached from ideas of social justice. Furthermore, the ways AI can contribute to students’ learning are not discussed at all. In spite of the high stakes and potentially high impact of using AI in education, considerations of importance and priority are absent, as are reflections on the fundamental social, racial, and economic issues raised by this technology. This lack of consideration of the specific implications of AI in education may be connected to the way AI is developed, which lacks “common aims and fiduciary duties, professional history and norms, proven methods to translate principles into practice, robust legal and professional accountability mechanisms” (Mittelstadt, 2019, p. 1). Focusing attention on some of these aspects is what the Missing Teacher project will work on.

References
Baker, R.S. and Hawn, A. (2021). Algorithmic bias in education. International Journal of Artificial Intelligence in Education.

Borenstein, J. and Howard, A. (2021). Emerging challenges in AI and the need for AI ethics education. AI & Ethics, 1, 61–65. https://doi.org/10.1007/s43681-020-00002-7

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, 501–507.

Pinch, T.J., and Bijker, W.E. (1984). The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science, 14(3), 399–441.

Selbst, A.D., Boyd D., Friedler, S.A., Venkatasubramanian, S., and Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. FAT* ’19: Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59–68). https://doi.org/10.1145/3287560.3287598

Selwyn, N. (2017). Education and technology: Key issues and debates. Bloomsbury Academic, London.

Winton, S. (2013). Rhetorical analysis in critical policy research. International Journal of Qualitative Studies in Education, 26(2), 158–177.
