
Researchers in the Social Sciences and Humanities and the General Discourse About AI

Published: January 13, 2023

Teresa Cerratto Pargman, Professor in Human-Computer Interaction at Stockholm University and Principal Investigator of the WASP-HS project Ethical and Legal Challenges in Relationship to AI-Driven Practices in Higher Education, shares her speech from the WASP-HS conference AI for Humanity and Society 2022 in written form.

Oversimplification in the general discourse about AI leads to misconceptions?

The invitation to be a panelist at the WASP-HS annual conference to discuss the provocation “Oversimplification in the general discourse about AI leads to misconceptions?” could not have come at a better time. Some months ago, our research group started to discuss the design of a critical discourse analysis study on emerging AI-driven practices in higher education. From what I have learned through these discussions and readings, my answer is yes: oversimplification in the general discourse about AI in education leads to misconceptions.

In what follows, I elaborate on this answer with the help of two related questions that help unpack my argument about the need to look at how researchers in the social sciences and humanities engage with emerging technologies[1]. Specifically, these questions are: i) How do researchers in the social sciences and humanities contribute to the general discourse about AI? ii) How do these researchers study AI?

The aim here is to highlight the need to contribute narratives of AI in society that qualitatively situate AI socially and materially in a specific time and space. Such an aim is instrumental to better understanding the role of the social sciences and humanities in generating AI discourse, and to reflecting on the research methods we need for studying AI technologies in the making.

How do researchers in the social sciences and humanities contribute to the general discourse about AI?

The question emerges from an understanding that people construct particular uses and purposes for technology through discourse (Haas, 1996). If discourse matters, how do we, researchers, contribute to the general discourse about AI? And more specifically, how do we speak about AI?

For example, in education, “AI” has become a catch-all term reflecting a generic and rather vague understanding of the specific material and social qualities of the infrastructures that emerging AI technologies are configuring. Current debates generate discussion without specifying the particular technology being referred to, so discourses about AI most often mobilize opinions and emotions in society (Akenine & Stier, 2019). Discussing superficially technologies that can profoundly impact us as a society contributes to misconceptions and much confusion.

Bearman et al. (2022), in a recent critical discourse analysis based on a literature review of academic articles on AI in higher education, underscore this generic tone in discussions of AI in education. The authors call for a more focused and nuanced discussion of the social aspects of learning and teaching with AI technologies, instead of speaking broadly and systematically of student performance and teaching efficiency.

Strictly speaking, AI is not a technology per se but a whole research field within computer science, consisting of a solid and resilient scientific community with a rich historical trajectory dating from the 1950s and driven by the ambition of emulating human thinking. As such, discussing AI as if it were “a” technology, without referring to its “historicity, materiality and existing imagined futures” (Pink, 2022, p. 3), does not help us engage critically with the current oversimplification in the general discourse about AI.

Interestingly, the systems and applications implicitly referred to under the umbrella term of AI are seldom specified in discourses of AI in education. This is surprising, as we know that the computational techniques, algorithms, sensors, and data involved in “AI” for education configure practices, and the discourses about them, differently and profoundly. While technologies such as adaptive learning, predictive modeling, intelligent tutoring systems, automated decision-making, facial recognition embedded in online invigilation systems, or ChatGPT may use similar techniques, they are fundamentally different when deployed in practice due to their specific in-use qualities and uneven impact on people. Their distinct qualities may encourage some practices while discouraging others, and their impact may privilege some people while harming others.

However, discussions in education seldom engage with the specificities of AI technologies and with the fact that they are emerging socio-technical systems (i.e., systems that emerge from specific social and professional environments). So, instead of looking at the technology at hand and engaging with it critically through the practices it shapes, public “dominant discourses” (i.e., those of STEM [science, technology, engineering, and mathematics] researchers and industry) often speak of AI as “a thing”, “a product that will take up”, or something people will naturally adapt to (Pink, 2022, p. 1). Not engaging with the specific social and material aspects of AI technologies thus contributes to broad, untethered discourses in education in which it is difficult to situate the particular issues and arguments under debate.

We find a tangible example in discourses about AI that focus on challenges and opportunities without situating the specificities of the technology, the practices, or the groups of people in focus. How can the challenges and opportunities of AI in education be identified and discussed if there is no reference to how the technology at hand works in practice, or to who the groups using AI are? The vast and hypothetical character of the general discourse on AI seems ill-suited to informing the public about the potential implications of AI for diverse groups and multifaceted activities in the education sector and beyond.

This leads us to argue that research in the social sciences and humanities has a pivotal role to play in general discourses about AI, which often reproduce the social and technical worlds inhabited by enthusiastic computer scientists, engineers, and anyone else convinced that AI is a ready-made solution to our current societal problems. In this sense, we need to deconstruct the oversimplification in general discourses of AI in education and suggest novel, creative research methods that enable us to engage with the ongoing emergence of AI as a socio-technical and cultural phenomenon. This can constitute a strategy for generating critical research discourse on AI situated in specific technological relationships and socio-cultural and societal practices. Thus, we need to investigate the research methods chosen by social scientists and humanists and how these methods contribute to amplifying and/or criticizing prevailing dominant narratives in education.

How do researchers in the social sciences and humanities study AI in education?

There are quite a few international reports, ethical guidelines, toolkits, and curricula, along with numerous peer-reviewed articles, about the risks and opportunities of AI in education, as noted by Holmes et al. (2022) in their recent report “Artificial intelligence and education: A critical view through the lens of human rights, democracy and the rule of law”. While this body of work is essential, we, researchers in the WASP-HS project Ethical and Legal Challenges in Relationship to AI-Driven Practices in Higher Education, ask: what research methods are used to account for the risks and opportunities of AI in education?

The question about methods, inspired by Pink (2022), matters because researchers contribute (or not) to the general dominant discourse on AI through the findings they obtain and the methods they choose to use. More importantly, such a question entices us to think about what it means to research emerging phenomena and why we need interdisciplinarity and multi-stakeholder participation in examining the future. Moreover, this question invites us to reflect on the place qualitative social science methods have “in the spaces which engineering, economics, design, and business-oriented disciplines so effortlessly already inhabit” (Pink, 2022, p. 2).

Questions about research methods are indeed non-negligible, as it is through them, their features, and their epistemological and ontological standpoints that we, researchers in the social sciences and humanities, contribute to shaping socio-technical imaginaries in education (Rahm, 2019). In our recent work (Cerratto Pargman, Lindberg, & Buch, 2022), we discussed methodological issues in automated decision-making (ADM) in teachers’ assessment and student grading practices. We distinguish between representational and futures-oriented research methods.

Representational approaches and methods are oriented towards recounting teachers’ and/or students’ past experiences as accurately as possible. They “collect data after the fact; data are extracted from the users’ rational representation of emerging technologies and are analyzed to be primarily shared in academic publications” (Cerratto Pargman, Lindberg, & Buch, 2022). As such, representational approaches and methods present research challenges for studying emerging, nascent technologies and thereby for “becoming conversant with futures-in-the-making” (Light, 2021, p. 1).

Futures-oriented social science approaches and methods, or simply futures-oriented methods, “collect data about future visions (i.e., facts from the past blended with imagination); data are extracted from the users’ speculative thinking to drive societal change” (Cerratto Pargman et al., 2022). These methods “assume a circular understanding of what knowing is, based on the idea that knowing (human cognition) is embodied action (Varela, 1989), which refers to how knowledge is enacted in the encounter between the researcher and the reality or world in which the researcher takes part, judges its meaningfulness, and contributes to change it”. […] “Futures-oriented methods invite researchers to playfully experiment with possible futures in material and experiential (i.e., sensorial, emotional) ways while pushing us to choose ‘conditions to work towards’ and ‘factors we might have to contend with’ (DiSalvo et al., 2016, pp. 150–151)”. Moreover, positioning us in the future and “researching in possible futures” (Pink, 2022, p. 4) moves us to address concerns about where we are going, what we value, and what we are not ready to lose and why.

The distinction between the two methodological approaches helps us think about the need to create research methods that enable us to generate discourses about how future educational practices and their limits are configured in the present, and about how futures in education might be experienced if we follow the current pathways in AI research (cf. Pink, 2022). It is also an invitation to generate alternative narratives other than those proclaiming the use of AI for optimizing student performance, teachers’ work, and institutional retention rates. On this note, the UNESCO report “Reimagining our futures together: A new social contract for education” (2021) is a compelling read that inspires other possible ways to think about our educational futures.

The interest in generating alternative narratives on the future of education brings us to reflect on what it takes to learn to use AI in practice. Curiously, there is not much discourse about how people negotiate tasks, agency, and responsibility with AI technologies in their workplaces or educational environments. We know little about how people appropriate specific AI systems in their activities, the tensions they experience in their practices, and how they resolve them. Since we do not consider AI technologies a thing or a ready-made technology that will automatically be put into use without entailing any effort, friction, or usage development, it is important to pay attention to how people at work or in education learn to use AI and negotiate its place in daily practices. Without such an understanding, there is a considerable risk that research conceptualizes AI technologies as a matter of faith instead of a “matter of concern” (Latour, 2004).

To summarize, there is currently a need to contribute to discourses on AI in education that specify the types of technologies we refer to and that situate the study of AI in time and space, attending to its social and material facets. The time is right to inform the public about how people learn to use AI in practice, which groups of people are positively and negatively affected, and which practices disappear while others expand. In seeking knowledge about AI usage in these directions, we, researchers, need to be mindful of the methods we choose and the discourses we contribute to, so that we can open fertile terrain for a well-founded debate on the place of AI in the future of education.

References

Akenine, D., & Stier, J. (2019). Människor och AI: En bok om artificiell intelligens och oss själva. Books on Demand.

Bearman, M., Ryan, J., & Ajjawi, R. (2022). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 1-17.

Cerratto Pargman, T., Lindberg, Y., & Buch, A. (2022). Automation is coming! Exploring future(s)-oriented methods in education. Postdigital Science and Education, 1-24.

DiSalvo, C., Jenkins, T., & Lodato, T. (2016). Designing Speculative Civics. In C. Lampe, D. Morris, & J. P. Hourcade (Eds.), Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16) (pp. 4979–4990). New York: Association for Computing Machinery. 

Haas, C. (1996). Writing technology: Studies on the materiality of literacy. Lawrence Erlbaum Associates.

Holmes, W., Persson, J., Chounta, I. A., Wasson, B., & Dimitrova, V. (2022). Artificial intelligence and education: A critical view through the lens of human rights, democracy and the rule of law. Council of Europe.

Latour, B. (2004). Why has critique run out of steam? From matters of fact to matters of concern. Critical Inquiry, 30(2), 225-248.

Light, A. (2021). Collaborative speculation: Anticipation, inclusion and designing counterfactual futures for appropriation. Futures, 134, 102855.

Pink, S. (2022). Methods for Researching Automated Futures. Qualitative Inquiry, 10778004221096845.

Rahm, L. (2019). Educational imaginaries: A genealogy of the digital citizen. Linköping University Electronic Press.

UNESCO. (2021). Reimagining our futures together: A new social contract for education. UNESCO.

Varela, F. J. (1989). Connaître: Les sciences cognitives, tendances et perspectives. Paris: Seuil.


[1] By emerging technologies, we refer to technologies that are under technical development and deployment in society.
