
Doing Research with AI: Changing Academic Practices and Doctoral Training

Published: February 17, 2026

What are appropriate ways to use artificial intelligence (AI) tools when carrying out research? How can supervisors and students set up a good conversation around expectations of AI tool use in thesis production? These questions were explored during the recent WASP-HS workshop “Critical AI in Higher Education”, and the researchers involved now welcome further engagement.

Have you ever used generative AI, such as CoPilot or ChatGPT, to help you write an abstract for a conference? Do you use AI tools to analyse your data? Or to search for literature for a scoping review? Are you cheerfully already doing all of the above, or cringing in horror at the idea? These kinds of questions about the use of AI tools (and particularly generative AI tools) are becoming increasingly common within the research community and often produce lively debates about the boundaries of “good” academic practice.

Engaging with these questions is critical for a community which aims “to advance understanding of how AI impacts humanity and society”, as WASP-HS does. However, in the time that the program has been running, we have also seen the roll-out of generative AI tools in unexpected and thought-provoking ways. This means that AI is no longer only our research object, but also a means of doing our very research. Or at least, it could be if we wanted it to be.

Talk, Trust, Time and Transparency

At a recent workshop at the WASP-HS Winter Conference, doctoral students and supervisors came together to discuss the challenges and limitations of using AI tools to carry out our research. Initial guidance on the topic and Lund University’s policies on the use of generative AI were presented and discussed in a panel with Elin Sporrong, Anna Foka, Rachel Forsyth and Eva Åkesson.

Rachel Forsyth and Eva Åkesson shared their experiences of gathering information and formulating a set of guidelines on the use of generative AI. They talked about the principles of Trust, Time and Transparency that guided their policy work, highlighting the importance of doctoral supervisors modelling the kinds of AI-savvy behaviours they would like to see from their students. They also noted that universities need to provide the support structures that make this possible.

Anna Foka and Elin Sporrong raised more practical challenges, talking about experiences of having texts revised or written with ChatGPT, the expectations of different disciplines, and the question of sustainability. Their reflections also highlighted the role of power: how much agency, for example, do doctoral students have to set the agenda when it comes to using AI tools?

The importance of talking (a fourth “T” to add to Trust, Time and Transparency) about AI tools – both with colleagues and peers as part of a learning dialogue, and between supervisors and students to set expectations – emerged clearly from the panel and fed into the small-group exercise in the second part of the workshop.

Responsibility, Openness, Risks, and Benefits

The group was divided into students and supervisors and – furnished with a set of examples of generative AI use from Rachel Forsyth and some prompts from the organisers – encouraged to discuss what a template for a conversation around the use of generative AI tools might look like. Discussions in the two groups were prompted by questions such as “When should a conversation between supervisor and student take place?”, “Who should start those conversations (the supervisor, the student, the Director of Graduate Studies)?” and “Where should these conversations be hosted: in formal organisational meetings at the Faculty/Department level, or rather informally in supervisory meetings?”. As the two groups reconvened to present the main points discussed, some interesting differences and similarities emerged. Students framed their conversations around aspects of responsibility and openness, as part of a shared effort to harness AI tools to develop a more critical stance. Supervisors instead focused their conversations on how to discuss the risks and benefits of AI, and which “academic” table would be best suited for sharing information with students.

Engage in the Conversation

Throughout the workshop it was pointed out that WASP-HS is well placed to connect conversations between universities and to compare the policies and guidelines being formulated. Meanwhile, many of those present reflected on the different disciplinary expectations around doing research, and how AI tools fit (or do not fit) into this picture. It was clear that the WASP-HS community is a place where cutting-edge expertise on AI can come together with epistemological and methodological concerns about carrying out research in an age where AI tools play an active role in that process.

We wish to continue the conversation on doing research with AI, and we warmly invite you to join us. To this end, we have created a mailing list: please email Katherine Harrison (katherine.harrison@liu.se) and Valentina Fantasia (valentina.fantasia@lucs.lu.se) to add your name.

Authors

Katherine Harrison and Valentina Fantasia
