Winter Conference 2025
Winter Conference Workshop
Recoding Responsibility:
Rethinking Liability for Trust and Harm in AI Systems
Moderators: Samuel Carey, PhD student in Law at Stockholm University
Sarah de Heer, PhD student at the Faculty of Law, Lund University
Sonia Bastigkeit-Ericstam, PhD student in Civil Law at Stockholm University
Subhalagna Choudhury, PhD student at the Department of Business Studies at Uppsala University
In 1508, the citizens of Autun, France, brought a bizarre legal case: rats were charged with destroying the barley harvest. Bartholomew Chassenée, representing the rodents, argued as if his clients were legal persons endowed with legal rights. He contended that their absence from court was due to their wide dispersal across the countryside, and that the court therefore needed to go to greater lengths to notify them. Moreover, he demanded that the court provide an armed escort to guarantee the rats’ safety from potential predators.
Centuries later, similar questions about responsibility and protection arise, now in the context of Artificial Intelligence (AI). Legal frameworks, such as the proposed AI Liability Regime, strive to assign fault in an era where technologies are increasingly opaque, unpredictable, and distributed. Recently, a lawsuit was filed against the creators of an AI chatbot following the suicide of a teenage user. This tragic case underscores the pressing question: how should responsibility be understood and allocated when AI causes harm?
The question extends to numerous domains where AI is utilised. Who is liable when lethal autonomous weapons kill civilians? Who is responsible when surgical robots malfunction and cause harm? Who answers for financial losses when an AI system prioritises one value over another?
The concept of legal personhood, understood as a bundle of legal norms, has emerged as a potential framework for addressing these questions. Traditional liability models struggle to address causation that is distributed among developers, operators, and users. At the same time, public trust in AI depends on transparency, accountability, and enforceable safeguards. Could granting legal personhood to AI simplify liability, resolve regulatory ambiguities, and bolster trust, or would it merely shield human actors from accountability?
Goals of the workshop
In this workshop, we will explore the interconnected themes of trust, liability, and personhood – as contextualised within your projects – in the development and governance of “trustworthy AI.” Open to all academic disciplines, the session aims to foster interdisciplinary discussion and insight. Participants are encouraged to reflect on and challenge traditional legal models, and to consider alternative solutions that decentre human agency and acknowledge the distributed and networked nature of AI systems. The goal is to build a broad, interdisciplinary understanding of the legal complexities surrounding AI and to collaboratively examine paths toward fair and effective governance.
Submission details
Prospective participants are required to submit an abstract of no more than 250 words, outlining the concerns their work raises about personhood, trust, and liability. These concepts will be interpreted broadly, to encourage a diverse range of perspectives. References, if included, must fit within the word limit.
Abstracts should be e-mailed to sarah.de_heer@jur.lu.se no later than Friday 31 January 2025.