Trust and transparency in AI systems

Published: January 9, 2020

Stefan Larsson, Lund University, is project leader for the WASP-HS project AI transparency and consumer trust. The project focuses on consumers – basically all of us, but in a market setting – and how we trust the AI systems we interact with or are exposed to, now and in the future.

What does the start of WASP-HS mean to you?

First, this research and this program are highly important and urgently needed. And their importance goes beyond any of the individual projects, including mine.

As a socio-legal scientist, I have followed and, to some extent, contributed to the growing insights in this multidisciplinary field – the ethical, social and regulatory concerns around AI and autonomous technologies, for example. But I have also witnessed the challenges of getting this type of social scientific and humanistic research properly funded. As AI, its technologies and methods, has increased in precision and utility, it has become more applied and implemented – democratized, one could say – and a great number of challenges have emerged alongside its potential benefits.

How we address these challenges, I would argue, will come to define how well we as a society manage to make use of these technologies and methods in the near future. And arguably, the ethical and normative concerns are relevant not only to the implementation of these technologies but also to their development, implying an intrinsic need for a multidisciplinary approach at the core, not only as an afterthought.

On a more personal level, the program gives me a chance to develop a multidisciplinary research group on new technologies and society. Together with co-PI Fredrik Heintz, a computer scientist at Linköping University, I want to build a multidisciplinary setting that bridges disciplinary understandings and combines the best and most relevant aspects of legal, social and technical knowledge on trust and transparency in AI.

What would you like to contribute to?

Our project focuses on consumers – basically all of us, but in a market setting – and how we trust the AI systems we interact with or are exposed to, now and in the future. It addresses transparency in particular: how to improve explainability in an everyday sense, and what types of explanations and understandings are appropriate and reasonable in order to create fair and accountable uses. It also covers contemporary governance of consumer markets, and thereby a legal perspective, something that not least the supervisory authorities increasingly have to address.

We want to contribute to regulatory development – we need good policy in these fields – particularly so that consumer markets become sufficiently transparent when it comes to data and automation. Furthermore, and beyond the consumer frame, I see transparency as a key value that may determine the possibilities for other values, such as accountability and the detection of unfair practices or outcomes. Transparency is never a comprehensive solution but a multifaceted concept that requires a delicate balancing of interests.

This is why I hope to contribute to knowledge that is relevant also for other applications, for example in health care, in addition to the specific consumer focus of this project.

More on the project: AI transparency and consumer trust
About Stefan Larsson, Lund University

Open positions related to the project AI transparency and consumer trust:

  1. Doctoral student project with social scientific focus on AI Transparency and Consumer Trust
  2. Doctoral student project with legal focus on AI Transparency and Consumer Trust

Image: Sara Arnald.
