Title

AI Transparency and Consumer Trust

About the project

How much does a consumer need to understand artificial intelligence (AI) in order to trust it in commerce, in an insurance company's application process, or in a home voice assistant? How transparent does AI need to be to consumers, companies, and supervisory authorities?

These are a few of the questions that will be studied in a project led by Stefan Larsson at Lund University.

Consumers are increasingly interacting with AI and autonomous systems in their everyday lives through recommendation systems, automated decision-making, and voice and facial recognition. These technologies offer many benefits and great possibilities for individuals, service developers, traders, and society as a whole. At the same time, consumer trust in, and the reliability of, these technologies remain a threshold in the development of AI.

The research group will mainly study how AI is regulated in the consumer market, consumers' attitudes towards and understanding of AI, and how AI processes can be made more transparent, from a combined social-science, legal, and technological perspective.


Duration

Start: 1 January 2020
End: 31 December 2024

Project type

MMW

Keywords

artificial intelligence, consumer trust, transparency

Universities and institutes

Lund University

Project members

Stefan Larsson

Associate Professor

Lund University

Prahalad Kashyap Haresamudram

PhD student

Lund University

Katarzyna Söderlund

PhD student

Lund University

Laetitia Tanqueray

PhD student

Lund University