AI Transparency and Consumer Trust
How much does a consumer need to understand artificial intelligence (AI) in order to trust it, whether in e-commerce, in an insurance company's application process, or in a home voice assistant? And how transparent does AI need to be to consumers, companies, and supervisory authorities?
These are a few of the questions that will be studied in a project led by Stefan Larsson at Lund University.
Consumers are increasingly interacting with AI and autonomous systems in their everyday lives, through recommendation systems, automated decision-making, and voice and facial recognition. These technologies offer many benefits and great possibilities for individuals, service developers, traders, and society as a whole. At the same time, consumer trust in these technologies, and their reliability, remain a key hurdle in the development of AI.
The research group will mainly study how AI is regulated in the consumer market, consumers' attitudes towards and understanding of AI, and how AI processes can be made more transparent, combining social science, legal, and technological perspectives.
Start: 1 January 2020
End: 31 December 2024
MMW
Keywords
artificial intelligence, consumer trust, transparency
Universities and institutes
Lund University
Project members
Stefan Larsson
Associate Professor
Lund University
Prahalad Kashyap Haresamudram
PhD student
Lund University
Katarzyna Söderlund
PhD student
Lund University
Laetitia Tanqueray
PhD student
Lund University