AI Transparency and Consumer Trust
How much does a consumer need to understand artificial intelligence (AI) in order to trust it — in commerce, in an insurance company's application process, or in their home voice assistant? How transparent does AI need to be to consumers, companies, and supervisory authorities?
These are a few of the questions that will be studied in a project led by Stefan Larsson at Lund University.
Consumers are increasingly interacting with AI and autonomous systems in their everyday lives through recommendation systems, automated decision-making, and voice and facial recognition. These technologies offer many benefits and great possibilities for individuals, service developers, traders, and society as a whole. At the same time, consumer trust and the reliability of these technologies remain a threshold issue in the development of AI.
The research group will mainly study how AI is regulated in the consumer market, consumers' attitudes toward and understanding of AI, and how AI processes can be made more transparent, working from a combined social-science, legal, and technological perspective.
Start: 1 January 2020
End: 31 December 2024
Keywords: artificial intelligence, consumer trust, transparency
Prahalad Kashyap Haresamudram