Amanda Lindkvist, Research Assistant at the Department of Management and Engineering, Linköping University, reports about the roundtable “The good and the bad of FinTech: when and for whom is it helpful and when and for whom is it detrimental?” at the WASP-HS FinTech Community Reference Meeting which took place on October 5th, 2021.
As AI-based FinTech becomes more widespread, it is increasingly important for different stakeholders to consider how to set up systems in a way that minimizes potential harm to users while optimizing the benefits for them. This was the basis of discussion for the WASP-HS roundtable “The good and the bad of FinTech: when and for whom is it helpful and when and for whom is it detrimental?”, chaired by Kinga Barrafrem and Gustav Tinghög.
The participants in the roundtable discussion came from varied backgrounds – some from industry (working directly with developing and implementing FinTech solutions) and others from basic research (with insights into psychological and behavioral phenomena related to how users interact with FinTech solutions). As the secretary of the roundtable discussion, I was personally very intrigued to see these different perspectives meet, and to hear both what the major challenges are and what ideas for solutions and improvement arose.
The meeting started out with a brief overview of potential topics and questions to cover in the discussion, presented by Kinga Barrafrem. Three perspectives from which to consider the issues were proposed: the effects on the household economy, the broader ethical perspective of implementing FinTech solutions, and the psychological responses of users of these technologies. In addition, the discussion topic was linked to different decision-making strategies outlined by research within behavioral economics. To narrow down the discussion, given the many different FinTech solutions out there, Kinga stated that the focus would mainly be on algorithmic decision-guidance techniques.
A recurring theme within the discussion was how to set up FinTech systems in a way that optimizes the “right” values. How does one set up a system so that it benefits users as much as possible – and whose responsibility is it to make these decisions? Who has the right to define what a good financial decision is? Given how vast these questions are, there seemed to be general agreement among the roundtable participants that there are no clear answers. However, one interesting point that arose was that companies implementing AI technology to guide decisions need to account for individual users’ personal views of their own financial situation (and update this parameter over time). A related comment was that the industry will have to prioritize user experience over profit to make these AI technologies a positive force.
Another topic that was discussed was how (and whether) to bridge the “privacy-personalization gap”. How does one reconcile individuals’ apprehension about sharing personal data with companies with the fact that personal data is essential to building useful algorithmic decision-guidance software? Within this topic, increased transparency about what the data will be used for was mentioned several times as a solution. However, some obstacles to increased transparency were brought up. One was how to be transparent without causing information overload for users of the technology. In relation to this, it was suggested that companies need to be more proactive in how they communicate their handling of personal data, and that FinTech companies need to clearly communicate the benefits of data sharing to the user – so that sharing data with these companies is not equated with sharing data with social media companies, where the user benefits far less from the company having access to large amounts of data.
The discussion of the “privacy-personalization gap” then led to the topic of financially compensating individuals for their data. It was suggested that this development within the market may reduce aversion to data sharing. However, participants also raised ethical concerns about creating a market for private data, and about how this might create perverse incentives that negatively affect the most financially vulnerable individuals.
I left the discussion with a lot of food for thought, relating both to how I, as a user of the technologies discussed during the session, should think about them, and to useful perspectives to incorporate into future research projects.