Jonas Ivarsson, University of Gothenburg, project leader in WASP-HS for the project Professional trust and autonomous systems, will investigate how concerns and fears relating to new technology can affect the development of society.
What’s your view of being a part of WASP-HS?
At the most basic level, it will enable me to work on issues that I take to be important in relation to the future of trust and competence. But more broadly, the programmatic nature of WASP-HS will offer a rich environment in which to develop and discuss new ideas. I’m looking forward to more collaborations and activities across the different projects.
What are your expectations of WASP-HS?
One of my hopes for this initiative is that an extended group of non-technical researchers will now get the opportunity to work with and learn about the technical foundations of AI. I believe we need to acquire this deep understanding of AI in order to have a truly sound and critical discussion of its societal consequences. As researchers, we have to become insiders to make our voices count, and the kind of funding and focus provided by WASP-HS offers an excellent opportunity for this.
What do you want your own project to lead to?
We hope to better understand how trust relates to the transformative effects that artificial intelligence and autonomous systems can have on society. Here, we seek to understand the unique requirements for trust to form and grow from the inside, within the processes emerging around the deployment of new systems.
We will study how instances of distrust emerge, how risks are managed, and how concerns about accountability are discussed and handled. The aim of the project is to develop accountable research designs that will guide future research in areas where machine learning is implicated in professional practices.
We are aware that the questions answered by this research may provide only a small piece of the much larger issue of how to manage the associated technological transformations in a stable manner. But perhaps this work could identify some key practices that counteract feelings of uncertainty and incongruity, practices that bring order to a changed landscape. Such understandings could then inform a wide range of settings and situations.
PhD student positions related to the project:
Fair, accountable, and transparent methods for trustworthy machines
Trust and competence in the age of artificial intelligence