Jonas Ivarsson, Principal Investigator of the WASP-HS project Professional Trust and Autonomous Systems reports on a conception of, and experiments with, trust.
Just as we were getting ready to start our new research project, the pandemic hit, and the possibilities for doing our planned fieldwork in the medical domain were severely restricted. In this situation, we shifted our focus somewhat to explore the deeper conceptual underpinnings of trust.
The idea was that by addressing the foundational questions of how trust figures in interpersonal relations, it should be possible to better understand what happens when the position of the other is taken over by seemingly intelligent technology. By extension, we can also ask what it means to act in a landscape potentially permeated by fake or artificial actors. That is, in what sense is interaction with the artificial itself a mere simulacrum?
An antonym of trust is suspicion, and since one concept can inform the other, we have examined circumstances where suspicion arises as a natural phenomenon. One set of materials used for this investigation is a collection of two-party telephone conversations in which one of the speakers is (potentially) a bot. We have looked at the analyses made by the parties to the conversations, as well as secondary commentary on the same interactions, which has been made available online. These comments display classification practices (human/artificial, real/fake) based on the interactions’ identified production features: the details of talk such as pauses, hitches, and intonation.
These explorations have pointed us toward several philosophical debates on what it means to understand, and to Wittgenstein’s discussion of aspect perception.
In his Philosophical Investigations, Wittgenstein refers to the gestalt psychologist Jastrow and the Duck-Rabbit illusion: a picture that can be seen either as a duck or as a rabbit. The gestalt psychologists wanted to explain how such a shift comes about, but Wittgenstein took a different interest. He distinguished between the continuous seeing of a single aspect and the dawning of an aspect, when a second way of seeing suddenly becomes available. This shift from simply “seeing the rabbit” to a situation where the picture can be seen “as” one thing or another is very informative.
When we identify others, this work can be colored by suspicion, so that classification operates at this level of aspect perception. Am I talking to a human or a bot? With a suspicious mind, I will likely find evidence for my classification in the materials. To use the analogy of the picture-rabbit: the rabbit’s ears are there to be seen if I look for them. And in the case of the telephone conversations, commentators would find evidence of a robot speaking even when the conversation was between two humans. The mere possibility of the artificial can be enough for the classification to fail. On this view, trust could be seen as the continuous seeing in Wittgenstein’s distinction: we never even consider the possibility that the thing we perceive could be otherwise. It simply is.
To explore some of these matters further, I created a couple of short videos. The technology used here is known as synthetic media, sometimes referred to as deep fakes. I trained one system to generate a voice that sounds very much like mine, and, based on a few minutes of video, a second system was able to create realistic-looking footage. Both services are commercially available products, and they can be connected via a simple API. As a result, this technology can take any text as input and output a reasonably realistic video of me speaking those exact words.
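The pipeline can be sketched in a few lines of code. The sketch below is purely illustrative: the function names, data shapes, and "one frame per word" placeholder are my own stand-ins, not the actual commercial services or their real APIs, which in practice would be HTTP calls with authentication keys.

```python
# Illustrative sketch of the text -> voice -> video pipeline described above.
# All names and behaviors here are hypothetical placeholders for the two
# commercial services; they only model the composition, not real rendering.

from dataclasses import dataclass


@dataclass
class VoiceClip:
    text: str
    audio: bytes  # placeholder for synthesized speech data


@dataclass
class VideoClip:
    audio: bytes
    frames: int  # placeholder for rendered video frames


def synthesize_voice(text: str) -> VoiceClip:
    """Stand-in for a voice-cloning API trained on recordings of the speaker."""
    return VoiceClip(text=text, audio=f"<audio:{text}>".encode())


def synthesize_video(clip: VoiceClip) -> VideoClip:
    """Stand-in for a video-synthesis API trained on a few minutes of footage."""
    # One frame per word is an arbitrary placeholder for rendered output.
    return VideoClip(audio=clip.audio, frames=len(clip.text.split()))


def text_to_synthetic_video(text: str) -> VideoClip:
    """Chain the two services: any text in, a talking-head video out."""
    return synthesize_video(synthesize_voice(text))


video = text_to_synthetic_video("A short reflection on synthetic media")
```

The point the sketch makes is how little glue is needed: once each service exposes an endpoint, a single composed call turns arbitrary text into footage of a specific person saying it.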
But to what extent would this be believed? By sequencing the release of the two videos, it was possible to get some indication: there were no reports of doubt or suspicion about the first video. The aim was always to disclose the deception as soon as possible and to raise general awareness of what this technology can now accomplish. In the future, however, I suspect that all my videos will be met with skepticism. Perhaps that is the price I will have to pay for this experiment.
Video 1: “A short reflection on our need for video-based content and some punk rock!”
Video 2: “On Deep Fakes and Synthetic Longevity”