Katherine Harrison, Senior Lecturer at Linköping University, is project leader of the WASP-HS project “The ethics and social consequences of AI & caring robots”, together with Ericka Johnson, Professor at Linköping University.
What would you do if a robot was mean to you? This is the question I posed over a family dinner one evening. “Tell it to stop or run away,” my four-year-old suggested. “Turn it off,” was my partner’s response. The relations we have with our machines are changing as robots enter our daily lives. We and our children will be expected to interact with robots as they perform different kinds of care for and with us at different life stages. What will that do to how we – and how the robots – think of care?
The differing responses over the family dinner table hint at why it is important to think carefully about how we envisage our relations with robots. While an adult may simply dismiss the robot’s bad manners by flicking a switch, a child may not understand or feel that a similar response is possible. In close interactions between humans and robots, where care and companionship are the goal, issues of accountability, trust and empathy become key.
Successful interactions between humans and companion robots require a relationship to be established, a rapport in which both human and robot learn, adapt, and grow towards one another. Designing robots to act in a more “human-like” way is an important piece of the puzzle; gestures and movements that suggest emotional responses are a powerful way to create rapport and trust between human and robot.
However, the technical and social sciences differ clearly in how they understand what emotions are and how they work. Do you think there are “universal” emotions, sufficiently generalizable and recognizable to be programmable? Or are emotional expressions specific to cultural contexts, age, gender, and socioeconomic status? Can all emotions be easily named and categorized? Or might some lie outside the scope of language, or be difficult to verbalize?
These differences in understanding are highly relevant to social robotics, particularly since technical researchers see the ability to build rapport as key to acceptance of and engagement with the robot.
In our project, we see these disciplinary differences as an important starting point for developing companion robots. Working together with the Uppsala Social Robotics Lab, the Center for Applied Autonomous Sensor Systems and FurHat Robotics, we aim to build an interdisciplinary dialogue in order to engage with the exciting challenges of social robotics. We think that this kind of interdisciplinary collaboration represents the best chance to integrate insights from the humanities and social sciences, ensuring that the design and development of such robots is as accessible and equitable as possible.
We want to move from thinking of AI, machine learning and their materialized selves (robots) as interesting objects of enquiry towards thinking of these as opportunities for developing sociotechnical responses to complex challenges.
Photo: Charlotte Perhammar/Linköping University.