Gender fairness in (socio-legal) robotics
During a dinner in Stockholm at the very first workshop of the WASP-HS programme, Ginevra Castellano, a social roboticist, and I discussed how AI technologies may mirror human behavior and, ultimately, social structures. Focusing on existing social structures that may be discriminatory or skewed, we discussed the problematic sets of values that may influence design, whether through training data or through inclusivity in the design process.
We come from very different disciplinary backgrounds: Ginevra is a professor in intelligent interactive systems at Uppsala University, where she leads the Uppsala Social Robotics Lab, and I am a socio-legal scholar, lawyer, and associate professor in technology and social change at LTH, Lund University. This difference evidently provided fruitful ground for approaching challenges of an interdisciplinary character. In the fall of 2020, we developed a proposal for a pilot project on fairness in social robotics centering on gender, which is now running during 2021.
The Innovative Collaboration Project (ICP) is called Fairness in social robotics: gender as a case study for developing a multidisciplinary framework for social robotics and socio-legal studies, and is funded by WASP-HS with 100,000 SEK during 2021. The project also includes Laetitia Tanqueray as project assistant, who is writing a master's thesis in sociology of law on the topic of gender and social robotics.
• Stefan Larsson: AI transparency and consumer trust
• Ginevra Castellano: The ethics and social consequences of AI & caring robots. Learning trust, empathy and accountability