Doctoral position – Ethics from theory to robot implementations

Lund University

Application deadline:
1 March, 2020


As robots are increasingly used in domestic, automotive, healthcare, and military settings, safety measures need to be put in place to ensure that robots are not dangerous to humans. Ideally, they should know when they are doing something wrong. One solution often suggested is something akin to Asimov’s robot laws, but these are problematic as a basis for ethical robots since they require that the robot has a full understanding of the rules and their consequences, as well as perfect reasoning skills. Similar criticism can potentially be levelled at other systems of ethical rules.

The goal of the PhD project is to investigate how different ethical rule-sets can be used to control a robot in practical interaction with humans or other robots. The focus is on the consequences for the interaction between a small set of human or robotic agents. These consequences can potentially be misaligned with the consequences for society that motivate the rules themselves, and the potential tension between individual and societal consequences is an important area to study. Of particular interest is what happens when agents using different ethical standards interact or collaborate. A related question is under what conditions an agent behaving unethically can exploit the situation. This relates to the concept of evolutionarily stable strategies as studied in behavioural ecology.
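The connection to evolutionarily stable strategies can be illustrated with a toy payoff model in the style of the classic hawk–dove and prisoner's-dilemma games from behavioural ecology. This is a minimal sketch for intuition only (the payoff values and strategy names are hypothetical, not part of the project description): it shows how an exploitative strategy can earn more than a rule-following one in a mostly rule-following population, and hence invade it.

```python
# Hypothetical pairwise payoffs: a "coop" (rule-following) agent versus a
# "defect" (exploitative) agent. Values follow the usual dilemma structure:
# mutual cooperation beats mutual defection, but unilateral defection pays best.
PAYOFF = {
    ("coop", "coop"): 3,     # both follow the rules: mutual benefit
    ("coop", "defect"): 0,   # the exploiter free-rides on the rule-follower
    ("defect", "coop"): 5,
    ("defect", "defect"): 1, # mutual exploitation is worst overall
}

def average_payoff(strategy, population):
    """Expected payoff of `strategy` when paired uniformly with the population."""
    return sum(PAYOFF[(strategy, other)] for other in population) / len(population)

# A population of mostly rule-following agents with a few exploiters.
population = ["coop"] * 90 + ["defect"] * 10
print(average_payoff("coop", population))    # 2.7 — rule-followers earn less...
print(average_payoff("defect", population))  # 4.6 — ...so "defect" can invade
```

Under these payoffs, unconditional rule-following is not evolutionarily stable: a strategy is an ESS only if a small fraction of invaders cannot outperform it, which is exactly the kind of condition the project proposes to study for interacting ethical agents.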

The PhD project combines theoretical work with practical experiments.


The thesis work uses an interdisciplinary approach and consists of four main tasks: (1) systematic analysis of classical ethical theories from an algorithmic perspective — can they be translated into code that can be run on a robot? (2) analysis of the practical requirements on the robot's abilities to follow each of the different theories; (3) computer implementation of each of the theories as far as possible, targeting the sample scenarios that will be developed in the project; (4) experimental tests of the different ethical systems in human–robot and robot–robot interaction.

Depending on the preferences and skills of the PhD candidate, the thesis work can focus more on one or two of these tasks. The PhD candidate will have access to the Lund Cognitive Robotics Lab, with its infrastructure of humanoid robots and computing equipment, and will work closely with the other members of the project.

The PhD candidate is expected to take part in the activities of the WASP-HS research school.

Apply here for the position in practical philosophy with a focus on AI and ethics


The PhD positions are part of the Wallenberg AI, Autonomous Systems and Software Program on Humanities and Society (WASP-HS), which aims to realize excellent research and develop competence on the consequences and challenges of artificial intelligence and autonomous systems for the individual and for society. This 10-year program is initiated and generously funded by the Marianne and Marcus Wallenberg Foundation (MMW) with 660 million SEK. In addition, the program receives support from collaborating industry and from participating universities. Major goals are more than 10 new faculty positions and more than 70 new PhDs. For more information about the research and other activities conducted within WASP-HS please visit

The WASP-HS graduate school provides foundations, perspectives, and state-of-the-art knowledge in the different disciplines, taught by leading researchers in the field. Through an ambitious program of research visits, partner universities, and visiting lecturers, the graduate school actively supports the formation of a strong multidisciplinary and international professional network among PhD students, researchers, and practitioners in the field. It thus provides added value on top of the existing PhD programs at the partner universities, offering unique opportunities for students who are dedicated to achieving international research excellence with societal relevance.