Deception Aware Autonomous Systems

Computational deception refers to the capacity of autonomous systems to engage in interactions in which agents (human or software) may be manipulated through hidden information or led to believe false information. Designing and engineering machines that can detect or strategically plan deception is challenging from conceptual, engineering, and ethical perspectives, and raises a range of societal concerns related to trust. Nevertheless, deception is a fundamental aspect of human interaction. Software agents may benefit from being aware of deception, with ways of detecting it as well as strategically planning it, in order to interact empathically and in a personalized way with humans or other systems. However, we need to ensure that autonomous systems remain transparent about their actions. In this seminar series, we will explore the fundamentals of computational deception, examine its technical challenges, and discuss its relation to the increasing demand for transparency in autonomous systems.


Deception Aware Autonomous Systems and Trustworthy AI – 21 April, 15:00-16:00
Speaker: Peta Masters, Research Associate in Trustworthy Autonomous Systems, King’s College London

Deception Aware Autonomous Systems – Detecting Deception – 12 May, 11:00-12:00
Speaker: Stefan Sarkadi, Postdoctoral Fellow at Inria Sophia Antipolis

Deception Aware Autonomous Systems – Strategic Deception – 9 June, 09:00-10:00
Speaker: Chiaki Sakama, Professor at Department of Systems Engineering, Wakayama University


Andreas Brännström
PhD student in Computing Science, Umeå University