Computational deception refers to the capacity of autonomous systems to engage in interactions where agents (human or software) may be manipulated through hidden information or led to believe false information. Designing and engineering machines that can detect or strategically plan deception is challenging from conceptual, engineering, and ethical perspectives, and it raises a range of concerns related to societal trust. Nevertheless, deception is a fundamental aspect of human interaction. Software agents may benefit from being aware of deception, and from having ways to detect as well as strategically plan it, in order to interact empathically and in a personalized way with humans or other systems. However, we need to make sure that autonomous systems are transparent about their actions. In this seminar series, we will explore the fundamentals of computational deception, look at its technical challenges, and discuss the relation of computational deception to the increasing demand for transparency of autonomous systems.
The WASP-HS Research Seminars are intended to present and discuss ongoing research on a broad range of exciting topics relevant to WASP-HS. Seminars are held online once a month and organized in series of 3-4 seminars with a common theme. WASP-HS researchers and invited national and international leading scholars present research results, ongoing work, or visions for future directions, followed by an open discussion.
This spring, the series Deception Aware Autonomous Systems runs over three seminars. This seminar, Deception and Trustworthy AI – The Wrong Thing for the Right Reasons?, is the first of the three; see all seminars below.