
Deception Aware Autonomous Systems: Strategic Deception

June 9, 2022 @ 9:00 am - 10:00 am

About

Computational deception refers to the capacity of autonomous systems to engage in interactions in which (human or software) agents may be manipulated by hidden information or led to believe false information. Designing and engineering deceptive machines, whether for detecting deception or for strategically planning it, is a challenging task from conceptual, engineering, and ethical perspectives, and raises a range of concerns about societal trust. Nevertheless, deception is a fundamental aspect of human interaction. Software agents may benefit from being aware of deception, and from having ways of detecting as well as strategically planning it, in order to interact empathically and in a personalized way with humans or other systems. However, we need to make sure that autonomous systems are transparent about their actions. In this seminar series, we will explore the fundamentals of computational deception, look at its technical challenges, and discuss the relation of computational deception to the increasing demand for transparency of autonomous systems.

The WASP-HS Research Seminars are intended to present and discuss ongoing research on a broad range of exciting topics of relevance for WASP-HS. Seminars are held online once a month and organised in a series of 3-4 seminars with a common theme. WASP-HS researchers and invited national and international leading scholars present research results, ongoing research, or visions for future directions, followed by an open discussion.

This spring, the series Deception Aware Autonomous Systems runs over three seminars. This seminar, Deception Aware Autonomous Systems: Strategic Deception, is the third of the three. See all seminars below.

Program

Please note that the whole event takes place online via Zoom and is held in English.

21 April, 15:00-16:00

Deception and Trustworthy AI: The Wrong Thing for the Right Reasons?

Speaker: Peta Masters, Research Associate in Trustworthy Autonomous Systems, King’s College London

After five years working on the dark side, endeavouring to develop deliberately deceptive AI – with ambitions towards fully autonomous deceptive systems – I have lately switched to what is apparently the side of the angels: working with the UK Research Institute’s Trustworthy Autonomous Systems Hub. Our brief at the TAS Hub is to develop socially beneficial autonomous systems that are “trustworthy in principle and trusted in practice”. But where my colleagues primarily see the benefits of desirable-sounding attributes such as explainability, reliability, competence… and enthusiastically uncover the various (and sometimes unexpected) features that contribute towards engendering human trust, unsurprisingly thanks to my background, I see some pitfalls. It is these potential pitfalls – the unintended consequences of trusted and trustworthy AS development – that form the body of this talk.

12 May, 11:00-12:00

Modelling Deception

Speaker: Stefan Sarkadi, Associate Researcher at INRIA (France) and Postdoctoral Researcher at King’s College London (UK)

How do we model deception using AI techniques? Why should we do it? And why should we do it in one way rather than another? In this presentation, I aim to discuss these questions based on a short summary of my PhD thesis, entitled “Deception”, which is the first full computational treatment in Artificial Intelligence (AI) of how to create machines able to deceive.

9 June, 09:00-10:00

Strategic Deception

Speaker: Chiaki Sakama, Professor at the Department of Systems Engineering, Wakayama University

Deception is a part of human nature and is a topic of interest in philosophy, psychology, and AI. In this talk, we first overview the definition of deception in the philosophical literature and distinguish it from the act of lying. We next introduce a logical account of deception and illustrate different types of deception that happen in everyday life. Finally, we address a strategic use of deception in debate games, where a player may provide false or inaccurate arguments as a tactic to win the game.

Chair

Andreas Brännström
PhD student in Computing Science, Umeå University.

Registration

Registration is closed. All registered participants will receive an e-mail with more information closer to the event.

More

Details

Date:
June 9, 2022
Time:
9:00 am - 10:00 am

Venue

Online via Zoom

Organizer

WASP-HS