BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//WASP-HS - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://wasp-hs.org
X-WR-CALDESC:Events for WASP-HS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Stockholm:20220502T110000
DTEND;TZID=Europe/Stockholm:20220502T120000
DTSTAMP:20260405T145518Z
CREATED:20240419T091939Z
LAST-MODIFIED:20240419T091939Z
UID:19221-1651489200-1651492800@wasp-hs.org
SUMMARY:Deception Aware Autonomous Systems: Modelling Deception
DESCRIPTION:Computational deception refers to the capacity of autonomous systems to engage in interactions where (human or software) agents may be manipulated by hidden information or led to believe false information. Designing and engineering deceptive machines for detecting or strategically planning deception is a challenging task from conceptual\, engineering\, and ethical perspectives\, and comes with a range of societal trust-related concerns. Nevertheless\, deception is a fundamental aspect of human interaction. Software agents may benefit from being aware of deception\, with ways of detecting as well as strategically planning it\, in order to interact empathically and in a personalized way with humans or other systems. However\, we need to make sure that autonomous systems are transparent about their actions. In this seminar series\, we will explore the fundamentals of computational deception\, look at its technical challenges\, and discuss the relation of computational deception to the increasing demand for transparency of autonomous systems.\nThe WASP-HS Research Seminars are intended to present and discuss ongoing research on a broad range of exciting topics of relevance for WASP-HS. Seminars are held online once a month and organised in series of 3-4 seminars with a common theme. WASP-HS researchers and invited national and international leading scholars present research results\, ongoing research\, or visions for future directions\, followed by an open discussion.\nThis spring the series Deception Aware Autonomous Systems runs over three seminars. Deception Aware Autonomous Systems – Modelling Deception is the second of the three\; see all seminars below.\nProgram\nPlease note that the whole event takes place online via Zoom and is held in English.\n21 April\, 15:00-16:00\nDeception and Trustworthy AI: The Wrong Thing for the Right Reasons?\nSpeaker: Peta Masters\, Research Associate in Trustworthy Autonomous Systems\, King’s College London\nAfter five years working on the dark side\, endeavouring to develop deliberately deceptive AI – with ambitions towards fully autonomous deceptive systems – I have lately switched to what is apparently the side of the angels: working with the UK Research Institute’s Trustworthy Autonomous Systems Hub. Our brief at the TAS Hub is to develop socially beneficial autonomous systems that are “trustworthy in principle and trusted in practice”. But where my colleagues primarily see the benefits of desirable-sounding attributes such as explainability\, reliability\, and competence\, and enthusiastically uncover the various (and sometimes unexpected) features that contribute towards engendering human trust\, unsurprisingly\, given my background\, I see some pitfalls. It is these potential pitfalls – the unintended consequences of trusted and trustworthy AS development – that form the body of this talk.\n12 May\, 11:00-12:00\nModelling Deception\nSpeaker: Stefan Sarkadi\, Associate Researcher at INRIA (France) and Postdoctoral Researcher at King’s College London (UK)\nHow do we model deception using AI techniques? Why should we do it? And why should we do it in one way rather than another? In this presentation I aim to discuss these questions based on a short summary of my PhD thesis. My PhD thesis is entitled “Deception” and is the first full computational treatment in Artificial Intelligence (AI) of how to create machines able to deceive.\n9 June\, 09:00-10:00\nStrategic Deception\nSpeaker: Chiaki Sakama\, Professor at the Department of Systems Engineering\, Wakayama University\nDeception is a part of human nature and a topic of interest in philosophy\, psychology\, and AI. In this talk\, we first overview the definition of deception in the philosophical literature and distinguish it from the act of lying. We next introduce a logical account of deception and illustrate different types of deception that happen in everyday life. Finally\, we address a strategic use of deception in debate games\, where a player may provide false or inaccurate arguments as a tactic to win the game.\nChair\nAndreas Brännström\, PhD student in Computing Science\, Umeå University.\nRegistration\nRegistration is closed. All registered participants will receive an e-mail with more information closer to the event.
URL:https://wasp-hs.org/event3/deception-aware-autonomous-systems-modelling-deception/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Stockholm:20220519T140000
DTEND;TZID=Europe/Stockholm:20220519T160000
DTSTAMP:20260405T145518Z
CREATED:20240418T094110Z
LAST-MODIFIED:20240418T094110Z
UID:19215-1652968800-1652976000@wasp-hs.org
SUMMARY:Challenges and Opportunities of Regulating AI
DESCRIPTION:WASP-HS Community Reference Meetings (CRMs) are dedicated forums for public and private organizations in Sweden to learn about challenges and questions of interest to them\, and for WASP-HS to share recent research developments within the program in order to identify opportunities for collaboration in different sectors. This particular Community Reference Meeting concerns the challenges and opportunities of regulating AI.\nThe development of regulatory frameworks to govern the development\, design\, and application of artificial intelligence is currently an important topic. In April 2021\, the European Commission published a proposal\, the AI Act (AIA)\, for regulating the use of AI systems and services in the Union market. This proposal puts forward a regulatory vision based on European standards on human rights\, democracy\, and the rule of law. However\, the effects of EU digital regulations usually transcend its confines. An example is the General Data Protection Regulation (GDPR)\, which rapidly became a world standard. The extraterritorial scope of the AIA should be analysed in the face of other AI governance models currently under development. The AIA adopts a risk-based approach that bans certain technologies\, proposes strict regulations for “high risk” ones\, and imposes stringent transparency criteria for others. If adopted\, the AIA will undoubtedly have a significant impact in the EU and beyond. A crucial question is whether we already have the technology to comply with the proposed regulation\, and to what extent its requirements can be enforced.\nAt this CRM\, we will analyse how regulation will shape the AI technologies of the future and examine the interplay between national policies and the work of other organisations\, bringing together input and discussions from multidisciplinary stakeholders.\nProgram and Roundtables\nPlease note that the whole event takes place online via Zoom and is held in English.\n14:00 – Introduction by Virginia Dignum\, WASP-HS Program Director and Professor in Responsible AI at Umeå University\n14:10-14:30 – Keynote and Q&A\nKeynote title: Challenges and Opportunities of Regulating AI\nKeynote: Catelijne Muller\, President and Co-Founder of ALLAI\n14:30-15:30 – Roundtable discussions\nThe Participation Paradox in the Politics of AI\nChair: Michael Strange\, Department of Global Political Studies\, Malmö University\nCo-Chair: Jason Tucker\, Department of Global Political Studies\, Malmö University\nAbstract: A core foundation of democracy is the inclusion of voices that are otherwise excluded by power asymmetries in society. Yet\, despite being pivotal to the present and future shape of human society\, the development of artificial intelligence systems is often said to under-represent key demographics. First\, then\, how can we increase participation in the development and implementation of AI systems? Second\, what happens to the concept of ‘participation’ in the era of the big data sets that feed AI? For example\, widening big data sets to include the life experiences of those on the margins of society seems to promise a means to make AI more inclusive and reduce bias. The interest of big firms in widening data sets could be read as widening societal participation. Yet can we also participate in the development of AI if we refuse access to our data? And how can we ensure that participation is not turned into a merely passive act of being monitored? The roundtable discussion will consider these questions.\nRegulating the Use of Algorithms in Public Decision-Making\nChair: Sandra Friberg\, Department of Law\, Uppsala University\nCo-Chair: Yulia Razmetaeva\, Head of the Center for Law\, Ethics and Digital Technologies at Yaroslav Mudryi National Law University\nAbstract: Algorithms are deciding what taxes you are to pay\, whether you will receive social security\, and even whether you are entitled to Swedish citizenship. Automated decision-making is becoming more common and might eventually replace most human decision-making in public administrative agencies. There are today several types of AI systems in play\, each of which poses different challenges and raises various legal and philosophical questions. You might\, for instance\, take as a starting point the following three types of AI systems: simple algorithms\, autonomous artificial systems\, and hypothetical systems based on strong AI. Who should be responsible for incorrect decisions taken by an algorithm in these different types of AI systems\, and why? Is there a need for new regulation\, and how should regulatory challenges be met? Can legislators look to the proposal for an AI Act for guidance on regulatory techniques and scope?\nThe AI Act: Comprehensive\, but is it Future-Proof?\nChair: Cecilia Magnusson Sjöberg\, Professor and Subject Director of Law and Information Technology\, Stockholm University\nCo-Chair: Liane Colonna\, Assistant Professor of Law and Information Technology\, Stockholm University\nAbstract: The regulation of AI has become a fiercely debated policy and academic subject. There have been intense discussions among many different stakeholders about whether AI needs specific regulation and\, if so\, what this regulation should look like. For example\, some have contended that existing legal frameworks are sufficient to safeguard individuals and society from the potential adverse effects of AI systems\, while others have argued that regulation is necessary but should take place at the Member State level rather than at the regional or international level. After initially adopting a soft-law approach\, in April 2021 the Commission put forward a legislative proposal to regulate AI with binding legal rules. The existence of this proposal reflects a consensus that binding legal regulation of AI is required within the EU\, but many controversial and thorny issues remain\, as well as complex interests that must be balanced. At this roundtable\, we invite participants to discuss the proposal\, particularly from the perspective of fundamental rights as well as that of innovation and research. Potential questions to be explored include: Is the definition of AI comprehensive\, future-proof\, and legally secure? How do we achieve consistency in the myriad of laws applicable to AI? How should we understand the risk categories and actually calculate risk? What is the role of standardization\, and how will this impact the law? Do the restrictions on biometric data go far enough? What about generalized AI? When it comes to governance and oversight\, who is doing what\, when\, and at what level? Does the proposal take an overly technocratic approach to fundamental rights?\n15:30-16:00 – Reflections from the roundtables\nRegistration\nRegistration is closed. All registered participants will receive an e-mail with more information closer to the event.
URL:https://wasp-hs.org/event3/challenges-and-opportunities-of-regulating-ai/
END:VEVENT
END:VCALENDAR