Challenges and Opportunities of Regulating AI
WASP-HS Community Reference Meetings (CRMs) are dedicated to giving public and private organizations in Sweden an opportunity to raise challenges and questions of interest to them, and to letting WASP-HS share recent research developments within the program, in order to identify opportunities for collaboration across sectors. This particular Community Reference Meeting concerns the challenges and opportunities of regulating AI.
The development of regulatory frameworks to govern the development, design and application of artificial intelligence is currently an important topic. In April 2021, the European Commission published a proposal, the AI Act (AIA), for regulating the use of AI systems and services in the Union market. This proposal puts forward a regulatory vision based on European standards of human rights, democracy and the rule of law. However, the effects of EU digital regulations usually transcend the Union's borders: the General Data Protection Regulation (GDPR), for example, rapidly became a world standard. The extraterritorial reach of the AIA should therefore be analysed against other AI governance models currently under development. The AIA adopts a risk-based approach that bans certain technologies, proposes strict requirements for “high-risk” ones, and imposes stringent transparency obligations on others. If adopted, the AIA will undoubtedly have a significant impact in the EU and beyond. A crucial question is whether the technology to comply with the proposed regulation already exists, and to what extent the regulation's requirements can be enforced.
At this CRM, we will analyse how regulation will shape the AI technologies of the future and examine the interplay between national policies and the work of other organisations, bringing together input and discussion from multidisciplinary stakeholders.
The event will be held in English and will take place online via Zoom.
14:00 Introduction to WASP-HS
Virginia Dignum, WASP-HS Program Director
14:10 Introductory Keynote
Catelijne Muller, President and Co-Founder of ALLAI
14:30 Roundtable Discussions (in parallel)
The Participation Paradox in the Politics of AI
Chair: Michael Strange, Department of Global Political Studies, Malmö University
Co-Chair: Jason Tucker, Department of Global Political Studies, Malmö University
A core foundation of democracy is the inclusion of voices otherwise excluded by power asymmetries in society. Yet, despite being pivotal to the present and future shape of human society, the development of artificial intelligence systems is often said to under-represent key demographics. A first question, then, is how we can increase participation in the development and implementation of AI systems. A second is what happens to the concept of ‘participation’ in the era of the big data sets that feed AI. For example, widening big data sets to include the life experiences of those on the margins of society seems to promise an effective way to make AI more inclusive and reduce bias, and the interest of big firms in widening data sets could be read as widening societal participation. Yet can we also participate in the development of AI if we refuse access to our data? And how can we ensure that participation does not become a merely passive act of being monitored? The roundtable discussion will consider these questions.
Regulating the Use of Algorithms in Public Decision-Making
Chair: Sandra Friberg, Department of Law, Uppsala University
Co-Chair: Yulia Razmetaeva, Head of the Center for Law, Ethics and Digital Technologies at Yaroslav Mudryi National Law University
Algorithms are deciding what taxes you pay, whether you will receive social security, and even whether you are entitled to Swedish citizenship. Automated decision-making is becoming more common and might eventually replace most human decision-making in public administrative agencies. Several types of AI systems are in play today, each of which poses different challenges and raises various legal and philosophical questions. A starting point might be the following three types: simple algorithms, autonomous artificial systems, and hypothetical systems based on strong AI. Who should be responsible for incorrect decisions taken by an algorithm in each of these types of AI systems, and why? Is there a need for new regulation, and how should regulatory challenges be met? Can legislators look to the proposed AI Act for guidance on regulatory techniques and scope?
The AI Act: Comprehensive, but Is It Future-Proof?
Chair: Cecilia Magnusson Sjöberg, Professor and Subject Director of Law and Information Technology, Stockholm University
Co-Chair: Liane Colonna, Assistant Professor of Law and Information Technology, Stockholm University
The regulation of AI has become a fiercely debated policy and academic subject. Many different stakeholders have discussed intensely whether AI needs specific regulation and, if so, what that regulation should look like. Some have contended that existing legal frameworks are sufficient to safeguard individuals and society from the potential adverse effects of AI systems, while others have argued that regulation is necessary but should take place at the Member State level rather than at the regional or international level. After initially adopting a soft-law approach, the Commission put forward, in April 2021, a legislative proposal to regulate AI with binding legal rules. The existence of this proposal reflects a consensus that binding legal regulation of AI is required within the EU, but many controversial and thorny issues remain, as do complex interests that must be balanced. At this roundtable, we invite participants to discuss the proposal, particularly from the perspective of fundamental rights as well as from that of innovation and research. Potential questions to be explored include: Is the definition of AI comprehensive, future-proof and legally secure? How do we achieve consistency across the myriad of laws applicable to AI? How should we understand the risk categories and actually calculate risk? What is the role of standardization, and how will it affect the law? Do the restrictions on biometric data go far enough? What about generalized AI? When it comes to governance and oversight, who is doing what, when, and at what level? Does the proposal take an overly technocratic approach to fundamental rights?
15:30 Summary and Conclusions
16:00 End of the event
The registration closed on May 12.
For further questions regarding the event, please contact our event coordinator.