Authors

Yulia Razmetaeva, Anna-Sara Lind and Sandra Friberg
Yulia Razmetaeva, Researcher, Department of Theology, Uppsala University. Anna-Sara Lind, Professor of Public Law, Department of Law, Uppsala University. Sandra Friberg, Associate Professor, Department of Law, Uppsala University.

Better Safe Than Sorry: Reflections on the Artificial Intelligence Act

Published: March 4, 2024

On December 8, 2023, the European Parliament and the Council reached an agreement on the final compromise text[1] of the Artificial Intelligence Act (AIA). The European Parliament is expected to vote on the AIA on March 13, 2024.[2] The long-awaited and much-discussed AIA continues the EU governing bodies’ course of enacting legislation that is all-encompassing and comprehensive, as well as focused on existing and potential risks from technological developments. Although a ‘risk-based’ approach most often emerges in debates regarding the AIA and is presented as its hallmark[3], some existing acts, such as the General Data Protection Regulation[4], the Digital Services Act[5], and the Digital Markets Act[6], already reflect attempts to cope with the challenges of the digital era while trying to protect against future threats. In this brief comment, we would like to highlight the key points of this extensive text: first, terminology and classification; second, goals and focus; third, prohibitions; fourth, innovation support measures; and fifth, governance, monitoring and sanctions. We conclude by acknowledging the overall effort to harmonize regulation of the use of AI, considering the interests of innovators, industry and consumers alike, and to close gaps in regulation.

Terminology and Classification

The regulation applies to providers and deployers of AI systems, with a rather extraterritorial reach as regards their activities and outputs, as well as to importers and distributors, product manufacturers and authorised representatives of providers. The regulation also applies to affected persons if they are located in the EU.

Interestingly, and important to notice, matters relating to defense and scientific development are excluded from the regulation’s scope. The act shall not apply to AI systems if and insofar as they are placed on the market, put into service, or used, with or without modification, exclusively for military, defense or national security purposes. Nor shall it apply to AI systems specifically developed and put into service for the sole purpose of scientific research and development.

1. AI systems definition. The text contains a detailed list of terms and their definitions. The definition of artificial intelligence or, as specified in the compromise version of the text, ‘artificial intelligence system’, was long the most intriguing matter in the negotiations.

‘AI system’ is defined as ‘a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’. Autonomy, adaptiveness and impact (actual or potential) are the main points of this definition. It thus resembles other definitions of AI systems, such as the OECD definition[7] and that of the proposed convention on AI from the Council of Europe’s Committee on Artificial Intelligence (CAI).[8] Most likely, many aspects of this definition will require additional interpretation, for example ‘designed to operate’, ‘for explicit or implicit objectives’ and ‘can influence physical or virtual environments’. Yet, at least, the definition now allows us to distinguish AI systems from already existing and well-known software systems, programs and models.

2. High-risk AI systems. The AIA contains specific requirements for high-risk AI systems and obligations for operators of these systems. The definition of high-risk AI systems itself remains rather confusing, because its conditions of application are scattered throughout the text of the act and its annexes. In addition, there is a possibility of adding or modifying use cases of high-risk AI systems. On the other hand, we now have a list of the areas in which the use of AI systems is acknowledged to be high-risk.

3. General purpose AI models and systems. ‘General purpose AI system’ means an AI system based on a general purpose AI model, which, in turn, means ‘a model, including when trained with a large amount of data using self supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications’. Thus, self-supervision at scale, significant generality and the capability to perform a wide range of distinct tasks are key for these models (though these tasks are still set by human beings, albeit indirectly).

Providers of general purpose AI models bear a number of additional obligations and, if necessary, must cooperate with competent authorities, as well as introduce their own codes of practice. This horizontal dimension of commitment is in line with the general trend of the business and human rights movement. At the same time, the number of parallel standards and voluntary human rights commitments by AI development companies and deployers is growing rapidly, which does not add stability to the system.

Goals and Focus

The artificial intelligence systems market must function in accordance with the values of the European Union, maintaining a balance between protecting against the harmful effects of AI systems in the EU and still supporting innovation.

1. Health and safety first, then fundamental rights. Despite the stated commitment to the three fundamental values of the European legal order, namely democracy, the rule of law and human rights, it is health and safety that are brought to the fore of the AIA. The text of the act repeatedly notes that AI systems can pose risks to health and safety and could have an adverse impact on them, or on fundamental rights.

2. Due diligence regarding human rights. The AIA obliges deployers to conduct a fundamental rights impact assessment, both in general and in specific respects. For example, an examination of AI systems for possible biases that are likely to negatively impact fundamental rights or lead to discrimination is introduced. The act provides for a number of strict obligations regarding high-risk AI systems, as well as voluntary actions by providers and deployers that will ensure a presumption of conformity with legislative requirements. While the strict obligations are more in line with the precautionary principle, the voluntary ‘codes of practice’, developed by the industry in order to demonstrate compliance with certain obligations, may prove an insufficient and weak safeguard.

Prohibitions on Certain Models or Actions

Under certain conditions and consequences of use, the AIA prohibits systems that are likely to interfere excessively with autonomy and privacy, or to lead to algorithmic discrimination. This includes systems that use biometric categorisation and social scoring, subliminal techniques beyond a person’s consciousness, purposefully manipulative or deceptive techniques, and systems that exploit the vulnerabilities of a person or a specific group of persons. In other words, overly sensitive and overly individualized intervention is prohibited, especially when combined with vulnerability (accurate prediction of actions, recognition of emotions, individual predictive policing, broad determination of a person’s presence in a specific place at a certain time, etc.).

Real-time biometric identification is directly prohibited, with some exceptions applicable to law enforcement agencies under limited circumstances.[9]

Innovation Support Measures

The AI Act promotes so-called regulatory sandboxes and real-world testing, established by national authorities, so that innovative AI can be developed and trained before being placed on the market.

In order to support the actors that will be obliged to comply with the new legislation, the Commission has initiated the ‘AI Pact’: a voluntary gathering of EU and non-EU companies where best practices can be exchanged and which aims at a greater understanding of the objectives of the AI Act. The ambition is to foster early implementation of the measures foreseen by the AI Act.[10]

Governance, Monitoring and Sanctions

The compromise text proposes a more centralized governance structure and new governing bodies. In particular, the AI Office is established as the centre of AI expertise across the EU[11] and the main body supporting the proper implementation of the AIA. In line with the preventive scope of the AIA, hefty sanctions are introduced to promote compliance, with the goal of preventing AI-related damage from occurring. Non-compliance with the rules can lead to fines ranging from 35 million euro or 7% of global annual turnover down to 7.5 million euro or 1.5% of turnover, depending on the infringement and the size of the company. In light of this, one can understand the need for regulatory sandboxes and other supportive measures (e.g. the AI Pact mentioned above) directed towards the companies that will be affected by the act.
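To make the arithmetic behind these caps concrete, the following is a minimal sketch in Python of how the two thresholds interact for a given undertaking. It assumes the ‘whichever is higher’ rule for companies and the ‘whichever is lower’ rule for SMEs that appear in the compromise text; the turnover figure and the helper name fine_cap are ours and purely illustrative, not part of the act.

```python
# Minimal sketch of the AIA fine caps (illustrative, not legal advice).
# Assumptions: caps combine as "whichever is higher" for companies and
# "whichever is lower" for SMEs, per our reading of the compromise text.

def fine_cap(fixed_cap_eur: float, turnover_fraction: float,
             global_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine for one infringement category, in euro."""
    pct_cap = global_turnover_eur * turnover_fraction
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Caps per infringement category cited in the text (EUR, fraction of turnover):
PROHIBITED_PRACTICES = (35_000_000, 0.07)   # most serious infringements
INCORRECT_INFORMATION = (7_500_000, 0.015)  # least serious category

# Example: an undertaking with EUR 1 billion in global annual turnover.
print(fine_cap(*PROHIBITED_PRACTICES, 1_000_000_000))   # 70000000.0 (7% > EUR 35m)
print(fine_cap(*INCORRECT_INFORMATION, 1_000_000_000))  # 15000000.0 (1.5% > EUR 7.5m)
```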

Concluding Remarks

The compromise text shows significant progress in trying to harmonize the regulation of AI, to do so in an effective way, and not to allow considerations of economic benefit to prevail over considerations of protecting fundamental values. At the same time, although advertised as an ambitious attempt to properly regulate AI, the act may turn out to be “overripe” by the time it is put into effect, both because of rapid technological development and because certain EU authorities, such as the European Commission, have clearly shown determination in AI policymaking and will probably fill the gaps with other regulatory and governance tools. The AIA is, for instance, already complemented by a proposed AI Liability Directive to harmonise certain national non-contractual civil liability rules, which has a compensatory scope (i.e. providing a possibility for those who have suffered AI-related harm to be compensated).[12] Alongside this, the Commission has proposed a new directive on liability for defective products,[13] which would revise the Product Liability Directive in force since 1985, and more initiatives at different levels are likely to follow.

Bearing in mind the existence of such alternative tools and initiatives, coupled with the overlapping approach to the obligations of providers and deployers and the complexity of the AIA itself, the implementation of the act may prove extremely challenging. It will take a serious system built around the AIA to implement it properly, and the difficulties will begin with the need to interpret some questionable parts of the definition of AI systems. Yet we are at such a point in the development of AI that it is better to introduce a system that has flaws and correct them than to waste precious time trying to achieve a complete and perfect one.

References

[1]     Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), updated according to the provisional political agreement reached at the fifth trilogue between 6 and 8 December 2023. https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf

[2]     EU: European Parliament AI Act plenary vote moved to March 2024. https://www.dataguidance.com/news/eu-european-parliament-ai-act-plenary-vote-moved-march

[3]     See, e.g., Chamberlain, J. (2023) The Risk-Based Approach of the European Union’s Proposed Artificial Intelligence Regulation: Some Comments from a Tort Law Perspective. European Journal of Risk Regulation, 14(1), 1-13. https://doi.org/10.1017/err.2022.38; Paul, R. (2023) European artificial intelligence “trusted throughout the world”: Risk-based regulation and the fashioning of a competitive common AI market. Regulation & Governance. https://doi.org/10.1111/rego.12563; Mahler, T. (2022) Between Risk Management and Proportionality: The Risk-Based Approach in the EU’s Artificial Intelligence Act Proposal. Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence, 247–270. https://doi.org/10.53292/208f5901.38a67238

[4]     Regulation (EU) 2016/679 of the European Parliament and of the Council of 27.04.2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[5]     Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19.10.2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act).

[6]     Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14.09.2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act).

[7]     The OECD definition reads: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

[8]     The OECD definition, in the “Recommendation of the Council on Artificial Intelligence” (2019), can be found at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 and the proposed Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (CAI-Convention) is available at https://rm.coe.int/cai-2023-01-revised-zero-draft-framework-convention-public/1680aa193f. The AIA definition bears great resemblance to the OECD definition and is more or less identical but for the word order, as can be seen in the previous footnote.

[9]     The exceptions are exhaustively listed and narrowly defined situations, e.g. searching for missing persons, preventing a specific imminent terrorist threat, or locating or identifying a perpetrator or suspect of a serious criminal offence. Such usage is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

[10]     European AI Office. AI Pact | Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/ai-pact

[11]     European AI Office. Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/ai-office

[12]     Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM/2022/496 final.

[13]     Proposal for a Directive of the European Parliament and of the Council on liability for defective products (revised PLD), COM/2022/495 final.

More About the Authors

Yulia Razmetaeva

yulia.razmetaeva@crs.uu.se

Researcher, Department of Theology, Thunbergsvägen 3B, 752 38, Uppsala, Sweden; Associate Professor at the Department of Human Rights and Legal Methodology, and Head of the Center for Law, Ethics and Digital Technologies, Yaroslav Mudryi National Law University, Pushkinska street 77, 61024, Kharkiv, Ukraine. Yulia Razmetaeva is a member of the working groups on Artificial Intelligence, Democracy, and Human Dignity (2020–2024) and The Artificial Public Servant (2022–2026), projects supported by the Wallenberg Foundations and affiliated with WASP-HS.

Anna-Sara Lind

Anna-Sara.Lind@jur.uu.se

Professor of Public Law, Department of Law, Uppsala University, Trädgårdsgatan 20, 753 09, Uppsala, Sweden. Anna-Sara Lind is Principal Investigator in Artificial Intelligence, Democracy, and Human Dignity (2020–2024), a project supported by the Wallenberg Foundations and affiliated with WASP-HS.

Sandra Friberg

Sandra.Friberg@jur.uu.se

Senior Lecturer/Associate Professor, Department of Law, Trädgårdsgatan 1, 753 09, Uppsala, Sweden. Sandra Friberg is Principal Investigator in The Artificial Public Servant (2022–2026), a project supported by the Wallenberg Foundations and affiliated with WASP-HS.
