European Union Artificial Intelligence Act

Introduction

The European Artificial Intelligence Regulation (AI Regulation) entered into force on 1 August 2024. It is the world's first comprehensive legal framework for artificial intelligence.

What is the background to the AI Regulation?

Artificial intelligence (AI) and its wide-ranging applications have transformed many areas of life, from creative activities such as writing poetry to practical tasks such as drafting contracts. As AI continues to advance, its role in our lives will only grow, significantly shaping our daily routines and interactions. However, the impact of AI is not without risk: its increasing prevalence has raised global concerns about privacy, intellectual property rights and counterfeiting.

What exactly is the purpose of this Regulation?

According to recital 1 of the AI Regulation, its purpose is to establish a harmonised legal framework for the development, placing on the market, putting into service and use of artificial intelligence (AI) systems in the European Union, in line with the values of the Union. The aim is to promote the uptake of trustworthy AI and to protect health, safety and fundamental rights, including democracy, the rule of law and environmental protection, thereby guarding against the harmful effects of AI systems in the Union. At the same time, the Regulation is intended to support innovation and to ensure the free movement of AI-based goods and services across borders; Member States may therefore not restrict the development, marketing and use of AI systems unless this Regulation expressly permits it.

What exactly does this Regulation regulate?

According to Art. 1 (2) of the AI Regulation, this Regulation lays down harmonising rules for the placing on the market, putting into service and use of AI systems, as well as for the placing on the market of general-purpose AI models. It also lays down certain prohibitions, specific requirements regarding high-risk AI systems and harmonising transparency rules for certain AI systems. Finally, the AI Regulation also lays down rules for market surveillance, governance and enforcement of market surveillance and defines measures to promote innovation with a particular focus on small and medium-sized enterprises, including start-ups.

  • What are AI systems?

= a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments (Art. 3 No. 1 of the AI Regulation).

  • What are general-purpose AI models?

= an AI model, including where such a model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way it is placed on the market, and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market (Art. 3 No. 63 of the AI Regulation).

What is the scope of this regulation?

The AI Regulation applies to both private and public providers and deployers of AI systems inside and outside the EU, provided that the AI system is placed on the market in the EU or its use affects people in the EU. The AI Regulation divides AI systems into different risk categories.

  • AI systems categorised as high-risk are subject to strict requirements.

= AI systems categorised as high-risk are defined by the classification rules for high-risk AI systems in Art. 6 of the AI Regulation.

  • AI systems with a limited risk must implement the transparency obligations of the AI Regulation.

= AI systems with a limited risk pose a lower risk because they are characterised by a very specific use and very specific functions. Examples include AI systems for creating 'deep fakes', i.e. AI for generating or manipulating image, video or audio material. For such systems, only the transparency obligations of the AI Regulation must be observed, for example by means of a watermark in the case of 'deep fakes'. This applies in particular to AI systems that interact directly with natural persons and is intended to protect users from deception or misunderstanding.

  • AI systems with a minimal risk are not subject to any special obligations.

= AI systems with a minimal risk are all AI systems that do not fall into any of the above categories. They are not subject to any specific obligations under the AI Regulation, so non-compliance carries no sanctions. Examples include AI components in video games and spam filters.

  • AI systems that are considered a threat to people's fundamental rights, i.e. pose an unacceptable risk, are banned altogether.
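The four risk tiers above can be summarised in a short sketch. This is purely illustrative: the legal classification under the AI Regulation is far more nuanced, and the example systems mapped below are taken from the text above as assumptions, not legal determinations.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the EU AI Regulation (illustrative only)."""
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "strict requirements (Art. 6)"
    LIMITED = "transparency obligations"
    MINIMAL = "no special obligations"


# Illustrative mapping of example systems to tiers, based on the
# examples given in the text above -- not a legal classification.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "deep-fake generator": RiskTier.LIMITED,
    "video-game AI": RiskTier.MINIMAL,
    "spam filter": RiskTier.MINIMAL,
}


def tier_for(system: str) -> RiskTier:
    """Look up the illustrative tier for a named example system."""
    return EXAMPLES[system]
```

The point of the sketch is only that every system falls into exactly one of four tiers, with obligations decreasing from prohibition to none.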

The ban on AI systems under the EU AI Regulation

Exceptions

It should be noted at the outset that the Regulation contains the following exceptions. The use of AI for national security, military or defence purposes is not covered by the ban (see Art. 2 (3) of the AI Regulation). Cooperation with authorities of third countries or international organisations for the purposes of law enforcement and judicial cooperation is also excluded, provided that appropriate safeguards exist. AI systems used for research and scientific purposes are likewise exempt under Art. 2 (6) of the AI Regulation. Finally, the ban does not apply to the purely private use of AI systems.

Prohibitions

Art. 5 of the EU AI Regulation contains a list of prohibited AI practices:

  • Manipulative or deceptive AI that leads people to make decisions they would not otherwise have made and that is at least reasonably likely to cause harm.

  • AI that exploits vulnerabilities (age, disability, social or economic situation) and significantly influences behaviour; here too, at least a sufficient probability of harm occurring is required.

  • AI used for biometric categorisation based on sensitive personal data such as race or political opinions; an exception applies in the context of criminal prosecution.

  • AI systems used for 'social scoring', i.e. the evaluation of social behaviour over a longer period of time.

  • AI for real-time remote biometric identification in public spaces, which is permitted only in connection with serious criminal offences or imminent danger.

  • AI for predicting crime without objective facts.

  • AI used to analyse emotions in the workplace or in educational institutions without medical or safety reasons.

Practical significance of these bans

As a result, the placing on the market, use and commercialisation of AI systems of this type are prohibited.

Outlook

It is to be expected, however, that the Regulation's abstract legal concepts, and in particular their precise scope, will be the subject of further debate and interpretation.

What provisions of the AI Regulation are particularly noteworthy?

Special transparency obligations also apply: AI systems such as chatbots, for example, must clearly inform their users that they are interacting with a machine.

What are the penalties for non-compliance with the AI Regulation?

Infringements of the prohibited AI practices mentioned above (Art. 5) may be punished with fines of up to EUR 35,000,000 or up to 7% of the infringer's total worldwide annual turnover, whichever is higher. Infringements of other obligations under the Regulation, including the data governance requirements for high-risk AI systems (Art. 10), can be penalised with fines of up to EUR 15,000,000 or up to 3% of the infringer's total worldwide annual turnover, whichever is higher. Supplying false or misleading information in response to requests from the competent authorities may be punished with fines of up to EUR 7,500,000 or up to 1% of the offender's total worldwide annual turnover, whichever is higher. The exact amount of the fine depends on the circumstances of the individual case, including the gravity and consequences of the infringement; the size and market share of the offender, as well as any previous fines, are also taken into account.
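The 'whichever is higher' rule for fines can be expressed as a simple maximum, sketched below. The cap and turnover percentage are parameters rather than hard-coded tier values, and the figures in the example are purely illustrative, not a statement about any particular infringer or penalty tier.

```python
def maximum_fine(cap_eur: float, turnover_share: float,
                 annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the 'whichever is higher' rule:
    the fixed cap or the given share of worldwide annual turnover,
    whichever is greater."""
    return max(cap_eur, turnover_share * annual_turnover_eur)


# Illustrative: with a EUR 20 million cap and 4% of turnover, a company
# with EUR 1 billion annual turnover faces a maximum of EUR 40 million,
# because the turnover-based figure exceeds the fixed cap.
print(maximum_fine(20_000_000, 0.04, 1_000_000_000))  # 40000000.0
```

For small companies the fixed cap dominates; for large ones the turnover-based figure does, which is the intended effect of the rule.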

When will the provisions of the AI Regulation apply?

Most of the provisions of the new AI Regulation apply from 2 August 2026. The bans on AI systems posing an unacceptable risk already apply six months after entry into force (from 2 February 2025), the governance rules and the obligations for general-purpose AI models after 12 months (from 2 August 2025), and the rules for AI systems embedded in regulated products after 36 months (from 2 August 2027).



Author: Senem Kathrin Güçlüer