
The European AI Act

Jure Globocnik

Guest author from activeMind AG

Increasingly, companies use tools that are based on artificial intelligence (AI). Thus far, the use of AI has not been regulated in the European Union (EU) by specific legislation. However, with the AI Act (Artificial Intelligence Act), adopted by the European Parliament on 13 March 2024, this is about to change. In this article, we provide a brief overview of the AI systems covered by the AI Act and the corresponding obligations of the companies involved.

In a nutshell

  • The AI Act has been adopted by the European Parliament and still has to be formally adopted by the Council of the European Union.
  • The AI Act will enter into force 20 days after publication in the Official Journal of the European Union; most of the rules will apply after two years, some earlier.
  • The AI Act follows a risk-based approach. AI systems are assessed according to their risk and regulated accordingly.
  • Providers, importers, distributors, and deployers of (high-risk) AI systems must fulfil various requirements.

Background of the AI regulation

AI has become increasingly common in our everyday lives. The systems that use AI range from largely unproblematic ones – such as your favourite streaming service suggesting the next movie to watch based on your viewing history – to ones that may have a significant impact on a person’s life. Examples of the latter are AI-based tools that decide whether you should get a loan or a job, and AI tools embedded in military applications.

Thus far, such AI systems have only been subject to generally applicable rules stemming from other areas of law, such as data protection law and criminal law. With the AI Act, the EU becomes the first jurisdiction worldwide to complement these laws with a specific legal regime on AI.

Applicability of the AI Act

The AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The AI Act divides AI systems into several categories, with a set of specific rules for each category. AI systems that do not fall under any of these categories are outside the scope of the AI Act and hence not subject to any specific rules.

Furthermore, the AI Act will not apply to areas outside the scope of EU law, such as national security, or to systems used exclusively for military or defence purposes or for the sole purpose of scientific research and development. The use of AI for personal, non-professional purposes is outside the scope of the AI Act as well.

Risk-based approach of the AI Act

The AI Act follows a risk-based approach. According to the AI Act, AI systems can be categorised into four risk categories:

  • unacceptable risk (prohibited AI practices),
  • high risk (high-risk AI systems),
  • limited risk (AI systems intended to interact with individuals), and
  • minimal or no risk (all other AI systems, which remain outside the scope of the AI Act).

Furthermore, the AI Act also establishes specific rules for general purpose AI models.

Subjects covered by the AI Act

The most heavily regulated subjects under the AI Act are providers of AI systems, i.e., subjects that develop an AI system, or have an AI system developed, and place it on the market or put it into service under their own name or trademark, whether for payment or free of charge.

Importers and distributors have distinct obligations under the AI Act as well. An importer is a subject established in the EU that places on the market an AI system of a provider established outside the EU, while a distributor is any other subject in the supply chain that makes an AI system available on the EU market.

Finally, the AI Act also imposes certain obligations on deployers (users) of AI systems. A deployer is any subject using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

AI systems covered by the AI Act and the corresponding obligations

According to Art. 5 of the AI Act, certain AI-based practices are prohibited in the EU in their entirety. The list enumerates AI systems that, in the view of the EU legislature, contravene European values, for instance by violating fundamental rights, and would pose an unacceptable risk to the affected individuals.

This applies, among others, to the following AI systems:

  • AI systems used for the purpose of social scoring,
  • AI systems used for the purpose of cognitive behavioural manipulation,
  • real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, whereby certain exceptions apply, such as for targeted searches for specific potential victims of crime,
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage (likely an answer to the practices of Clearview AI),
  • AI systems for emotion recognition in the workplace and in education institutions.

Most of the provisions of the AI Act pertain to AI systems that pose a high risk to the health and safety or fundamental rights of natural persons (so-called high-risk AI systems). They are divided into two categories.

The first category covers AI systems that are intended to be used as safety components of products, or that are themselves products, which, according to the EU legal acts listed in Annex II to the AI Act, are required to undergo a third-party conformity assessment. This category covers AI systems used as safety components in medical devices, lifts, and certain vehicles and aircraft, among others.

The second category covers stand-alone AI systems with fundamental rights implications. The list of such AI systems is provided in Annex III of the AI Act and includes, for example:

  • AI systems intended to be used as safety components in the management and operation of certain critical infrastructures,
  • AI systems intended to be used for determining access to educational and vocational training institutions, for assessing students of such institutions, or in admission tests for such institutions,
  • in the employment context, AI systems intended to be used for recruitment purposes (advertising vacancies, screening or filtering applications, evaluating candidates), for making decisions on promotions and terminations of work-related contractual relationships, for task allocation, and for monitoring and evaluating the performance and behaviour of employees, and
  • AI systems intended to be used to evaluate the creditworthiness of individuals or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.

However, AI systems covered by Annex III shall not be considered high-risk if a specific exception applies (e.g., if they merely perform a narrow procedural or a merely preparatory task, or if the system is intended to improve the result of a previously completed human activity). This assessment needs to be documented, and the AI system nonetheless needs to be registered in the EU database for high-risk AI systems listed in Annex III.

In line with the risk-based approach, these high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. Among other things, providers of high-risk AI systems need to establish a quality management system that ensures compliance with the AI Act, and a risk management system covering the entire lifecycle of a high-risk AI system. Furthermore, the AI Act requires them to draw up detailed technical documentation on the AI system.

If data is used to train the model, the data sets used for training, validation and testing need to comply with the requirements set forth in Art. 10 of the AI Act.

The AI Act also contains certain technical requirements for high-risk AI systems. For example, they have to generate logs while in operation, thereby guaranteeing the traceability of the system’s functioning. High-risk AI systems shall be developed in such a way that they can be effectively overseen by natural persons when in use. Among other things, this includes providing a “stop” button or a similar procedure by way of which the AI system can be safely stopped. Furthermore, high-risk AI systems shall be designed and developed in a way that ensures their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
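
To make the logging and oversight requirements more tangible, the following minimal Python sketch illustrates what automatic event logging combined with a safe-stop procedure could look like. It is an illustration under our own assumptions rather than a prescribed implementation; all names (MonitoredModel, predict, stop) are hypothetical.

```python
# Illustrative sketch only – the AI Act does not prescribe any logging API.
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO)
logger = logging.getLogger("high_risk_ai_system")

class MonitoredModel:
    """Wraps a model so that every inference produces a traceable log record
    and a human overseer can safely stop the system."""

    def __init__(self, model):
        self.model = model
        self._stopped = False

    def stop(self):
        # Safe-stop procedure: halts further inferences and logs the event.
        self._stopped = True
        logger.info("event=stop ts=%s", datetime.now(timezone.utc).isoformat())

    def predict(self, inputs):
        if self._stopped:
            raise RuntimeError("System was stopped by a human overseer.")
        event_id = uuid.uuid4().hex  # unique reference for traceability
        output = self.model(inputs)
        logger.info(
            "event=inference id=%s ts=%s input_ref=%s output=%r",
            event_id,
            datetime.now(timezone.utc).isoformat(),
            hash(str(inputs)),  # a reference to the input, not the raw data
            output,
        )
        return output

# Usage: wrap any callable model.
model = MonitoredModel(lambda x: sum(x) / len(x))
print(model.predict([0.2, 0.4, 0.9]))
```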

If the provider is not established in the EU and directly provides its AI system to the EU market, it will be obliged to appoint an authorised representative in the EU.

Besides the providers of high-risk AI systems, other subjects have distinct obligations with regard to high-risk AI systems as well. This holds true for manufacturers of products covered by some of the EU legislation listed in Annex II to the AI Act. If they place a product on the EU market under their own name in which a high-risk AI system is embedded, they will have the same obligations as the provider of the AI system.

Importers and distributors of high-risk AI systems will, in particular, have to assess whether the provider has taken all the measures required by the AI Act. If they have reason to believe that the AI system is not in conformity with the AI Act, they will have to ensure that appropriate corrective measures are taken before placing the AI system on the EU market.

Furthermore, according to the AI Act, any distributor, importer, deployer, or other third party shall be considered a provider under the AI Act if it places on the market or puts into service a high-risk AI system under its own name or trademark, modifies the intended purpose of a high-risk AI system, or substantially modifies the high-risk AI system.

Deployers shall use high-risk AI systems in accordance with the provided instructions of use, carefully select input data, monitor the operation of the high-risk AI system, and keep logs. Certain deployers of high-risk AI systems (such as public bodies and private operators providing public services) will in some cases also have to conduct a fundamental rights impact assessment before putting a high-risk AI system into use, assessing the system’s impact in the specific context of use.

The AI Act introduces certain transparency obligations for some systems that interact with individuals. In particular, this concerns the following four types of systems:

  • The providers of systems intended to interact with individuals, such as AI-based chatbots, shall ensure that persons using such systems are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
  • Providers of AI systems that create synthetic audio, image, video or text content shall ensure that the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated, unless an exception applies (for instance, if the AI system only performs an assistive function for standard editing or does not substantially alter the input data or the semantics thereof); a sketch of such a marking follows below.
  • Deployers of an emotion recognition system or a biometric categorisation system shall inform the affected individuals of the operation of the system.
  • Deployers of an AI system that creates so-called deep fakes shall disclose that the content has been artificially generated or manipulated.

If such a system fulfils the criteria for a high-risk AI system, the requirements imposed on such systems will have to be fulfilled in addition to the transparency obligations mentioned in this section.
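
As for what marking outputs “in a machine-readable format” might look like in practice: the AI Act does not prescribe a specific marking format (industry standards such as C2PA point in one possible direction). The following is a minimal sketch of attaching a machine-readable provenance marker to generated content; all names and values are hypothetical.

```python
# Minimal illustration only – not a prescribed or standardised format.
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, generator: str) -> str:
    """Wrap generated content in an envelope carrying a machine-readable
    provenance marker."""
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,  # the machine-readable flag
            "generator": generator,
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope)

marked = mark_as_ai_generated("A synthetic news summary ...", "example-model-v1")
print(json.loads(marked)["provenance"]["ai_generated"])  # True
```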

One of the most controversial issues in the AI Act negotiations was the regulation of general purpose AI models, i.e., AI models that can be used for many different purposes.

While the initial AI Act proposal did not contain any rules on such models, the European Parliament insisted on including specific provisions on them during the negotiations. This is likely a response to the sudden broad availability and popularity of general purpose AI models such as GPT-4, as incorporated in OpenAI’s ChatGPT.

The AI Act now regulates such models, defined as AI models, including when trained with a large amount of data using self-supervision at scale, that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. All such models will have to comply with specific requirements. A subset of such models, the so-called high impact general purpose AI models with systemic risk (determined, among other factors, based on the total computing power used for training), will be subject to an additional set of requirements.
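
To illustrate the compute-based criterion: the AI Act presumes that a general purpose AI model has high-impact capabilities, and thus poses systemic risk, when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations. A minimal sketch of this presumption (the function name and example values are hypothetical):

```python
# Sketch of the compute-based presumption: a general purpose AI model is
# presumed to pose systemic risk when the cumulative compute used for
# training exceeds 10^25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(2.1e25))  # True – additional requirements apply
print(presumed_systemic_risk(3.0e24))  # False – baseline obligations only
```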

Measures in support of innovation and AI literacy

The AI Act provides for certain measures aimed at supporting innovation in the field of AI. Besides specific derogations for micro, small and medium-sized enterprises, the AI Act also allows for the introduction of AI regulatory sandboxes, in which providers will be able to test their AI systems under strict regulatory oversight before these systems start being used, as well as testing under real-world conditions.

All providers and deployers of AI systems are obliged to take appropriate measures to ensure a sufficient level of AI literacy of their staff. In doing so, they have to take into account the staff’s technical knowledge, experience, education and training, the context in which the AI systems are going to be used, and the groups of persons on whom the AI systems are to be used. Importantly, this obligation applies to all providers and deployers of AI systems, even if their AI systems do not fall within one of the risk categories regulated by the AI Act.

Regulatory AI oversight

According to the AI Act, each EU Member State should designate a national supervisory authority for the purpose of supervising the application and implementation of the AI Act.

All national supervisory authorities shall be represented in the European Artificial Intelligence Board that should act as a coordination platform and an advisory body to the European Commission.

In addition, an AI Office has already been established within the European Commission; it is tasked with overseeing the enforcement of the rules on general purpose AI models.

Penalties under the AI Act

According to the AI Act, violations can result in high penalties. Like the General Data Protection Regulation (GDPR), the AI Act caps penalties by setting forth a fixed amount (in millions of euros) and a percentage of the company’s total worldwide annual turnover for the preceding financial year, whereby the higher of the two serves as the limit for a penalty.

The penalties are capped at EUR 35 million or 7 % of the company’s total worldwide annual turnover for the preceding financial year for breaches of the rules on prohibited AI practices, EUR 15 million or 3 % of the company’s turnover for other violations, and EUR 7.5 million or 1 % of the company’s turnover for the supply of incorrect information to the authorities.
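
A short worked example makes the “higher of the two” logic tangible; the turnover figure below is hypothetical.

```python
# Worked illustration of the penalty ceilings: the cap is the HIGHER of the
# fixed amount and the turnover-based percentage.
def penalty_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    return max(fixed_eur, turnover_eur * pct)

annual_turnover = 2_000_000_000  # hypothetical EUR 2 bn worldwide turnover

# Prohibited AI practices: EUR 35 million or 7 %
print(penalty_cap(annual_turnover, 35_000_000, 0.07))  # 140000000.0
# Other violations: EUR 15 million or 3 %
print(penalty_cap(annual_turnover, 15_000_000, 0.03))  # 60000000.0
# Incorrect information to authorities: EUR 7.5 million or 1 %
print(penalty_cap(annual_turnover, 7_500_000, 0.01))   # 20000000.0
```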

Readers who know the GDPR sanctions regime will notice that the fines under the AI Act are even higher than in data protection law.

When will the AI Act provisions become applicable?

The AI Act was passed by the European Parliament on 13 March 2024. It still needs to be formally adopted by the Council of the European Union. Thereafter, the AI Act will be published in the Official Journal of the EU.

The AI Act will enter into force 20 days after its publication and will, in principle, start applying two years after its entry into force. Certain provisions will become applicable even earlier: most importantly, the provisions on prohibited systems will start applying six months after the entry into force of the AI Act, and the rules on general purpose AI models will become applicable after one year. By contrast, the obligations for high-risk systems set forth in Annex II of the AI Act will start applying only three years after the entry into force of the AI Act.
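
For illustration, these milestones can be computed from a (hypothetical) publication date as follows; the offsets correspond to the periods described above.

```python
# Sketch of the staggered timeline. The publication date is hypothetical;
# the offsets (20 days; 6, 12, 24 and 36 months) come from the AI Act.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Naive month arithmetic; sufficient here since the day of month is 1.
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

publication = date(2024, 7, 12)  # hypothetical publication date
entry_into_force = publication + timedelta(days=20)

print(add_months(entry_into_force, 6))   # prohibited practices apply
print(add_months(entry_into_force, 12))  # general purpose AI model rules apply
print(add_months(entry_into_force, 24))  # most provisions apply
print(add_months(entry_into_force, 36))  # Annex II high-risk obligations apply
```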

Given that the AI Act is a regulation, its rules will be directly applicable in all EU Member States; transposition into national law is not necessary.

What should companies do at this point?

Companies are well advised to start preparing for compliance with the AI Act’s provisions as early as possible.

In particular, this holds true for providers of high-risk AI systems. The AI Act not only requires such companies to adopt extensive governance structures and prepare appropriate documentation, but will likely also require them to modify their AI systems (e.g., to produce logs or to integrate a “stop” button). Once the AI system is compliant with the AI Act, a conformity assessment will have to be conducted as well.

Companies using AI systems provided by other companies should, as a first step, make an inventory of such systems. Thereafter, they should assess their role under the AI Act and the corresponding obligations.

While two years until the start of enforcement of the AI Act may seem like a long time, the requirements under the AI Act are substantial, and past experience with the GDPR has demonstrated that companies starting only a few months before the rules become applicable will likely have a hard time achieving compliance in time.
