The European AI Act

Jure Globocnik

Guest author from activeMind AG

Companies increasingly use tools based on Artificial Intelligence (AI). To date, the use of AI has not been regulated by AI-specific legislation in the European Union (EU). With the proposed AI Act (Artificial Intelligence Act), however, this is about to change. In this article, we provide a brief overview of the AI systems covered by the proposal and the corresponding obligations of the companies involved.

Current status of the AI Act

AI has become increasingly common in our everyday lives. The systems that use AI range from largely unproblematic ones – such as your favourite streaming service suggesting the next movie to watch based on your viewing history – to ones that may have a more significant impact on a person’s life. Examples of the latter are AI-based tools deciding whether you should get a loan or a job, and AI tools embedded in military equipment.

Apart from generally applicable rules stemming from other areas of law – such as data protection law and criminal law – AI systems are currently not subject to any AI-specific regulations. As part of its Digital Strategy, the European Commission published a proposal for the so-called AI Act. In early December 2023, the European Commission, the European Parliament, and the Council of the European Union reached a political agreement on the AI Act.

While the final text of the AI Act is still to be published, this article provides an overview of the current status of the legislative procedure.

Applicability of the AI Act

The AI Act defines an AI system as a machine-based system that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The AI Act proposal divides AI systems into several categories, with a set of specific rules for each category. AI systems that do not fall under any of these categories are outside the scope of the AI Act and hence not subject to any specific rules.

Furthermore, the AI Act shall not apply to areas outside the scope of EU law, such as national security, nor to systems used exclusively for military or defence purposes or for the sole purpose of research and innovation. The use of AI for personal, non-professional reasons shall be outside the scope of the AI Act as well.

Risk-based approach

The AI Act proposal follows a risk-based approach. According to the proposal, AI systems can be categorised into four risk categories:

  • unacceptable risk (prohibited AI practices),
  • high risk (high-risk AI systems),
  • limited risk (AI systems intended to interact with individuals), and
  • minimal or no risk (all other AI systems, which are outside the scope of the AI Act).

Furthermore, the AI Act shall also establish specific rules for general purpose AI models.

Subjects covered by the AI Act

The most heavily regulated subjects under the AI Act are providers of AI systems, i.e., subjects that develop an AI system, or have an AI system developed, with the aim of placing it on the market or putting it into service under their own name or trademark, whether for payment or free of charge.

Importers and distributors have distinct obligations under the AI Act as well. An importer is a subject established in the EU that places on the market or puts into service an AI system of a provider established outside the EU, while a distributor is any other subject in the supply chain that makes an AI system available on the EU market without affecting its properties.

Finally, the AI Act also imposes certain obligations on users (deployers) of AI systems. A user is any subject using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

AI systems covered by the AI Act and the corresponding obligations

According to Art. 5 of the AI Act proposal, certain AI-based practices shall be prohibited in the EU in their entirety. The list enumerates AI systems that, in the view of the EU legislature, contravene European values, for instance by violating fundamental rights, and would pose an unacceptable risk to the affected individuals.

This applies, among others, to the following AI systems:

  • AI systems used for the purpose of social scoring,
  • AI systems used for the purpose of cognitive behavioural manipulation,
  • real-time remote biometric identification systems used by law enforcement in publicly accessible spaces, subject to certain exceptions, such as targeted searches for specific potential victims of crime,
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage (likely an answer to the practices of Clearview AI), and
  • AI systems for emotion recognition in the workplace and education institutions.

Most of the provisions of the AI Act proposal pertain to AI systems that pose a high risk to the health and safety or fundamental rights of natural persons (so-called high-risk AI systems). They are divided into two categories.

The first category covers AI systems intended to be used as safety components of products that, according to the EU legal acts listed in Annex II to the AI Act, are subject to a third-party ex-ante conformity assessment. Among others, this category covers AI systems used as safety components in medical devices, lifts, and certain vehicles and aircraft.

The second category covers stand-alone AI systems with fundamental rights implications. The list of such AI systems is provided in Annex III of the proposed AI Act and includes, for example:

  • AI systems intended to be used as safety components in the management and operation of certain critical infrastructures,
  • AI systems intended to be used for the purpose of determining access to educational and vocational training institutions, for assessing students of such institutions, or in admission tests for such institutions,
  • in the employment context, AI systems intended to be used for recruitment purposes (advertising vacancies, screening or filtering applications, evaluating candidates), for making decisions on promotions and the termination of work-related contractual relationships, for task allocation, and for monitoring and evaluating the performance and behaviour of employees, and
  • AI systems intended to be used to evaluate the creditworthiness of individuals or establish their credit score, except where such systems are put into service by small-scale providers for their own use.

In line with the risk-based approach, these high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. Among others, providers of high-risk AI systems need to establish a quality management system that shall ensure compliance with the AI Act, and a risk management system covering the entire lifecycle of a high-risk AI system. Furthermore, the AI Act proposal requires them to draw up detailed technical documentation on the AI system.

If data is used to train the model, the data sets used for training, validation and testing need to comply with the requirements set forth in Art. 10 of the proposal.

The proposal for the AI Act also contains certain technical requirements for high-risk AI systems. For example, they have to generate logs while in operation, thereby guaranteeing the traceability of the system’s functioning. High-risk AI systems shall be developed in such a way that they can be effectively overseen by natural persons while in use. Among others, this includes providing a “stop” button or a similar procedure by way of which the AI system can be safely stopped. Furthermore, high-risk AI systems shall be designed and developed in a way that ensures their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
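The AI Act prescribes these outcomes (traceability, human oversight) rather than any concrete implementation, but a minimal sketch may help illustrate what they could look like in practice. The Python wrapper below – all names are hypothetical and purely illustrative – logs every prediction for traceability and exposes a stop control for a human overseer:

    import logging
    import threading
    from datetime import datetime, timezone

    # Illustrative sketch only: the AI Act prescribes outcomes (traceability,
    # human oversight), not this implementation. All names are hypothetical.
    logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)

    class OverseenAISystem:
        """Wraps a prediction function with audit logging and a stop control."""

        def __init__(self, predict_fn, system_id: str):
            self._predict_fn = predict_fn
            self._system_id = system_id
            self._stopped = threading.Event()  # the "stop button"

        def stop(self):
            """Allows a human overseer to safely halt the system."""
            self._stopped.set()

        def predict(self, input_data):
            if self._stopped.is_set():
                raise RuntimeError(f"{self._system_id} was stopped by an overseer")
            output = self._predict_fn(input_data)
            # Automatic logging during operation supports traceability
            # of the system's functioning.
            logging.info(
                "time=%s system=%s input=%r output=%r",
                datetime.now(timezone.utc).isoformat(),
                self._system_id,
                input_data,
                output,
            )
            return output

A deployer could then call, for instance, OverseenAISystem(model.predict, "credit-scoring-demo") and halt the system at any time via stop(); real systems would of course need a far more elaborate oversight design.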

If the provider is not established in the EU and directly provides its AI system to the EU market, it will be obliged to appoint an authorised representative in the EU.

Besides the providers of high-risk AI systems, other subjects have distinct obligations with regard to high-risk AI systems as well. This holds true for manufacturers of products covered by some of the pieces of EU legislation listed in Annex II to the AI Act. If they place a product on the EU market under their own name in which a high-risk AI system is embedded, they will have the same obligations as the provider of the AI system.

Importers and distributors of high-risk AI systems will, in particular, have to assess whether the provider has taken all the measures required by the AI Act. If they have a reason to believe that the AI system is not in conformity with the AI Act, they will have to ensure that appropriate corrective measures are taken.

Furthermore, according to the AI Act proposal, any distributor, importer, user (deployer) or other third party shall be considered a provider under the AI Act if it places a high-risk AI system on the market or puts it into service under its own name or trademark, if it modifies the intended purpose of a high-risk AI system, or if it substantially modifies the high-risk AI system.

Users shall use high-risk AI systems in accordance with the provided instructions of use, carefully select input data, monitor the operation of the high-risk AI system, and keep logs. Certain users of high-risk AI systems (such as hospitals, schools, insurance companies and banks) will also have to conduct a fundamental rights impact assessment before they start using a high-risk AI system, assessing the AI system’s impact in the specific context of use.

According to the AI Act proposal, certain transparency obligations shall be introduced for some systems that interact with individuals. In particular, this concerns three types of systems:

  • The providers of systems intended to interact with individuals, such as AI-based chatbots, shall ensure that persons using such systems are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
  • Users of an emotion recognition system or a biometric categorisation system shall inform the affected individuals of the operation of the system.
  • Users of an AI system that creates so-called deep fakes shall disclose that the content has been artificially generated or manipulated.

If such a system fulfils the criteria for a high-risk AI system, the requirements imposed on high-risk AI systems will have to be fulfilled in addition to the transparency obligations mentioned in this section.

One of the most controversial issues in the AI Act negotiations was the regulation of general purpose AI models, i.e., AI models that can be used for many different purposes.

While the initial AI Act proposal did not contain any rules on such models, the European Parliament insisted on including specific provisions on them in the negotiations. This is likely a response to the sudden broad availability and popularity of general purpose AI models such as GPT-4, as incorporated in OpenAI’s ChatGPT.

The EU legislature has since agreed that the AI Act shall also regulate such models, defined as large AI systems capable of performing a wide range of distinct tasks, such as generating video, text and images, computing, or generating computer code. All such models will have to comply with specific transparency requirements. A subset of such models, the so-called high impact general purpose AI models with systemic risk (determined based on the total computing power used for training; according to press reports, the threshold was set at 10^25 floating point operations), will be subject to an additional set of requirements that will be implemented through codes of practice.

Measures in support of innovation

The AI Act proposal provides for certain measures aimed at supporting innovation in the field of AI. Besides specific measures for small-scale providers and users, the proposal also allows for AI regulatory sandboxes, in which providers will be able to test their AI systems under strict regulatory oversight before these systems are put into use, as well as for real-world testing.

Regulatory oversight

According to the proposal, each EU Member State shall designate a national supervisory authority for the purpose of supervising the application and implementation of the AI Act.

All national supervisory authorities shall be represented in the European Artificial Intelligence Board, which shall act as a coordination platform and an advisory body to the European Commission.

In addition, an AI Office shall be established within the European Commission, tasked with overseeing the enforcement of the rules on general purpose AI models.

Penalties

According to the AI Act proposal, violations can result in high penalties. Like the General Data Protection Regulation (GDPR), the AI Act proposal caps penalties by setting forth an amount (in millions of euros) and a percentage of the company’s total worldwide annual turnover for the preceding financial year, whereby the higher of the two serves as the upper limit for a penalty.

The penalties shall be capped at EUR 35 million or 7 % of the company’s total worldwide annual turnover for the preceding financial year for the most severe violations (prohibited AI practices, violations of the rules on data and data governance), EUR 15 million or 3 % of the company’s turnover for other violations, and EUR 7.5 million or 1.5 % of the company’s turnover for the supply of incorrect information to the authorities.
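As a worked example of how this cap operates, the short sketch below computes the maximum possible fine for each tier; the amounts and percentages are taken from the text above, while the turnover figure is invented for illustration:

    def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
        """Upper limit of an AI Act fine: the higher of the fixed amount
        and the share of total worldwide annual turnover."""
        return max(fixed_cap_eur, turnover_pct * turnover_eur)

    # Tiers as described above: (fixed cap in EUR, share of turnover)
    tiers = {
        "prohibited practices / data governance": (35_000_000, 0.07),
        "other violations": (15_000_000, 0.03),
        "incorrect information to authorities": (7_500_000, 0.015),
    }

    turnover = 2_000_000_000  # hypothetical: EUR 2 billion annual turnover
    for violation, (cap, pct) in tiers.items():
        print(f"{violation}: up to EUR {max_fine(turnover, cap, pct):,.0f}")

For a turnover of EUR 2 billion, the percentage rather than the fixed amount is the binding limit in every tier (e.g., 7 % yields EUR 140 million); for smaller companies, the fixed amounts dominate.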

Readers who know the GDPR sanctions regime will notice that the maximum fines under the AI Act are even higher than in data protection law.

What’s next in the legislative procedure?

In early December 2023, a political agreement on the AI Act was reached among the involved EU institutions. The text of the AI Act is currently being finalised by those institutions. It is expected that the AI Act will be passed in early 2024.

Only after the AI Act has been passed will companies have certainty about the exact rules it contains.

The AI Act shall, in principle, start applying two years after its entry into force. According to the press releases, certain provisions shall become applicable even earlier: the provisions on prohibited systems shall start applying six months after the entry into force of the AI Act, and the obligations regarding the governance of general purpose AI models shall become applicable after one year. By contrast, the obligations for high-risk systems covered by Annex II of the AI Act shall start applying only three years after the entry into force of the AI Act.

Given that the AI Act will have the nature of a regulation, its rules will be directly applicable in all EU Member States; a transposition into national laws will not be necessary.

What should companies do at this point?

Once the AI Act has been passed, companies are well advised to start preparing for compliance with its provisions as early as possible.

In particular, this holds true for the providers of high-risk AI systems. The AI Act will require such companies not only to adopt extensive governance structures and prepare appropriate documentation, but will also likely make it necessary to modify their AI systems (e.g., to have them produce logs or to integrate a “stop” button). Once the AI system is compliant with the AI Act, a conformity assessment will have to be conducted as well.

Companies using AI systems provided by other companies should, as a first step, make an inventory of such systems. Thereafter, they should assess their role under the AI Act and the corresponding obligations.
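The AI Act does not prescribe any particular format for such an inventory, so the fields and category names below are purely illustrative; as a starting point, a simple record per system might capture the vendor, the company’s own role, and the presumed risk category:

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical inventory format; the AI Act does not prescribe one.
    class RiskCategory(Enum):
        PROHIBITED = "unacceptable risk"
        HIGH = "high risk"
        LIMITED = "limited risk (transparency obligations)"
        MINIMAL = "minimal or no risk"

    @dataclass
    class AISystemRecord:
        name: str
        vendor: str
        purpose: str                # intended purpose as actually deployed
        own_role: str               # e.g. "user/deployer", "importer", "distributor"
        risk_category: RiskCategory
        annex_iii_match: str = ""   # e.g. "employment: screening applications"

    inventory = [
        AISystemRecord(
            name="CV screening tool",
            vendor="ExampleVendor GmbH",  # hypothetical vendor
            purpose="filtering job applications",
            own_role="user/deployer",
            risk_category=RiskCategory.HIGH,
            annex_iii_match="employment: screening or filtering applications",
        ),
    ]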

While two years until the AI Act starts being enforced may seem like a long period of time, the requirements under the AI Act are substantial, and experience with the GDPR has demonstrated that companies starting a few months before the rules become applicable will likely have a hard time achieving compliance in time.
