Data protection regulation of artificial intelligence (action plan)

Venushon Thadchanamoorthy

Guest author from activeMind AG

The rapid advances in the field of artificial intelligence (AI), particularly in generative AI systems such as ChatGPT, have created a wealth of new opportunities, but at the same time pose significant challenges in terms of protecting privacy and individual freedoms. In response to these developments, the Commission Nationale de l’Informatique et des Libertés (CNIL), the French data protection authority, has published a comprehensive action plan for the regulation of AI.

What are the supervisory authorities doing?

In light of recent developments in the field of AI, particularly in so-called generative AI systems such as ChatGPT, the French data protection authority CNIL has published an action plan for the deployment of AI systems that respect the privacy of individuals.

In recent years, the CNIL has worked intensively on anticipating and responding to the challenges posed by AI. It has already published numerous guidelines and checklists and will extend its efforts to other topics such as generative AI systems, large language models and derived applications (especially chatbots).

The protection of personal data: A fundamental challenge for the development of AI

The development of AI is accompanied by challenges in the area of data protection and individual freedoms, which the CNIL has been addressing for several years. Since the publication of its report on the ethical challenges of algorithms and artificial intelligence in 2017, the CNIL has repeatedly commented on the issues raised by this new technology.

Generative AI in particular has developed rapidly in recent months, whether in text and conversation, via large language models (e.g. GPT) and the chatbots derived from them (e.g. ChatGPT or Bard), or in image generation (e.g. Dall-E, Midjourney, Stable Diffusion) and speech (e.g. Vall-E).

These foundation models and the technological building blocks based on them are already being used in many sectors. Nevertheless, the functioning, possibilities and limits of these systems, as well as the legal, ethical and technical issues associated with their development and utilisation, remain largely unclear.

As the protection of personal data is a major challenge for the design and use of these tools, the CNIL is publishing its action plan on AI, which aims, among other things, to regulate the development of generative AI.

The CNIL’s four-part AI regulatory action plan

In view of the challenges that AI poses for the protection of individual freedoms, and of recent developments in generative AI in particular, the CNIL’s action plan focuses on the regulation of generative AI.

This regulation is geared towards four objectives:

  1. Understanding how AI systems work and their impact on people.
  2. Promoting and guiding the development of AI that respects the protection of personal data.
  3. Collaboration with innovative players in the AI ecosystem.
  4. Review and control of AI systems to protect people.

1. Understanding how AI systems work and their impact on people

In the first area of its action plan, the CNIL is focussing on understanding how AI systems work and their impact on people. Artificial intelligence has fundamentally changed the way data is processed. This raises important questions about fairness, transparency and data protection in connection with these systems.

The CNIL will work to ensure that the data processing that forms the basis for the operation of AI systems is fair and transparent. This is crucial to ensure that individual data protection rights are respected and that AI systems do not result in the misuse of personal data or in discrimination and bias.

2. Promoting and guiding the development of AI that respects the protection of personal data

In the second area of the action plan, the CNIL is committed to promoting and guiding the development of AI systems that protect personal data. This is of crucial importance, as AI systems are often trained on extensive data sets that frequently contain personal data.

The CNIL will develop clear policies and guidelines to ensure that data protection is taken into account in the development of AI systems. This includes the creation of guidelines for the selection and use of data for training, compliance with data protection principles and the guarantee of data subjects’ rights.
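
As one concrete illustration of what taking data protection into account when preparing training data can look like, the following minimal Python sketch removes direct identifiers, pseudonymises a record key and keeps only the fields actually needed for training. The field names and the prepare_training_records helper are purely illustrative assumptions, not requirements taken from the CNIL’s guidance.

```python
import hashlib

# Illustrative sketch only: the field names and helper functions below are
# assumptions for demonstration purposes, not requirements taken from the
# CNIL's action plan or guidance.

# Data minimisation: keep only the fields the model actually needs.
FIELDS_NEEDED_FOR_TRAINING = {"age_band", "region", "text"}


def pseudonymise(value: str, salt: str = "rotate-this-salt-regularly") -> str:
    """Replace a direct identifier with a salted hash, so records can still be
    linked (e.g. for deduplication or deletion requests) without exposing the
    identity itself."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()


def prepare_training_records(raw_records: list[dict]) -> list[dict]:
    """Drop direct identifiers such as names and e-mail addresses, keep only
    the minimal set of fields needed for training, and add a pseudonymous key."""
    prepared = []
    for record in raw_records:
        cleaned = {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_TRAINING}
        cleaned["subject_ref"] = pseudonymise(record["email"])
        prepared.append(cleaned)
    return prepared


if __name__ == "__main__":
    raw = [{"name": "Jane Doe", "email": "jane@example.org",
            "age_band": "30-39", "region": "FR", "text": "…"}]
    print(prepare_training_records(raw))
```

Even after such a step, pseudonymised data remains personal data under the GDPR, so a legal basis and safeguards for data subjects’ rights are still required.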

3. Collaboration with innovative players in the AI ecosystem

The third area of the action plan aims to support and promote innovative players in the AI ecosystem. This includes cooperation with research teams, research and development centres and companies that are developing or would like to develop AI systems.

The CNIL wants to ensure that these actors work in accordance with data protection law and respect the fundamental rights and freedoms of citizens. This is an important step towards ensuring that AI systems are developed in an ethical and legally compliant manner.

4. Review and control of AI systems to protect people

In the fourth area of the action plan, the CNIL focuses on the review and control of AI systems to protect people. This includes monitoring compliance with the CNIL’s previously published positions, such as on the use of “enhanced” video surveillance with AI. In addition, the CNIL will examine complaints received in connection with AI systems.

The CNIL will ensure that actors processing personal data to develop, train or use AI systems carry out data protection impact assessments and take measures to ensure the rights of data subjects.

By implementing these four areas of its action plan, the CNIL aims to establish clear policies that ensure the protection of European citizens’ personal data and thus contribute to the development of privacy-friendly AI systems. This is an important step in a world where AI technologies are becoming increasingly present and are expected to have a positive impact on society.

Conclusion

The publication of the CNIL’s action plan on AI marks a significant step in dealing with the challenges posed by the use of AI technologies. The CNIL has recognised that the protection of personal data and compliance with data protection principles are key to ensuring that AI systems are used ethically and in compliance with the law.

By understanding how AI systems work, promoting privacy-friendly AI developments, supporting innovators in the industry and monitoring compliance, the CNIL helps to create clear policies that ensure the protection of privacy and individual freedoms in an increasingly digital world.

The action plan emphasises the importance of striking a balance between technological progress and the protection of fundamental rights. Within the broader regulatory landscape for AI (including the EU AI Act), the way in which the CNIL and other supervisory authorities approach the technology will play an important role.
