CNIL guidelines on the data protection-compliant use of AI

Venushon Thadchanamoorthy

Guest author from activeMind AG

Artificial intelligence (AI) has become an integral part of modern everyday life. To ensure data protection in connection with the use of AI systems, the French data protection authority CNIL has published a comprehensive range of resources. These information sheets and guidelines are intended to help companies overcome the major challenges in the area of data protection when using AI.

Data protection in the use of artificial intelligence

Data protection must constantly keep pace with changing technical requirements and address new questions and problems. The increasing use of systems that utilise artificial intelligence gives rise to further problem areas. Where personal data is processed by AI systems, controllers and processors must ask themselves to what extent such systems can be operated in compliance with the General Data Protection Regulation (GDPR).

As part of a European strategy to promote the development of artificial intelligence and to ensure the reliability of these technologies through regulation, the CNIL provides various resources. On the one hand, these include information on specific aspects of the use of AI in relation to data protection principles. On the other hand, the CNIL provides a guide that companies can use to self-assess their AI and check whether their systems comply with the principles of the GDPR.

Data protection principles for the use of AI

In a list of basic principles, the CNIL highlights data protection requirements from the GDPR and French data protection law that should be clarified first when using AI. The aspects mentioned by the CNIL are:

  1. Intended use
  2. Determination of the legal basis
  3. Compilation of a database
  4. Data minimisation
  5. Defining a storage duration
  6. Monitoring the continuous development of the system
  7. Protection against the risks associated with AI models
  8. Provision of information and explanations
  9. Implementation of the exercise of data subject rights
  10. Monitoring of automated decision-making
  11. Audit of the system
  12. Preventing algorithmic discrimination

In addition to the general aspects of data protection law that also apply to conventional processing activities, there are new, AI-specific points as well as familiar aspects that take on increased importance in the context of AI. These include, for example, automated decision-making, which frequently occurs in AI systems and therefore requires mechanisms that enable human intervention.

Furthermore, information is provided on the restriction of processing activities, the determination of retention periods, the monitoring and continuous improvement of AI systems and the implementation of measures to ensure the rights of data subjects. However, particular attention should be paid to points 6, 7, and 12, as they need to be clarified specifically in the context of AI systems.
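To make the point about retention periods more concrete, the following Python sketch shows how a defined storage duration could be enforced for training records. It is a minimal illustration only; the record structure, the collected_at field and the one-year period are assumptions for the example, not requirements taken from the CNIL resources.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention period defined by the controller for training data.
RETENTION_PERIOD = timedelta(days=365)

def purge_expired_records(records: list[dict]) -> list[dict]:
    """Keep only records collected within the defined retention period.

    Each record is assumed to carry a 'collected_at' timestamp (UTC);
    in a real system this would run as a scheduled job against the
    training database and be logged for accountability purposes.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION_PERIOD
    return [r for r in records if r["collected_at"] >= cutoff]

# Example with dummy data: only the record within the retention period remains.
records = [
    {"id": 1, "collected_at": datetime.now(timezone.utc) - timedelta(days=30)},
    {"id": 2, "collected_at": datetime.now(timezone.utc) - timedelta(days=400)},
]
print([r["id"] for r in purge_expired_records(records)])  # -> [1]
```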

Guide to the self-assessment of deployed AI

The self-assessment guide contains various checklists and questionnaires that are divided into different modules. This modular structure enables a systematic and structured analysis and evaluation of the planned use of AI.

These modules help organisations to make a comprehensive and informed assessment of their AI project and ensure that it complies with the necessary data protection principles and legal requirements.

The checklists and questionnaires are divided into the following modules:

The purpose of the first module is to clarify the objectives of the guide and to define important terms. In this context, the CNIL distinguishes between different categories of stakeholders, namely providers, users, and end users.

  • Provider: “The provider is a natural or legal person, authority, institution or other organisation that develops an AI system or has it developed in order to place it on the market or put it into operation under its own name or brand, whether in return for payment or free of charge.”
  • User: “Any natural or legal person, authority, institution or other organisation that uses an AI system under its own responsibility, unless the system is used in the context of a personal, non-business activity.”
  • End user: “The user of the AI system should not be confused with the end user, i.e. the person affected by the system: the concept of use therefore corresponds to use in a business context.”

With reference to the definitions of the GDPR, the CNIL states that providers and users can assume the role of controller or data processor if the AI system processes personal data.

The second module focuses on general aspects that should be considered when determining the objectives and proportionality of an AI system. The questions in this module are intended to help providers assess the risks and proportionality of the system as early as the design phase.

The third module is dedicated in particular to the training data of an AI system. It aims to determine whether the data protection requirements for the creation of a compliant database are met. This module also contains information on the quality of the algorithm with regard to the training data used. In addition, the risks of discrimination associated with AI systems are addressed, emphasising the need to carefully examine the training database for signs of possible discrimination.
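As a minimal sketch of what such an examination might look like in practice, the Python snippet below compares positive-label rates across a protected attribute in a training table. The column names ("gender", "approved") and the 0.2 gap threshold are purely illustrative assumptions and do not come from the CNIL guide.

```python
import pandas as pd

def label_rate_by_group(df: pd.DataFrame, protected_col: str, label_col: str) -> pd.Series:
    """Share of positive labels per group of a protected attribute."""
    return df.groupby(protected_col)[label_col].mean()

# Dummy training data; 'gender' and 'approved' are hypothetical column names.
train = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "m", "f"],
    "approved": [1,    0,   1,   1,   1,   0],
})

rates = label_rate_by_group(train, "gender", "approved")
print(rates)

# A large gap between groups is a signal to examine the data more closely,
# not proof of discrimination; the 0.2 threshold is an arbitrary example.
if rates.max() - rates.min() > 0.2:
    print("Warning: noticeable imbalance in positive label rates between groups.")
```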

The questions in the fourth module are aimed at designing and developing a reliable algorithm. The challenges associated with training protocols are also addressed. Providers should ensure the quality of the system through controls and precautions. Note, however, that these issues may also be of interest to users (if they are not providers), as they may be held liable in the event of unlawful processing activity.

The fifth module deals with questions relating to responsibilities and the documentation of processing activities. It also asks about transparency and, once again, about ensuring the quality of processing activities.

The sixth module focuses on the security of processing activities and covers aspects such as attack patterns, logging, access controls and other security aspects at all stages of processing by an AI system.
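The following Python sketch illustrates two of the aspects mentioned here, access controls and logging, for queries to an AI system. The role names and the predict() stub are hypothetical placeholders for a real inference service and its authorisation model.

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
audit_log = logging.getLogger("ai_audit")

ALLOWED_ROLES = {"analyst", "admin"}  # hypothetical roles permitted to query the model

def require_role(func):
    """Check the caller's role and write an audit log entry for every request."""
    @wraps(func)
    def wrapper(user: str, role: str, *args, **kwargs):
        if role not in ALLOWED_ROLES:
            audit_log.warning("Denied model access: user=%s role=%s", user, role)
            raise PermissionError(f"Role '{role}' may not query the model")
        audit_log.info("Model queried: user=%s role=%s", user, role)
        return func(user, role, *args, **kwargs)
    return wrapper

@require_role
def predict(user: str, role: str, features: list[float]) -> float:
    # Stub standing in for the real AI system's inference call.
    return sum(features) / len(features)

print(predict("alice", "analyst", [0.2, 0.4, 0.9]))
```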

In the seventh module, the CNIL focuses on the rights of data subjects and shows providers which aspects are particularly important in this context. The questions deal with the effects on the rights of data subjects and the need for sufficiently transparent information. The module also includes specific questions on automated decision-making in accordance with Art. 22 GDPR and emphasises that (unless an exception applies) human supervision is always required to confirm or replace the automated decision.
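To illustrate what such a human-intervention mechanism might look like in code, here is a minimal Python sketch in which an automated decision can be routed to a human reviewer who confirms or replaces it. The Decision structure, the confidence threshold and the reviewer identifier are illustrative assumptions, not part of the CNIL questionnaire.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str                       # e.g. "approve" / "reject" (hypothetical labels)
    confidence: float                  # model confidence between 0 and 1
    reviewed_by: Optional[str] = None  # set once a human has looked at the decision

def with_human_oversight(decision: Decision,
                         reviewer: Callable[[Decision], Decision],
                         threshold: float = 0.9) -> Decision:
    """Route low-confidence decisions (or, depending on policy, all decisions)
    to a human reviewer who can confirm or replace the automated outcome."""
    if decision.confidence < threshold:
        return reviewer(decision)
    return decision

# Dummy reviewer that overturns the automated outcome and records who reviewed it.
def human_reviewer(decision: Decision) -> Decision:
    decision.outcome = "approve"
    decision.reviewed_by = "case_worker_42"
    return decision

automated = Decision(subject_id="1234", outcome="reject", confidence=0.62)
print(with_human_oversight(automated, human_reviewer))
```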

The eighth module refers to other standards, laws and certifications, including international standards such as ISO and IEEE as well as numerous French certifications. This emphasises compliance with national laws. Reference is also made to the need to carry out a data protection impact assessment in accordance with Art. 35 GDPR.

In the last module, the CNIL provides a list of numerous publications and standards that providers can use for a more detailed assessment of processing activities. In addition to further guidelines and questionnaires, these resources also include links to various tools for the assessment and compliant development of AI systems.

Conclusion

With the help of this self-assessment, companies should be able to independently assess the maturity level of their AI systems with regard to data protection regulations. The guidelines and tools provided were deliberately formulated in general terms in order to ensure broad applicability across a wide range of industries, AI types and deployment scenarios. Following the self-assessment, companies should therefore seek the expertise of specialists who can address the specific data protection aspects of their own AI system. This ensures comprehensive and customised compliance with data protection regulations.

The resources provided by the CNIL will facilitate the data protection-compliant use of AI systems. These guidelines and questionnaires should be seen as an essential step in a comprehensive strategy that promotes the development of such technologies while also ensuring data protection. New problem areas will continue to emerge and will raise new questions in the field of data protection regulation.

It therefore remains to be seen how this will develop, particularly under the European Union's expected AI Act. Similar guidance is likely to be published by other authorities in the future.
