How to use ChatGPT in your company in compliance with the GDPR

The integration of ChatGPT, the AI chatbot developed by OpenAI, into company processes offers opportunities to increase efficiency and innovation, but also harbours data protection challenges. Companies are therefore required to take proactive measures to ensure data protection compliance when using the popular AI chatbot.

We explain in practical terms what you should bear in mind when using ChatGPT in your company.

The use of ChatGPT in the company

More and more companies are looking to utilise AI technologies. OpenAI’s ChatGPT in particular is at the centre of interest for many companies looking for ways to increase their efficiency and offer innovative services. From automating customer service and generating texts and other content to supporting the analysis of large amounts of data, ChatGPT offers a wide range of possible applications.

While the benefits of AI are clear, the use of ChatGPT and other AI tools raises important questions in the area of data protection. Against this backdrop, supervisory authorities are cautious about the rapid use of AI in business processes and are increasingly taking on a scrutinising role.

For companies, this means that they must take proactive steps during the planning and implementation stages to ensure compliance with the applicable data protection regulations if they want to integrate AI technologies such as ChatGPT into their processes.

Please note: This affects both developers of AI applications (keyword: privacy by design and privacy by default) and users who only include ready-made AI tools in their processes.

ChatGPT is available in several versions, with each version presenting specific data protection challenges. We therefore explain these separately.

Data protection challenges of ChatGPT 3.5 and 4 (free versions)

The free versions 3.5 and 4 of ChatGPT are real favourites with companies looking for straightforward AI solutions for their business processes. However, these widely used free versions come with significant data protection concerns that require attention.

Data processing agreement

When ChatGPT 3.5 or 4 is integrated into company processes, OpenAI, as the provider of ChatGPT, would in fact act as a data processor within the meaning of the GDPR. In accordance with Art. 28 GDPR, it would therefore be necessary to conclude a data processing agreement (DPA) with OpenAI in order to define the legal commitments with regard to data protection.

However, OpenAI does not provide a non-disclosure agreement (NDA) or a data processing agreement for the free versions 3.5 and 4 of ChatGPT.

Without a valid data processing agreement, processing personal data within the meaning of Art. 4 (1) GDPR via ChatGPT 3.5 and 4 is not lawful. Companies that wish to process personal data therefore cannot use these versions of ChatGPT for that purpose.

Processing activity for training purposes

Furthermore, OpenAI processes data entered via versions 3.5 and 4 of ChatGPT for its own purposes, in particular for training purposes. This means that the data entered can potentially be used to improve the GPT model. This is done by analysing and processing the entered texts in order to identify patterns and optimise the model’s responses.

The processing activity of AI providers for training purposes, especially with regard to personal data, harbours considerable data protection risks. This relates in particular to the dangers of profiling and the potential loss of control over the use of data. For example, the right to erasure (right to be forgotten) in relation to an AI training dataset is almost impossible to enforce.

OpenAI does not obtain user consent for this either, but it does offer the option of deactivating the processing of inputs for training purposes in order to address these data protection concerns. This option is available under: Settings / Data Controls / Chat History & Training.

However, this option should be used with great caution, as it does not eliminate the risk of non-compliance with data protection laws. The setting does not completely deactivate processing activities for training purposes; it only prevents the user's interactions with ChatGPT from being stored permanently. Instead, the data remains in OpenAI's systems for a limited period of up to 30 days before it is deleted.

Although this limitation reduces the risk that the data will be stored for a long time and possibly misused, the data will still be used to improve ChatGPT’s models during this 30-day period.

It is also important to note that this setting is not synchronised between the different browsers or devices. This means that the restriction of data storage only applies to the specific browser or the specific device on which you have made the setting.

Data protection assessment of ChatGPT 3.5 and 4

It is crucial that companies wishing to use versions 3.5 and 4 of ChatGPT are aware of these data protection challenges and take appropriate measures to avoid processing personal data altogether. Data protection-compliant use of ChatGPT 3.5 and 4 in corporate environments is only possible if no personal data within the meaning of Art. 4 (1) GDPR is processed and no protected trade secrets are disclosed.

This can be achieved, for example, by using pseudonyms instead of personal data. In addition, it is advisable to deactivate data processing for training purposes, even if this does not completely eliminate the risk. 

When uploading documents in version 4, it should also be ensured that no personal data is contained in the documents, such as author names in articles or recipient names in invoices. If such data is present, it should be removed before uploading.
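As an illustration, replacing known names with pseudonyms before a prompt or document leaves the company can be scripted. The following Python sketch is a hypothetical example: the `PSEUDONYMS` mapping, the function name, and the sample prompt are our assumptions, and in practice the name list would have to come from your own records.

```python
import re

# Hypothetical mapping of real names (personal data) to neutral pseudonyms.
# In practice, this would be compiled from your own customer or HR records.
PSEUDONYMS = {
    "Jane Doe": "Person A",
    "John Smith": "Person B",
}

def pseudonymise(text: str) -> str:
    """Replace every known name with its pseudonym before the text
    is pasted into ChatGPT or uploaded as a document."""
    for name, alias in PSEUDONYMS.items():
        # re.escape guards against names containing regex metacharacters.
        text = re.sub(re.escape(name), alias, text)
    return text

prompt = "Summarise the complaint that Jane Doe sent to John Smith."
print(pseudonymise(prompt))
# → Summarise the complaint that Person A sent to Person B.
```

Note that simple string replacement only covers names you already know about; documents may contain other identifiers (e-mail addresses, invoice numbers, signatures), so a manual review before uploading remains necessary.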

In addition, continuous training of employees on the data protection-compliant use of ChatGPT is essential to ensure a high level of data protection. We also recommend creating a policy on the use of AI tools to ensure that all employees know what they may – and may not – do when using ChatGPT.  

Data protection challenges with ChatGPT API and Enterprise Edition

In addition to the GPT-3.5 and GPT-4 models, OpenAI also offers an API and an Enterprise edition. These offerings are primarily aimed at companies that require customised AI solutions and want more control over their applications. Compared to the free versions 3.5 and 4, they also provide enhanced security measures.

Data processing agreement

One advantage of these versions in terms of data protection law is that OpenAI provides a data processing agreement for its clients. By doing so, OpenAI recognises its role as a data processor and commits to processing the data in accordance with its customers’ instructions.

This includes compliance with obligations to be imposed on a data processor under Art. 28 GDPR, such as assisting in the fulfilment of data subjects’ data protection rights and ensuring the security of data processing.

Processing activities for training purposes and data security

In this context, OpenAI also commits not to use the interactions in ChatGPT for training purposes. This is ensured by specific configurations and mechanisms that prevent the transmitted data from being used in the training process.

In addition, more advanced technical and organisational measures are used in these editions of ChatGPT, such as data encryption in transit and during storage.

OpenAI also holds SOC 2 certification. SOC 2 is a security standard developed by the American Institute of Certified Public Accountants (AICPA) that focuses on ensuring security, availability, and confidentiality for organisations. While SOC 2 certifications are widespread in the United States, ISO certifications (especially ISO 27001) are more common and more widely recognised in Europe.

It is important to note that while such certifications can validate security measures, they should not be the sole basis for trust. Even without these certifications, OpenAI must implement appropriate security measures to fulfil users' data protection requirements. European companies wishing to use OpenAI's services should therefore not forgo an independent review of OpenAI's technical and organisational measures (TOMs).

Data protection assessment of ChatGPT API and Enterprise Edition

Taking these aspects into account, the use of the API and Enterprise editions of ChatGPT with personal data is possible in principle, provided that the company's data protection officer has reviewed OpenAI's data processing agreement, including the technical and organisational measures, and found it to be sufficient for the intended processing activity.

Nevertheless, it is important that the company also takes appropriate precautions when using these ChatGPT versions. This includes, for example, implementing an AI policy that provides employees with clear instructions on how to use ChatGPT securely, including avoiding the disclosure of special categories of personal data or sensitive trade secrets during interactions.

Third country transfer issues when using ChatGPT

As OpenAI is a U.S. company based in California, EU-based companies that want to use OpenAI need to think about the resulting third country transfer. For the transfer of personal data to the U.S. and other third countries, an appropriate level of data protection must actually exist in the recipient country (and not just on paper).

Since OpenAI is not certified under the EU-U.S. Data Privacy Framework, the U.S. company is forced to use alternative methods to ensure an adequate level of data protection when transferring personal data to the United States. OpenAI relies on the Standard Contractual Clauses, which have been recognised by the European Commission as appropriate mechanisms for data transfers outside the European Economic Area (EEA).

However, it must be critically considered that the risk of data transfers to third countries and possible data protection violations still exists. Companies should therefore thoroughly evaluate whether the use of OpenAI services is justified, taking into account data protection requirements and the legal framework. Additional safeguards may be required to ensure data protection and compliance with legal requirements.

To counteract this, OpenAI has announced that users residing in the EEA or Switzerland will enter into a contractual relationship with OpenAI Ireland Ltd. and not with the parent company in California. Despite this change at the contractual level, the challenge of data transfer to third countries remains. A European contractual partner does not necessarily protect against unlawful processing activities by U.S. authorities.

Conclusion: ChatGPT use by companies is possible to a limited extent

Companies can (and should) use the potential of AI technologies such as ChatGPT to optimise their processes. Data protection should not be seen as an obstacle here, but as an integral component that must be taken into account from the outset.

There are various ways to integrate AI technologies into company processes in a data protection-friendly manner. Involving the data protection officer at an early stage is crucial for this. They can advise on what measures are required to minimise data protection risks and ensure that AI projects are implemented in accordance with the applicable data protection regulations.
