ChatGPT: the Italian Data Protection Authority leads the way and imposes GDPR compliance.
The privacy risks of Generative Artificial Intelligence
The rapid spread of Generative Artificial Intelligence (“GAI”) is giving rise to a heated debate on its privacy implications, particularly with regard to the most popular of these systems: ChatGPT, developed by the US company OpenAI.
GAI, the technology upon which ChatGPT is based, is capable of creating content and simulating human conversations by training its algorithms on large datasets. GAI's compatibility with data protection laws is therefore in question, particularly as regards the adequate handling of the data it collects.
The intervention of the Italian Data Protection Authority
On March 30th, the Italian Data Protection Authority (“Italian DPA”) ordered a temporary limitation on the processing of Italian users' data by OpenAI, following an inquiry that revealed ChatGPT's non-compliance with the GDPR. This measure led OpenAI to deactivate its service throughout Italy.
The main breaches identified by the Italian DPA relate, in particular, to: (i) the absence of a privacy notice for users and the other data subjects whose data is collected; (ii) the lack of an adequate legal basis justifying the collection and storage of the data used to train the chatbot; (iii) inaccurate data processing, as the information provided by ChatGPT is not always factually correct; and (iv) the lack of an age verification mechanism.
The measures proposed by OpenAI
After several exchanges with the Italian DPA, OpenAI implemented a set of measures aimed at complying with the Authority's directives and ensuring the privacy of ChatGPT users. More specifically, OpenAI: (i) identified the performance of a contract as the legal basis for processing the data necessary for the service's functionality, and legitimate interest as the legal basis for algorithm-training purposes; (ii) published a privacy notice on its website informing data subjects of the categories of data collected, the methods of data processing used for chatbot training, and their right to object to such processing; it also introduced a form allowing data subjects to exclude their conversations and the respective history from being used in algorithm training; (iii) implemented mechanisms enabling data subjects to request the deletion of information they deem incorrect; and (iv) introduced a mandatory date-of-birth field on the service's registration page, with provisions to block registration for users under the age of 13.
The Italian DPA has accepted the measures implemented by OpenAI, recognizing the challenge of reconciling technological advancement with respect for individuals' rights, and has consequently lifted the temporary limitation. However, the substantive proceeding remains pending to ascertain whether, and to what extent, OpenAI has violated the privacy rights of individuals residing in Italy.
The issue of the privacy risks of GAI is more relevant than ever. Following the ChatGPT case, the EDPB has established a task force to define a common set of principles that DPAs should adhere to when handling such cases. For its part, the Italian DPA has expressed its intention to continue monitoring GAI applications available online, while emphasizing its commitment to promoting technological innovation without hindering its development.
Article provided by INPLP member: Chiara Agostini (RP Legal & Tax, Italy)
Dr. Tobias Höllwarth (Managing Director INPLP)