When AI met privacy
As the development of Artificial Intelligence accelerates, one of the main concerns is the use that can be made of personal information through the application of algorithms and machine learning.
Indeed, the General Data Protection Regulation (GDPR) requires the data controller to implement appropriate technical and organisational measures to ensure the protection of such personal data. The controller must also be able to demonstrate that the processing activities carried out comply with the law.
So, what options do we have to fulfil these data protection obligations when using artificial intelligence?
In that sense, Articles 40 et seq. of the GDPR refer to the need to manage this risk. To that end, the GDPR calls for the impact of data processing to be mitigated through good practices, for instance in the form of approved codes of conduct, as well as through other mechanisms such as certifications or guidance provided by a data protection representative.
As can be seen, the European legislator appears to trust the effectiveness of self-regulation as a means of demonstrating accountability in the processing of personal data.
The regulation therefore encourages self-regulation by promoting the drawing up of codes of conduct, allowing companies themselves to specify the obligations of data controllers and processors, taking into account the likely risks to the rights and freedoms of natural persons.
In the case of Artificial Intelligence, this approach is particularly valuable for managing the privacy risks of such a complex technology: only the companies involved in developing and applying AI know the main issues it may raise in practice, as well as the problems, risks, threats, and potential controversies that can arise from its application in the market.
Returning to Article 40.2 of the GDPR, it contemplates several scenarios that may be of great interest for achieving effective self-regulation of issues that have long been raised in the field of artificial intelligence. To cite an example, the principles of fairness and transparency set out in point (a) of that article are also crucial in the development of artificial intelligence regulation.
Regarding fines for the misuse of this technology, voluntary adherence to codes of conduct by the offending entities is an element that must be taken into account when sanctioning data processing that infringes any of the data protection principles provided for in the regulation.
This is expressly reflected in Article 24.3 of the GDPR, which states that adherence to approved codes of conduct may be used as an element to demonstrate compliance with the controller's obligations, and which may be of great help in moderating any liability that the regulator may impose on the data controller at any given time.
Finally, it should also be noted that, with regard to data protection impact assessments, adherence to codes of conduct is an element that must be taken into account when assessing the impact of processing operations carried out by data controllers or processors. Some countries, such as Spain, have incorporated this provision into their national data protection legislation: codes of conduct are expressly recognised in Organic Law 3/2018, on the Protection of Personal Data and the Guarantee of Digital Rights.
Article provided by INPLP member: Francisco Pérez Bes and Esmeralda Saracíbar (ECIX, Spain)
Dr. Tobias Höllwarth (Managing Director INPLP)