National AI legislation adopted in Italy: a first look at privacy implications

27.01.2026

Italy has become the first EU Member State to adopt a comprehensive national AI regulatory framework with Law No. 132 of 23 September 2025. The law adapts the Italian legal system to EU Regulation 2024/1689 (AI Act), introducing provisions that overlap with existing data protection regulation, particularly affecting healthcare data processing, worker information duties and transparency obligations in public administration.

1. Introduction: the anthropocentric approach

Law 132/2025 establishes guiding principles that must inspire the entire national regulatory framework, in line with the European AI Act, including transparency, proportionality, robustness, accuracy, non-discrimination, protection of personal data and fundamental rights, sustainability and human responsibility.

It is a framework law: it does not yet contain detailed technical rules but entrusts the Government with the task of adopting a series of implementing decrees. At its heart lies an expressly anthropocentric approach: AI is conceived as a tool to support humans, never as a substitute for them.

Beyond this statement of principles, the law contains provisions applicable to various economic sectors that overlap with the regulation currently in force, especially as regards personal data protection.


2. Sector-specific rules

Law No. 132/2025 adopts a structured regulatory approach across several sectors in which AI can deliver tangible benefits to citizens and public institutions.

2.1 Healthcare and research

Within the regulatory framework outlined, AI is valued as a tool to support diagnosis, prevention and scientific research, without replacing the decision-making role of the doctor, who remains central to the treatment process.

In this context, Article 8 qualifies as being of significant public interest the processing of personal data, including special categories of data referred to in Article 9 of the GDPR, carried out by public and private entities, including IRCCS (Scientific Institutes for Research, Hospitalisation and Healthcare) and private entities participating in research projects. Such processing is aimed at creating AI systems for diagnosis and treatment, drug development and rehabilitation technologies, in compliance with European data protection guarantees.

The secondary use of such data, stripped of direct identifying elements, is expressly authorised without the need for further consent from the data subject, even where consent was initially required by law. The obligation to provide adequate information, even in general form (e.g. via the organisation's website), remains unaffected, except where knowledge of the data subject's identity is unavoidable or necessary for the protection of health.

2.2 Labour

Article 11 states that AI must be used to improve workers' conditions, protect their physical and mental integrity and enhance the quality of their performance, never for the purpose of control or restriction of rights. The use of AI must be safe, reliable and transparent, without undermining human dignity or violating the confidentiality of personal data.

Employers are required to inform workers in advance about the use of AI systems, in accordance with the procedures set out in Article 1-bis of Legislative Decree 152/1997, entitled "Additional information requirements in the case of the use of automated decision-making or monitoring systems".

The Italian Data Protection Authority expressed critical views regarding this provision, observing that it did not expressly recall the protections provided by the GDPR (Articles 22(3) and 88, in particular) and the "Privacy Code" (Articles 113 and 114). The Authority considered the reference to Article 1-bis, Legislative Decree No. 152/1997 problematic as it refers "only to fully automated processing".

2.3 Public administration and justice

The use of AI systems in public administration is subject to compliance with principles of transparency, knowability of operation and traceability of use, for the protection of data subjects, as well as adoption of adequate technical, organisational and training measures aimed at ensuring responsible and informed use of the technologies.

In the judicial sphere, Article 15 sets out a key principle: even in the presence of AI systems, every decision remains the prerogative of the magistrate, who retains exclusive competence for the interpretation and application of the law, the assessment of facts and evidence, and the adoption of measures. This decision-making reserve clearly delimits the scope of AI use, ensuring that algorithmic tools do not affect the essential prerogatives of the judicial function or the independence of the judiciary.


3. Criminal safeguards against AI abuse

Article 26 introduces new types of offences and specific aggravating circumstances related to the use of AI systems. The provision transforms artificial intelligence into a relevant element for the purposes of criminal liability.

Article 61 of the Italian Criminal Code is supplemented with a new general aggravating circumstance: the use of artificial intelligence systems becomes an aggravating circumstance when it constitutes an insidious means, or hinders public or private defence, or when it aggravates the consequences of the offence.

The new Article 612-quater of the Italian Criminal Code defines the offence of unlawful dissemination of content generated or altered using artificial intelligence systems, targeting the non-consensual dissemination of falsified images, videos or voices, capable of misleading as to their authenticity, when such conduct results in unjust damage to the offended person. This is a direct response to the growing spread of deepfakes, with enhanced protection in cases involving vulnerable individuals or public authorities.

Overall, Article 26 sends an unequivocal message to companies, developers and users of artificial intelligence systems: technological innovation cannot translate into a free zone from a criminal law perspective. The use of AI therefore becomes a real legal risk factor, requiring the adoption of adequate compliance, technological governance and algorithm control measures.


4. Investments for innovation and competitiveness

To provide concrete support for the adoption of AI, the measure also activates a €1 billion investment programme for start-ups and SMEs operating in the fields of artificial intelligence, cyber security and emerging technologies, strengthening the technological development of strategic supply chains with a high social impact.

This investment strategy aims to position Italy as a competitive player in the European AI landscape whilst ensuring that innovation develops within the anthropocentric and rights-protective framework established by the law.


5. Conclusion

Law No. 132/2025 presents numerous points of interest and peculiarities likely to fuel the already substantial doctrinal debate that typically greets provisions which, as in this case, can fairly be described as historic. A complete evaluation will have to await the implementing acts and, above all, the test of practical experience.

For privacy law practitioners, the legislation creates a complex multi-layered compliance environment where it will not be easy to identify which guarantees to apply in the individual concrete case. For managers, entrepreneurs and industry professionals, the challenge will be to transform constraints and risks into competitive opportunities: those who know how to integrate ethics, compliance and innovation, anticipating the rules, will have a strategic advantage.


Article provided by INPLP member: Chiara Agostini  (RP Legal & Tax, Italy)


Discover more about the INPLP and the INPLP-Members

Dr. Tobias Höllwarth (Managing Director INPLP)

What is the INPLP?

INPLP is a not-for-profit international network of qualified professionals providing expert counsel on legal and compliance issues relating to data privacy and associated matters. INPLP provides targeted and concise guidance, multi-jurisdictional views and practical information to address the ever-increasing and intensifying field of data protection challenges. INPLP fulfils its mission by sharing know-how, conducting joint research into data processing practices and engaging proactively in international cooperation in both the private and public sectors.