Introduction

Italy became the first Member State of the European Union to adopt national legislation – Law no. 132/2025 – under the framework of Regulation (EU) 2024/1689 (AI Act), which entered into force on 1 August 2024 and will become fully applicable on 2 August 2026.

On October 10, 2025, Italy’s new Law no. 132/2025 on artificial intelligence came into effect. With this law, Italy aims, on the one hand, to harmonise its domestic legal framework with European Union law and, on the other, to reinforce certain fundamental guarantees, ensuring that the adoption and deployment of AI systems occur in accordance with the principles of transparency, proportionality, safety, protection of fundamental rights, and human dignity.

1. The regulatory and political framework

The legislative measure, originally proposed by the Italian Government on May 20, 2024, reflects a growing political awareness of the deep impact that technological development exerts on both individual and collective life. In this latter sphere, technological tools have the potential to accelerate efficiency across productive sectors and services, as well as to enable innovative and effective functioning within public infrastructures. The capacity to adapt and evolve these infrastructures to integrate such technologies thus translates into greater competitiveness, reduced public expenditure, and the opportunity to foster a “digital society” capable of keeping pace with global markets.

These considerations have been included in the National Strategic Plan for Artificial Intelligence 2024–2026, promoted by the Italian Ministry of Enterprises and Made in Italy, which served as a guiding framework for the development of the legislative bill.

During the parliamentary process, over four hundred amendments were proposed, and the bill underwent more than two hundred votes across parliamentary sessions and committees. Numerous hearings with public institutions and key stakeholders from the ICT sector contributed to shaping a comprehensive understanding of artificial intelligence as an evolutionary phase of digital transformation.

Following the parliamentary process, the bill was finally approved on September 17, 2025, with the aim of introducing a comprehensive national regulatory framework built upon a system of principles and governance measures tailored to the Italian context.

As an enabling act (legge delega), the law sets out general principles and guidelines while entrusting the Government with the task of issuing implementing decrees that will specify concrete obligations, operational modalities, and effective sanctions.

2. Art. 13 of Law No. 132/2025

Considering the above framework, it should be underlined that there is a concept that underpins both the EU/Italian regulation and the relationship between humans and machines: trust.

Trust is the invisible infrastructure of our digital world. It sustains the relationship between users and platforms, professionals and their clients, citizens and institutions.

In this context, Article 13 of Law No. 132/2025 aims to strengthen this concept of trust and, to this extent, introduces important provisions on intellectual professions, which explicitly link the use of AI to the principles of professional ethics, transparency, and responsibility.

In particular, the above provision states that the use of AI systems in intellectual professions (i) is limited to the performance of instrumental and support activities, while maintaining the prevalence of the intellectual work that constitutes the object of the professional engagement, and (ii) that in order to ensure the fiduciary relationship between professional and client, information relating to the AI systems used by the professional shall be communicated to the client in clear, simple, and comprehensive language.

Regarding the scope of the provision, the explanatory report accompanying the new legislation refers explicitly to Articles 2229–2238 of the Italian Civil Code, which govern contracts for the performance of intellectual work. The regulation thus applies to those professional engagements based on intellectual contribution, and not to contracts (such as publishing or licensing agreements) that merely involve the transfer or use of pre-existing intellectual works created without a specific professional mandate.
This provision captures with remarkable precision the core challenge of AI’s integration into professional practice: its potential impact on the personal nature of professional performance, and on the ethics and deontology that underpin the professions.

The law’s explicit requirement of information disclosure – a sort of “AI transparency duty” – is not a bureaucratic formality. It is a safeguard for trust.

Clients must know if and how AI is being used in the performance of professional services; otherwise, the fiduciary relationship that defines professional work, based on Article 2232 of the Italian Civil Code, risks being eroded, as clients would no longer be able to rely on the accountability of the professional.

It is precisely to prevent this erosion of trust that Article 13 draws a clear line: AI may assist, but it cannot replace the human intellect that characterizes intellectual professions. The human contribution remains the cornerstone of professional identity and the guarantee of ethical practice.

3. The ethical and deontological sphere

As stated above, the trust relationship between the professional and the client is protected through an obligation to inform the client, in simple and understandable terms, about the type, function, and limits of the AI tools used, as well as the security measures in place to protect confidentiality and personal data. Although there is currently no direct sanction for breaching this duty, the failure to inform can still be legally significant. It may amount to a violation of the duties of fairness, diligence, and transparency, and may lead to disciplinary consequences.

However, the provision’s generic language and lack of specific criteria make its interpretation challenging. It is not clear, for example, whether the duty to inform also applies to tools used for automated data analysis or legal research, or what level of AI use would be considered “significant” enough to require client disclosure.

The practical challenges of Article 13 also affect the ethical and professional (deontological) sphere. Updating professional codes will be essential, since artificial intelligence has already become an important part of daily work, improving organization, management, and efficiency. Refusing to use it would be unrealistic; failing to regulate it properly would be even more dangerous. Improper or careless use of AI can lead to errors in judgment, privacy breaches, or distorted legal reasoning, causing serious harm not only to the professional but also to the client – who may have no idea that AI was even used in their case. This shows the need to balance innovation with ethical and cognitive responsibility on the part of the professional.

Over time, only case law will be able to define how Article 13 should be applied and enforced, setting the boundaries between lawful and unlawful use and clarifying whether the duty of disclosure can serve as a measure of professional fault or ethical misconduct. This interpretation will likely evolve gradually, case by case, based on real-life situations and the specific nature of each profession. In the absence of precedents directly on point, courts will probably follow a path similar to that seen in past rulings on professional diligence in the medical and legal fields. It is worth noting that the Italian National Bar Council, on 13 October 2025, issued a model form for lawyers to use when informing clients about the use of AI, in direct implementation of Article 13.

Even before the law came into force, several Italian courts had already addressed the issue. The courts of Firenze, Torino and, among others, Latina ruled on cases involving the use of AI tools in drafting legal documents. They linked negligent or overly automated behaviour to aggravated liability under Article 96 of the Italian Code of Civil Procedure. These decisions, although issued in different contexts, serve as important precedents. They establish a clear link between the misuse of technology and the violation of the principles of loyalty and professional diligence, recognizing the possibility of imposing even financial sanctions for careless use or lack of control over algorithmic outputs. They also emphasized the role of bad faith in such situations.

These cases may guide future interpretations of Article 13, highlighting the professional’s duty to maintain effective supervision over their work, even when assisted by AI.
Overall, this law sets out a modern and forward-looking framework. It introduces innovative principles but also leaves practical gaps that will require interpretation, clarification, and the steady input of judicial decisions.

The field of professional ethics must also evolve, adopting rules that both protect clients and make the most of what artificial intelligence can offer. Using AI consciously and transparently should now be seen not only as an ethical duty but as part of the professional’s core competence. Following the rule cannot just mean checking a box; it requires a real understanding of both the technical and legal aspects of the tool and the ability to integrate it responsibly into decision-making processes.

Ultimately, Article 13 of Law No. 132/2025 marks the beginning of a new era of professional responsibility, where artificial intelligence is no longer seen as something external but as part of professional life itself. Its true impact will depend on how judges and professional bodies apply it, interpret it, and balance innovation, transparency, and protection. The courts will play a decisive role, as will the professional orders, which must fill existing ethical gaps and give their members the knowledge they need to use AI responsibly. When used correctly, it can be a valuable ally; when misused, it can lead to serious – and sometimes unpredictable – consequences for both the professional and the client.

In this light, one question remains, one that is both legal and symbolic: when a professional is accused of relying too heavily on their algorithm, who will determine the extent of their fault, the ethical code or the source code? And, above all, on what evidence will the judgment be based: on the human conduct, or on the data produced by the machine they chose to use?

4. Professional Liability: “determinative” diligence and AI

In general terms, professional liability arises from the breach of an obligation connected with the exercise of a professional activity, that is, an activity of a non-occasional nature aimed at generating profit (e.g., business or intellectual work).

A crucial distinction between ordinary contractual performance and professional performance lies in the standard of diligence, as defined by Article 1176 of the Italian Civil Code.
While paragraph 1 requires the diligence of a “good family man,” paragraph 2 establishes a specific rule: when obligations relate to professional activities, diligence must be evaluated with regard to the nature of the activity performed.

This dual standard – ordinary vs. “determinative” diligence – reflects the idea that professionals must meet technical standards appropriate to their field. Their liability is not “aggravated” but rather qualified, since it depends on the technical expertise expected within their profession.

The Civil Code (Arts. 2229–2238) provides a dedicated framework for intellectual professions, which are characterized by three essential elements:
1) the rendering of an intellectual service;
2) autonomy and discretion in performing that service, even when within an employment relationship;
3) the personal nature of the work (Art. 2232 c.c.).

However, “personal” does not mean “exclusive”. Under the same article, professionals may rely on substitutes or assistants, provided that the latter are required to act under the former’s direction and responsibility.
Sector-specific legislation also confirms this interpretation of the law (e.g. Art. 47 Italian Notarial Law).

That is, of course, because the legislator also recognizes the high degree of expertise expected from intellectual professionals, who require a collective and technologically equipped environment to achieve the expected level of quality. The same rationale underlies professional practice rules. For instance, Article 14 of the Italian Lawyers’ Code of Practice provides that “in order to ensure the quality of professional performance, a lawyer shall not accept any assignment that they have no competence to perform adequately.”

The notion of competence also implies a duty of continuous professional updating, which is a deontological requirement for most professionals. For instance, Art. 15 of the Italian Lawyers’ Code of Practice provides that “a lawyer must constantly maintain and improve their professional preparation, preserving and enhancing their knowledge, with particular attention to their areas of specialization and main fields of practice”.

Today, the duty of updating necessarily extends to the technological dimension of professional practice, in which artificial intelligence plays a major role. Indeed, AI is part of something bigger than a mere update: it represents a digital revolution that transforms the way legal and intellectual services are shaped and delivered.

As already mentioned, the requirement of the “personal nature” of the work, as described above, has been transposed into Article 13 of Law 132/2025, which regulates the use of AI for intellectual professionals.

This provision reinforces the link between personal responsibility, quality, and competence, affirming the prevalence of human intellectual work.

“Prevalence”, as stated also in the preliminary documents to the law, has a qualitative meaning: it refers to human critical judgment and ultimate decision-making power.
Therefore, a lack of effective human oversight may constitute:

  • contractual breach (Art. 1218 Italian Civil Code);
  • professional fault (Art. 1176, paragraph 2, Italian Civil Code);
  • ethical misconduct (for lack of transparency or breach of trust).

As for the remedies, the proper retention of the client’s consent and of evidence of the human contribution – such as annotated drafts and verified sources – appears an adequate measure to meet the burden of proof under Article 1218 of the Italian Civil Code and to comply with the doctrine of qualified social contact, by demonstrating an actual intellectual activity.

In conclusion, due to the emerging role of technology within professionals’ activities and the duty of continuous updating, digital competence can be regarded as part of the “determinative” diligence required by Article 1176, paragraph 2, Italian Civil Code, and professionals should develop adequate literacy to use it properly, complying also with Article 4 of the AI Act. However, AI still remains a tool, hence it cannot substitute the intellectual activity provided by the professional, who maintains full liability for failing to provide transparent disclosure to the client and, of course, for the decision-making.

5. Other EU Member States

From a comparative point of view, research on five other EU Member States – Spain, France, Germany, Finland and Ireland – indicates that none has adopted or is currently adopting requirements equivalent to the stringent, profession-specific obligations provided by Article 13 of Italian Law no. 132/2025.

On March 11, 2025, Spain’s government approved a draft bill and opened a public hearing on the proposal, which – if signed into law as is – would allow a designated authority to temporarily suspend, and withdraw from the Spanish market, an artificial intelligence system that causes a serious incident, such as the death of a person.

In fact, under Article 20 of the Spanish draft bill, “responsables del despliegue” (in English, deployers) of AI systems would be liable for serious infringements, defined in the same Article 20, including (a) non-compliance with Article 50.3 of the AI Act – the duty of deployers of emotion recognition systems and biometric categorization systems to inform the natural persons exposed to them of the operation of the applicable system – and (b) non-compliance with Article 50.4 of the AI Act, under which deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated.

There are other provisions in Spain’s proposed AI legislation that may come to apply to deployers of high-risk AI systems if the bill becomes law, but, again, nothing that seems directly applicable to professional service providers along the lines adopted by Italy.

Spain seems more concerned with prohibiting other practices, including (a) subliminal techniques to manipulate decisions (e.g., from Spain’s government website: a chatbot that identifies users suffering from gambling addiction and incites them, through subliminal techniques, to join an online gambling platform) and (b) biometric classification of people by ethnicity, political affiliation or sexual orientation (e.g., also from Spain’s government website: a system that categorizes facial biometrics, capable of deducing political affiliation or sexual orientation by analysing the person’s pictures on social media).

In addition, Spain’s draft bill on artificial intelligence classifies AI systems as high-risk when they are used, among other things, as safety components in elevators, pressurised equipment and gas-powered appliances, or for biometrics and critical infrastructure; in these cases human supervision is mandatory. If signed into law, the bill will also require the labelling of images, videos and sounds generated by artificial intelligence.

For violations, Spain’s draft law envisages fines ranging from 7.5 to 35 million euros, or between 2% and 7% of the infringing firm’s global turnover for the preceding year where that amount exceeds 35 million euros, with reduced fines for small and medium-sized enterprises.

France, for its part, seems more concerned with fighting disinformation and deepfakes and, in connection with both, with labelling AI-generated images that can go viral on social media. A bill to that effect was proposed by fifteen members of France’s Assemblée Nationale on December 3, 2024.

Finland, on the other hand, is considering a draft bill that could create responsibility for a professional acting as a “käyttöönottaja” (deployer) – but that is not hard law yet.

On September 12, 2025, Germany’s Bundesministerium für Digitales und Staatsmodernisierung (BMDS) announced on its website a draft bill, the “Gesetz zur Durchführung der KI-Verordnung” (a law implementing the AI Act); from its name alone, it is uncertain how far the German proposal will go beyond the AI Act. The public announcement does, however, acknowledge that Germany missed the August 2, 2025 deadline under the AI Act for the designation of national competent authorities, for internal political reasons.

Four days after Germany’s press release on AI, it was Ireland’s turn to announce a “landmark progress in AI Act implementation”, “becoming one of the first six Member States to reach the critical milestone of designating the competent authorities which will be responsible for enforcement of the Act”. In fact, Ireland announced the designation of 15 National Competent Authorities under the AI Act and a “Single Point of Contact” to “interface with the public, other Member States, and EU-level counterparts”.

The absence of hard law passed by the national legislatures of the five other EU Member States examined should also be read in light of the adoption of softer sectoral rules of conduct in those countries. By way of example, Spain’s Ilustre Colegio de la Abogacía de Madrid (the equivalent of an Italian Ordine degli Avvocati) announced, on October 7, 2025, its first practical guide for the responsible use of AI by Madrid-enrolled lawyers.

6. Conclusions

In conclusion, Law No. 132/2025 represents the first comprehensive legislative intervention by the Italian legal system in the field of artificial intelligence. It is hard law, situated in systematic continuity with Regulation (EU) 2024/1689 (AI Act), and it marks a crucial step toward a national regulatory model grounded in ethics and responsibility in the use of AI.

Trust emerges as the guiding principle of the entire normative framework, conceived as a foundational element not only of the relationship between professional and client but also, more broadly, of the relationship between citizens, institutions, and technological systems.
Artificial intelligence is recognized as a tool of support to human work, not as its substitute: the intellectual and personal dimension of professional performance must remain predominant and irreplaceable.

From this perspective, the legislator’s goal is not to restrict innovation, but to guide it within a framework of ethics, responsibility, and legal awareness, in which technology functions as an enabling factor rather than a substitute agent of human autonomy.
This approach gives rise to a model of “assisted artificial intelligence,” wherein technology enhances intellectual activity without undermining its critical and fiduciary nature.
Within this context, Article 13 of Law No. 132/2025 introduces a principle-based discipline governing the use of artificial intelligence in intellectual professions, presenting itself as a norm of equilibrium between technological innovation and the protection of professional autonomy.

Thus, Law No. 132/2025 stands as the first European example of comprehensive national regulation on the application of artificial intelligence to the professions, reaffirming the centrality of the human person in the governance of technology and the need to preserve, even in the digital age, the cognitive and deontological dignity of intellectual work.

Adelaide Barbiero, Biancamaria Campasso, Augusto Cipriani, Roberto Di Cillo, Daniele Scelta.

Bunny Box | LegalTech Lab
