Regulation of artificial intelligence in law

November 26, 2025

Artificial intelligence (AI) represents one of the most profound technological revolutions in recent history, with the potential to redefine society, the economy, and human interactions. Its ability to process large volumes of data, identify complex patterns, and perform tasks once exclusive to the human mind has generated transformative impacts across multiple sectors. From medicine, with more accurate diagnoses, to industry, with process optimization, AI is reshaping the way we live and work. At the epicenter of this transformation, the legal sector stands out as a particularly fertile ground for AI applications and, consequently, for AI regulation.

Law has always relied on specialized knowledge, the analysis of complex texts, logical reasoning, and human judgment. However, the rise of AI, especially with the advent of generative models and Large Language Models (LLMs), challenges these assumptions and introduces new tools and methodologies, promising to enhance efficiency, accessibility, and potentially fairness within the legal system. The automation of repetitive tasks, advanced legal research, predictive analysis of litigation outcomes, and assistance in drafting documents are just a few examples of how AI is already permeating legal practice.

Nevertheless, this technological revolution does not come without challenges. Complex ethical issues, concerns about privacy and data security, the risk of algorithmic bias, and the need to ensure human oversight and accountability are central themes that demand careful attention and regulation. Hallucinations and biases have been major obstacles for legal professionals, whether in public office, law firms, or in-house legal departments.

Brazil, recognizing the urgency and importance of addressing these challenges, has taken a leading role in regulating AI within the legal context. Initiatives such as Recommendation #001/2024 from the Brazilian Bar Association (OAB) and Rule #615/2025 from Brazil's National Council of Justice (CNJ) represent significant milestones in this effort, aiming to establish clear guidelines for the ethical, responsible, and effective use of AI by legal professionals and within the judicial system.

The Artificial Intelligence Revolution and Its General Applications

Artificial intelligence, at its core, refers to the ability of machines to simulate human intelligence, performing tasks such as learning, reasoning, perception, and decision-making. The evolution of AI has been exponential, driven by advances in algorithms, computing power, and data availability. Key features of modern AI include Machine Learning (ML), where systems learn from data without being explicitly programmed; Deep Learning (DL), a subfield of machine learning that uses multi-layered artificial neural networks to process complex information; and Natural Language Processing (NLP), which enables machines to understand, interpret, and generate human language.

These technologies have found applications across a wide range of economic sectors, transforming processes and creating new opportunities for innovation. In healthcare, AI assists in diagnosis, drug discovery, and personalized treatments. In the financial sector, it supports fraud detection, risk analysis, and algorithmic trading. In retail, it personalizes product recommendations and optimizes inventory management. In the automotive industry, it underpins autonomous vehicles and advanced driver-assistance systems. Agriculture benefits from AI through crop monitoring and resource optimization.

The expected impacts of AI on society are substantial. Economically, AI promises to boost productivity and efficiency, driving growth and sustainable development. However, it also raises concerns about job automation and the need to retrain the workforce. Socially, AI has the potential to improve quality of life by offering solutions to complex challenges, but it may also exacerbate inequalities if not implemented equitably. Privacy and data protection emerge as critical issues, given the vast amount of information processed by AI systems.

Artificial Intelligence in the Legal Sector: Transformations and Opportunities

The introduction of AI into the legal sector marks one of the most significant shifts in legal practice since the invention of the printing press. Far from being just an auxiliary tool, AI is reshaping how lawyers, judges, prosecutors, and other legal professionals carry out their work, opening up opportunities for greater efficiency, optimization, and potentially broader access to justice.

One of the prominent applications of AI in law is task automation. Legal research, traditionally time-consuming and resource-intensive, is being transformed by AI systems capable of scanning vast databases of legislation, case law, and doctrine in record time, identifying relevant precedents and patterns that would be nearly impossible to detect manually. Document analysis tools can review contracts, briefs, and other legal texts, flagging clauses, inconsistencies, or risks, and dramatically reducing the time required for due diligence and contract review. Furthermore, generative AI assists in the drafting of legal documents, memos, and opinions by producing initial drafts that are subsequently refined by human professionals, allowing them to focus on more strategic and high-value tasks.

Data analytics and litigation outcome prediction represent another area of transformative potential. By analyzing historical litigation data, such as case types, arguments, judge profiles, and outcomes, AI algorithms can provide predictive insights into case success rates, expected compensation, or probable duration of proceedings. This “predictive justice” capability helps attorneys develop procedural strategies, negotiate settlements, and advise clients, leading to more informed, evidence-based decisions.

AI also offers support in judicial decision-making. Although final rulings must remain with human judges, AI systems can, for example, identify conflicting precedents, summarize complex arguments, and flag potential biases in prior decisions. This can enhance consistency and fairness in judgments while accelerating trial processes.

The impact of AI on judicial efficiency is undeniable. Automation of administrative tasks, intelligent case triage, and optimized resource allocation can reduce backlogs, cut operational costs, and speed up dispute resolution, making access to justice faster and more affordable.

However, integrating AI into the legal sector raises fundamental ethical challenges. Algorithmic transparency remains an open question: how are decisions made, and what data is used? A lack of clarity can erode trust in the justice system. Accountability is another critical issue: who bears responsibility for errors or harm caused by AI systems? Should liability rest on the developer, the end user, or the system itself? Data security and privacy are paramount, given the sensitive nature of legal information.

One of the most pressing concerns is algorithmic bias, which arises when training data reflects historical and social prejudices. In such cases, AI can perpetuate or even amplify such biases, producing discriminatory outcomes. This is particularly serious in a legal context that upholds justice and fairness as core principles. AI could, for example, replicate racial or socioeconomic biases embedded in past rulings, compromising impartiality.

For the legal field, AI represents both a challenge and an opportunity. Automating routine tasks can free attorneys to focus on more strategic and creative aspects of law, but it also demands continuous retraining and adaptation. The ability to work with and manage AI tools will become an essential skill. For judicial institutions, the challenge is to integrate AI while safeguarding fundamental legal principles, ensuring human oversight, and maintaining public trust, all while capturing the gains of efficiency and modernization. Regulation is therefore crucial to guide this transition, ensuring AI serves the cause of justice rather than undermining it.

Recommendation #001/2024 from the Brazilian Bar Association

The Brazilian Bar Association (OAB), acknowledging the growing impact of artificial intelligence on legal practice, issued Recommendation #001/2024. The document seeks to guide legal professionals on the ethical and responsible use of these technologies. The context and purpose of the Recommendation are clear: given the rapid evolution of AI, especially generative models, the OAB recognizes the need to establish guidelines ensuring that technological innovation aligns with ethical principles and the prerogatives of the legal profession, safeguarding client interests and the integrity of the legal system.

The legal foundations of the Recommendation are robust, grounded in the pillars of the Brazilian legal system and the regulation of the legal profession itself: the OAB Statute (Law #8,906/1994), which defines the prerogatives and duties of attorneys at law; the Code of Ethics and Discipline of the OAB, which sets standards of professional conduct; the General Data Protection Act (LGPD – Law #13,709/2018), which ensures privacy and personal data protection; and the Brazilian Code of Civil Procedure (CPC – Law #13,105/2015), which governs procedural action. Furthermore, the Recommendation aligns with international standards, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) and UN resolutions, reinforcing Brazil's commitment to the global debate on AI governance.

The main provisions and guidelines of Recommendation #001/2024 are comprehensive and detailed, addressing the most critical aspects of AI use in law:

a) Client data confidentiality and protection: The Recommendation reaffirms that confidentiality is a non-negotiable pillar of the attorney-client relationship. It expressly prohibits the sharing of client data with AI providers for model training purposes, except with specific, explicit informed consent. This aims to prevent sensitive information from being absorbed and potentially disclosed or misused by AI systems.

b) Human supervision and responsibility: AI is defined as a supporting tool, never a substitute for the professional judgment of an attorney at law. Ultimate liability for any legal act or decision, even when assisted by AI, remains entirely with the human professional, ensuring that the autonomy and ethics of the legal profession remain central to practice.

c) Mandatory verification of information generated by AI: Due to the probabilistic and sometimes hallucinatory nature of generative AI models, the Recommendation establishes the obligation for attorneys at law to verify and validate all outputs (information, arguments, doctrine, and case law) generated or suggested by AI systems. The responsibility for the accuracy and reliability of legal content rests with the attorney at law.

d) Transparency and informed consent from the client: It is imperative that the attorney inform the client in advance about the use of AI tools in their case. This consent must be formalized in writing prior to the use of such technology. Transparency is essential for building and maintaining trust in professional relationships.

e) Content of the consent form: The consent form must be detailed, explaining the purpose of AI use, expected benefits, limitations of the technology, risks of inaccuracy or data exposure, and security measures adopted to protect client information.

f) Professional competence and continuous learning: The Recommendation highlights the need for lawyers to develop digital skills and remain up to date on AI technologies. Continuous learning is essential for the safe and effective use of AI and for understanding its capabilities and limitations.

g) Managerial responsibility in law firms: Law firms and legal societies should implement clear internal policies for AI use, ensuring that all professionals follow ethical and data security standards. This involves staff training and adoption of secure technologies.

h) Specifics in litigation activities: In litigation, the Recommendation calls for heightened caution, especially when it comes to presenting arguments or evidence generated by AI, which should always be reviewed and validated by the attorney at law.

The practical implications for attorneys and law firms are substantial. The Recommendation calls for a mindset shift and the adoption of new work protocols. Attorneys at law will need to be more diligent in verifying information, more transparent with clients, and more proactive in seeking AI training.

Law firms should invest in secure infrastructure, develop robust internal policies, and foster a culture of responsible technology use.

To implement the Recommendation in daily practice, suggested measures include creating standardized consent forms, conducting regular staff training, adopting AI tools with security and privacy guarantees, and appointing individuals responsible for AI governance within law firms. OAB Recommendation #001/2024 is not limited to an ethical guide; it constitutes a call to adapt processes and responsibilities, ensuring that Brazilian law can embrace technological innovation without compromising its fundamental values.

Rule #615/2025 of Brazil's National Council of Justice

Rule #615/2025 of the CNJ represents a key regulatory milestone for the use of artificial intelligence in the Brazilian judicial system. Published in a context of rapidly evolving AI, especially with the introduction of generative models, this Rule updates and expands the guidelines established by Rule #332/2020, which proved insufficient in the face of new technological advances. Its purpose is to establish a robust regulatory framework that enables innovation and efficiency in Brazilian courts through AI, while protecting fundamental rights, ensuring transparency and accountability, and mitigating the risks associated with these technologies.

This Rule is based on fundamental principles that guide the implementation of AI in the Judiciary:

a) Human supervision and responsibility: Just as in the OAB Recommendation, the CNJ's Rule reaffirms that AI should be used as a supporting tool, never replacing human judgment. The final decision and responsibility always rest with the magistrate or civil servant.

b) Risk-based approach: The Rule adopts risk categorization for AI applications, dividing them into excessive, high, and low risk, which allows for proportionate regulation, focusing on the areas with the greatest impact. Solutions that pose excessive risk are prohibited, while high-risk solutions require rigorous impact assessments and the implementation of additional safety measures.

c) Protection of fundamental rights and data (LGPD): The Rule emphasizes compliance with the General Data Protection Act (LGPD) and the guarantee of other fundamental rights, such as non-discrimination, human dignity, due legal process, and personal data privacy.

d) Transparency, auditability, and explainability: AI systems must operate with complete transparency, allowing their decisions to be audited and understood. This is crucial for preserving public trust in the justice system.

e) Data quality and security: The Rule requires that the data used to train and operate AI systems be of high quality, accurate, and secure, minimizing the risk of bias and errors in the results.

To ensure the effectiveness of these principles, the Rule defines clear implementation mechanisms, including the creation of monitoring committees and platforms:

a) Brazilian Committee for AI in the Judiciary (CNIAJ): The CNIAJ was created, composed of a multidisciplinary team responsible for formulating policies, issuing guidelines, overseeing the implementation of AI, and ensuring compliance with the Rule. Its responsibilities include registering and evaluating AI solutions.

b) Sinapses Platform: The existing Sinapses Platform is designated as the central environment for the registration, testing, training, distribution, and auditing of AI solutions used in the Judiciary. This ensures centralized control and monitoring capabilities.

c) Algorithmic Impact Assessment (AIA): For high-risk AI solutions, an Algorithmic Impact Assessment is mandatory, which must identify, evaluate, and mitigate risks to fundamental rights and equity.

d) Data curation and privacy by design/default: The Rule requires the adoption of rigorous data curation practices and the implementation of privacy by design and privacy by default principles in all AI solutions.

The Rule also details specific guidelines and prohibitions:

a) Prohibited uses of AI: Uses of AI that imply absolute dependence on the machine for judicial decisions, crime prediction based on individual profiles, or social classification of individuals are expressly prohibited. These prohibitions aim to protect human autonomy and prevent the perpetuation of injustices.

b) Use and contracting of LLMs and generative AI: The Rule establishes specific requirements for the use and contracting of LLMs and other generative AI tools, considering their risks and benefits.

c) Requirements for courts hiring private LLMs: Courts that decide to contract LLMs from private providers must ensure that confidential data is not used for model training, that robust confidentiality clauses exist, and that data sovereignty is preserved.

d) Mandatory registration in Sinapses and publication of impact assessments: All AI solutions, whether developed internally or contracted, must be registered on the Sinapses Platform, with Algorithmic Impact Assessments published to ensure transparency.

e) Public versus confidential data: The Rule establishes differences in the treatment of public and confidential data, imposing stricter safeguards for confidential data, especially concerning the training of AI models.

f) User control and autonomy: AI solutions should be designed to ensure user control and human operator autonomy, with the operator being able to intervene, correct, and override suggestions made by the machine.

The Rule establishes a twelve-month implementation schedule for courts to adapt to the new rules, demonstrating the urgency and seriousness with which the CNJ is addressing the issue.

The implications for judges, civil servants, and other actors in the system are significant. Judges and civil servants will need to be trained to use AI tools ethically and effectively, understanding their limitations and the need for human oversight. The organizational culture of the Judiciary will need to adapt to incorporate AI as an aid, without losing sight of the values of justice and equity. The potential impact of modernizing the Judiciary is immense, promising greater efficiency, speed, and, ultimately, more equitable and transparent access to justice for all citizens.

Rule #615/2025 positions Brazil as a leader in AI governance in the public sector, seeking a balance between technological innovation and the protection of fundamental rights.

Prospects indicate a growing convergence between technology and law. Increasingly sophisticated AI systems are expected to be developed, capable of performing more complex legal analyses, predicting outcomes with greater accuracy, and even assisting in the formulation of new laws. The need to harmonize regulations between Brazil and the international landscape will become increasingly urgent as AI becomes a global technology. International cooperation will be essential to establish ethical and technical standards that ensure effectiveness and equity in the use of AI.

Reflection on the role of attorneys at law and judges in the age of AI is becoming increasingly relevant. Far from being replaced, these professionals will have their roles redefined. Attorneys tend to become legal architects, using AI to optimize their research and writing while maintaining their focus on strategy, persuasive argumentation, and the human relationship with clients. Judges, in turn, will continue to be responsible for administering justice, using AI as an auxiliary tool in case analysis. The challenge will be to ensure that ultimate responsibility for decisions remains with humans, maintaining the supremacy of human will as a fundamental pillar of the justice system. AI should be a tool to improve justice, not to dehumanize it.

Licks Attorneys' COMPLIANCE Blog

Regulation of artificial intelligence in law

No items found.

Artificial intelligence (AI) represents one of the most profound technological revolutions in recent history, with the potential to redefine society, the economy, and human interactions. Its ability to process large volumes of data, identify complex patterns, and perform tasks once exclusive to the human mind has generated transformative impacts across multiple sectors. From medicine, with more accurate diagnoses, to industry, with process optimization, AI is reshaping the way we live and work. At the epicenter of this transformation, the legal sector stands out as a particularly fertile ground for AI applications and, consequently, for AI regulation.

Law has always relied on specialized knowledge, the analysis of complex texts, logical reasoning, and human judgment. However, the rise of AI, especially with the advent of generative models and Large Language Models (LLMs), challenges these assumptions and introduces new tools and methodologies, promising to enhance efficiency, accessibility, and potentially fairness within the legal system. The automation of repetitive tasks, advanced legal research, predictive analysis of litigation outcomes, and assistance in drafting documents are just a few examples of how AI is already permeating legal practice.

Nevertheless, this technological revolution does not come without challenges. Complex ethical issues, concerns about privacy and data security, the risk of algorithmic bias, and the need to ensure human oversight and accountability are central themes that demand careful attention and regulation. Hallucinations and biases have been major obstacles for legal professionals, whether in public office, law firms, or in-house legal departments.

Brazil, recognizing the urgency and importance of addressing these challenges, has taken a leading role in regulating AI within the legal context. Initiatives such as Recommendation #001/2024 from the Brazilian Bar Association (OAB) and Rule #615/2025 from Brazil's  National Council of Justice (CNJ) represent significant milestones in this effort, aiming to establish clear guidelines for the ethical, responsible, and effective use of AI by legal professionals and within the judicial system.

The Artificial Intelligence Revolution and Its General Applications

Artificial intelligence, at its core, refers to the ability of machines to simulate human intelligence, performing tasks such as learning, reasoning, perception, and decision-making. The evolution of AI has been exponential, driven by advances in algorithms, computing power, and data availability. Key features of modern AI include Machine Learning (ML), where systems learn from data without being explicitly programmed; Deep Learning (DL), a subfield of machine learning that uses multi-layered artificial neural networks to process complex information; and Natural Language Processing (NLP), which enables machines to understand, interpret, and generate human language.

These technologies have found applications across a wide range of economic sectors, transforming processes and creating new opportunities for innovation. In healthcare, AI assists in diagnosis, drug discovery, and personalized treatments. In the financial sector, it is important for fraud detection, risk analysis, and algorithmic trading. In retail, it personalizes product recommendations and optimizes inventory management. In the automotive industry, it underpins autonomous vehicles and advanced driver-assistance systems. Agriculture benefits from AI through crop monitoring and resource optimization.

The expected impacts of AI on society are substantial. Economically, AI promises to boost productivity and efficiency, driving growth and sustainable development. However, it also raises concerns about job automation and the need to retrain the workforce. Socially, AI has the potential to improve quality of life by offering solutions to complex challenges, but it may also exacerbate inequalities if not implemented equitably. Privacy and data protection emerge as critical issues, given the vast amount of information processed by AI systems.

Artificial Intelligence in the Legal Sector: Transformations and Opportunities

The introduction of AI into the legal sector marks one of the most significant shifts in the legal practice since the invention of the printing press. Far from being just an auxiliary tool, AI is reshaping how lawyers, judges, prosecutors, and other legal professionals carry out their work, opening up opportunities for greater efficiency, optimization,  and potentially broader access to justice.

One of the prominent applications of AI in law is task automation. Legal research, traditionally time-consuming and resource-intensive, is being transformed by AI systems capable of scanning vast databases of legislation, case law, and doctrine in record time, identifying relevant precedents and patterns that would be nearly impossible to detect manually. Document analysis tools can review contracts, briefs, and other legal texts, flagging clauses, inconsistencies, or risks, and dramatically reducing the time required for due diligence and contract review. Furthermore, generative AI assists in the drafting of legal documents, memos, and opinions by producing initial drafts that are subsequently refined by human professionals, allowing them to focus on more strategic and high-value tasks.

Data analytics and litigation outcome prediction represent another area of transformative potential. By analyzing historical litigation data, such as case types, arguments, judge profiles, and outcomes, AI algorithms can provide predictive insights into case success rates, expected compensation, or probable duration of proceedings. This “predictive justice” capability helps attorneys develop procedural strategies, negotiate settlements, and advise clients, leading to more informed, evidence-based decisions.

AI also offers support in judicial decision-making. Although final rulings must remain with human judges, AI systems can, for example, identify conflicting precedents, summarize complex arguments, and flag potential biases in prior decisions. This can enhance consistency and fairness in judgements while accelerating trial processes.

The impact of AI on judicial efficiency is undeniable. Automation of administrative tasks, intelligent case triage, and optimized resource allocation can reduce backlogs, cut operational costs, and speed up dispute resolution, making access to justice faster and more affordable.

However, integrating AI into the legal sector raises fundamental ethical challenges. Algorithmic transparency remains an open question: how are decisions made, and what data is used? A lack of clarity can erode trust in the justice system. Accountability is another key critical issue: who bears responsibility for errors or harm caused by AI systems? Should liability rest on the developer, the end user, or the system itself? Data security and privacy are paramount, given the sensitive nature of legal information.

One of the most pressing concerns is algorithmic bias, which arises when training data reflects historical and social prejudices. In such cases, AI can perpetuate or even amplify such biases, producing discriminatory outcomes. This is particularly serious in a legal context that upholds justice and fairness as core principles. AI could, for example, replicate racial or socioeconomic biases embedded in past rulings, compromising impartiality.

For the legal field, AI represents both a challenge and an opportunity. Automating routine tasks can free attorneys to focus on more strategic and creative aspects of law, but it also demands continuous retraining and adaptation. The ability to work with and manage AI tools will become an essential skill. For judicial institutions, the challenge is to integrate AI while safeguarding fundamental legal principles, ensuring human oversight, and maintaining public trust, while leveraging efficiency and modernization. Regulation is therefore crucial to guide this transition, ensuring AI serves the cause of justice rather than undermining it.

Recommendation #001/2024 from the Brazilian Bar Association

The Brazilian Bar Association (OAB), acknowledging the growing impact of artificial intelligence on legal practice, issued Recommendation #001/2024. The document seeks to guide legal professionals on the ethical and responsible use of these technologies. The context and purpose of the Recommendation are clear: given the rapid evolution of AI, especially generative models, the OAB recognizes the need to establish guidelines ensuring that technological innovation aligns with ethical principles and the prerogatives of the legal profession, safeguarding client interests and the integrity of the legal system.

The legal foundations of the Recommendation are robust, grounded in the pillars of the Brazilian legal system and the regulation of the legal profession itself: the OAB Statute (Law #8.906/1994), which defines the prerogatives and duties of attorneys at law; the Code of Ethics and Discipline of the OAB, which sets standards of professional conduct; the General Data Protection Act (LGPD – Law #13,709/2018), which ensures privacy and personal data protection; and the Brazilian Code of Civil Procedure (CPC – Law #13,105/2015), which governs procedural action. Furthermore, the Recommendation aligns with international standards, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) and UN resolutions, reinforcing Brazil's commitment to the global debate on AI governance.

The main provisions and guidelines of Recommendation #001/2024 are comprehensive and detailed, addressing the most critical aspects of AI use in law:

a) Client data confidentiality and protection: The Recommendation reaffirms that confidentiality is a non-negotiable pillar of the attorney-client relationship. It expressly prohibits the sharing of client data with AI providers for model training purposes, except with specific, explicit informed consent. This aims to prevent sensitive information from being absorbed and potentially disclosed or misused by AI systems.

b) Human supervision and responsibility: AI is defined as a supporting tool, never a substitute for the professional judgment of an attorney at law. Ultimate liability for any legal act or decision, even when assisted by AI, remains entirely with the human professional, ensuring that the autonomy and ethics of the legal profession remain central to practice.

c) Mandatory verification of information generated by AI: Due to the probabilistic and sometimes hallucinatory nature of generative AI models, the Recommendation establishes the obligation for attorneys at law to verify and validate all outputs - information, arguments, doctrines, and case law, generated or suggested by AI systems. The responsibility for the accuracy and reliability of legal content rests with the attorney at law.

d) Transparency and informed consent from the client: It is imperative that the attorney inform the client in advance about the use of AI tools in their case. This consent must be formalized in writing prior to the use of such technology. Transparency is essential for building and maintaining trust in professional relationships.

e) Content of the consent form: The consent form must be detailed, explaining the purpose of AI use, the expected benefits, the limitations of the technology, the risks of inaccuracy or data exposure, and the security measures adopted to protect client information.

f) Professional competence and continuous learning: The Recommendation highlights the need for lawyers to develop digital skills and remain up to date on AI technologies. Continuous learning is essential for the safe and effective use of AI, understanding its capabilities and limitations.

g) Managerial responsibility in law firms: Law firms and legal partnerships should implement clear internal policies for AI use, ensuring that all professionals follow ethical and data security standards. This involves staff training and the adoption of secure technologies.

h) Specifics in litigation activities: In litigation, the Recommendation calls for heightened caution, especially when it comes to presenting arguments or evidence generated by AI, which should always be reviewed and validated by the attorney at law.

The practical implications for attorneys and law firms are substantial. The Recommendation calls for a mindset shift and the adoption of new work protocols. Attorneys at law will need to be more diligent in verifying information, more transparent with clients, and more proactive in seeking AI training.

Law firms should invest in secure infrastructure, develop robust internal policies, and foster a culture of responsible technology use.

To implement the Recommendation in daily practice, the creation of standardized consent forms, regular staff training, the adoption of AI tools with security and privacy guarantees, and the appointment of individuals responsible for AI governance within law firms are suggested. OAB Recommendation #001/2024 is not merely an ethical guide; it constitutes a call to adapt processes and responsibilities, ensuring that Brazilian law can embrace technological innovation without compromising its fundamental values.

Rule #615/2025 of Brazil's National Council of Justice

Rule #615/2025 of the CNJ represents a key regulatory milestone for the use of artificial intelligence in the Brazilian judicial system. Published in a context of rapidly evolving AI, especially with the introduction of generative models, this Rule updates and expands the guidelines established by Rule #332/2020, which proved insufficient in the face of new technological advances. Its purpose is to establish a robust regulatory framework that enables innovation and efficiency in Brazilian courts through AI, while protecting fundamental rights, ensuring transparency and accountability, and mitigating the risks associated with these technologies.

This Rule is based on fundamental principles that guide the implementation of AI in the Judiciary:

a) Human supervision and responsibility: Just as in the OAB Recommendation, the CNJ's Rule reaffirms that AI should be used as a supporting tool, never replacing human judgment. The final decision and responsibility always rest with the magistrate or civil servant.

b) Risk-based approach: The Rule adopts risk categorization for AI applications, dividing them into excessive, high, and low risk, which allows for proportionate regulation, focusing on the areas with the greatest impact. Solutions that pose excessive risk are prohibited, while high-risk solutions require rigorous impact assessments and the implementation of additional safety measures.

c) Protection of fundamental rights and data (LGPD): The Rule emphasizes compliance with the General Data Protection Act (LGPD) and the guarantee of other fundamental rights, such as non-discrimination, human dignity, due legal process, and personal data privacy.

d) Transparency, auditability, and explainability: AI systems must operate with complete transparency, allowing their decisions to be audited and understood. This is crucial for preserving public trust in the justice system.

e) Data quality and security: The Rule requires that the data used to train and operate AI systems be of high quality, accurate, and protected against errors, minimizing the risk of bias and errors in the results.

To ensure the effectiveness of these principles, the Rule defines clear implementation mechanisms, including the creation of monitoring committees and platforms:

a) National Committee for AI in the Judiciary (CNIAJ): The Rule creates the CNIAJ, composed of a multidisciplinary team responsible for formulating policies, issuing guidelines, overseeing the implementation of AI, and ensuring compliance with the Rule. Its responsibilities include registering and evaluating AI solutions.

b) Sinapses Platform: The existing Sinapses Platform is designated as the central environment for the registration, testing, training, distribution, and auditing of AI solutions used in the Judiciary. This ensures centralized control and monitoring capabilities.

c) Algorithmic Impact Assessment (AIA): For high-risk AI solutions, an Algorithmic Impact Assessment is mandatory, which must identify, evaluate, and mitigate risks to fundamental rights and equity.

d) Data curation and privacy by design/default: The Rule requires the adoption of rigorous data curation practices and the implementation of privacy by design and privacy by default principles in all AI solutions.

The Rule also details specific guidelines and prohibitions:

a) Prohibited uses of AI: Uses of AI that imply absolute dependence on the machine for judicial decisions, crime prediction based on individual profiles, or social classification of individuals are expressly prohibited. These prohibitions aim to protect human autonomy and prevent the perpetuation of injustices.

b) Use and contracting of LLMs and generative AI: The Rule establishes specific requirements for the use and contracting of LLMs and other generative AI tools, considering their risks and benefits.

c) Requirements for courts hiring private LLMs: Courts that decide to contract LLMs from private providers must ensure that confidential data is not used for model training, that robust confidentiality clauses exist, and that data sovereignty is preserved.

d) Mandatory registration in Sinapses and publication of impact assessments: All AI solutions, whether developed internally or contracted, must be registered on the Sinapses Platform, with Algorithmic Impact Assessments published to ensure transparency.

e) Public versus confidential data: The Rule establishes differences in the treatment of public and confidential data, imposing stricter safeguards for confidential data, especially concerning the training of AI models.

f) User control and autonomy: AI solutions should be designed to ensure user control and human operator autonomy, with the operator being able to intervene, correct, and override suggestions made by the machine.

The Rule establishes a twelve-month implementation schedule for courts to adapt to the new rules, demonstrating the urgency and seriousness with which the CNJ is addressing the issue.

The implications for judges, civil servants, and other actors in the system are significant. Judges and civil servants will need to be trained to use AI tools ethically and effectively, understanding their limitations and the need for human oversight. The organizational culture of the Judiciary will need to adapt to incorporate AI as an aid, without losing sight of the values of justice and equity. The potential impact of modernizing the Judiciary is immense, promising greater efficiency, speed, and, ultimately, more equitable and transparent access to justice for all citizens.

Rule #615/2025 positions Brazil as a leader in AI governance in the public sector, seeking a balance between technological innovation and the protection of fundamental rights.

Prospects indicate a growing convergence between technology and law. Increasingly sophisticated AI systems are expected to be developed, capable of performing more complex legal analyses, predicting outcomes with greater accuracy, and even assisting in the formulation of new laws. The need to harmonize regulations between Brazil and the international landscape will become increasingly urgent as AI becomes a global technology. International cooperation will be essential to establish ethical and technical standards that ensure effectiveness and equity in the use of AI.

Reflection on the role of attorneys at law and judges in the age of AI is becoming increasingly relevant. Far from being replaced, these professionals will have their roles redefined. Attorneys tend to become legal architects, using AI to optimize their research and writing while maintaining their focus on strategy, persuasive argumentation, and the human relationship with clients. Judges, in turn, will continue to be responsible for administering justice, using AI as an auxiliary tool in case analysis. The challenge will be to ensure that ultimate responsibility for decisions remains with humans, maintaining the supremacy of human will as a fundamental pillar of the justice system. AI should be a tool to improve justice, not to dehumanize it.
