Artificial intelligence (AI) is a set of technologies that enables computers to perform advanced tasks, such as seeing, understanding, and analyzing data, and making recommendations based on it. The concept was introduced in 1950 in a paper by Alan Turing, which evaluated whether a machine could impersonate a human being during a written interaction.¹
Over the decades, artificial intelligence has evolved significantly, mainly with the advancement of computer technology. However, it was only in 2022 that ChatGPT was launched, an AI designed to simulate human interactions by offering coherent responses based on deep learning.² This natural language model has proven valuable in several applications, and in 2023 new AI tools followed, introducing new types of creations, such as images and voices.
From a legal standpoint, AI can be used to analyze documents, review contracts, and assist in the preparation of procedural documents, offering benefits such as speed, accuracy, and cost reduction. However, as this is an exceptionally new subject, a number of legal and ethical considerations arise, which require careful analysis, especially with regard to professional secrecy and confidentiality.
Professional secrecy is the attorney's duty to keep the client's information secret, disclosing it only with the client's authorization or when required by law. Confidentiality, on the other hand, refers to the commitment to handle the client's information with discretion and security, ensuring protection against unauthorized access or disclosure by third parties.
In his book Inteligência Artificial e Direito [Artificial Intelligence and Law], Fabiano Hartmann Peixoto highlights that the use of AI in law raises several ethical questions, since AI algorithms can be influenced by unconscious biases contained in the training data, leading to biased and discriminatory results. Thus, AI has the potential to perpetuate or even amplify existing inequalities in the legal system.
When an attorney uses AI in open systems – publicly available, internet-based tools whose terms of service indicate that data is not confidential – they expose their clients' confidential data to the risk of breach. This information can then be accessed by other AI users or by hackers, compromising professional secrecy and data confidentiality.
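As a purely illustrative sketch (not a substitute for proper anonymization), one precaution before submitting text to a publicly hosted AI tool is to scrub obvious client identifiers from the prompt. The patterns and labels below are hypothetical examples; real-world redaction of legal documents is far more involved:

```python
import re

# Hypothetical sketch: replace obvious identifiers with placeholders
# before a prompt leaves the office. Patterns here are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CPF": re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b"),  # Brazilian taxpayer ID
}

def redact(text: str) -> str:
    """Substitute each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client Maria, CPF 123.456.789-01, email maria@example.com, seeks review."
print(redact(prompt))
# → Client Maria, CPF [CPF], email [EMAIL], seeks review.
```

Even with such scrubbing, context in the remaining text can still identify a client, which is why the closed-system option discussed below remains the safer choice.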
Beyond open systems, three other AI options are available to attorneys. Some systems allow users to opt out of having their data used for training purposes; however, this safeguard is fragile, as the terms of use are subject to change. Another possibility is to hire providers offering access to exclusive AI systems, restricted to authorized individuals; yet even with such dedicated versions, the provider may still use the data, undermining confidentiality. Finally, the safest alternative is an AI programmed to operate in a closed system, training its algorithms only on the client's own data, which provides greater security but involves high implementation and maintenance costs.
Therefore, AI tools applied to law can be beneficial, but they also pose ethical and legal challenges that attorneys must consider, seeking ways to mitigate any risks that may harm their clients' interests and rights.