Regulations on Artificial Intelligence (AI) are developing rapidly around the world, reflecting growing concerns about the technology's ethical, social, economic, and security impacts. Different regions have adopted distinct approaches, influenced by their cultural, political, and economic priorities, in an effort to balance innovation with public protection.
Below is an overview of the main AI regulations worldwide, concluding with Brazil's position on this important issue.
1. European Union
The European Union (EU) is a pioneer in comprehensive AI regulation, with a focus on protecting fundamental rights, security, and democratic values. Its approach is often seen as a model for other jurisdictions due to its comprehensiveness and rigor.
Its primary regulation on the subject is the AI Act, approved in March 2024 and implemented gradually over the following years: bans come into force six months after the law's entry into force, codes of practice after nine months, governance rules after twelve months, and obligations for high-risk systems after 24 months.
The AI Act employs a risk-based approach to classify AI systems, ensuring that regulatory obligations are proportionate to their potential for harm. The risk tiers are detailed below, followed by a short illustrative sketch:
AI Act classification according to risk
Unacceptable Risk: systems that manipulate human behavior (e.g., voice-activated toys that encourage dangerous behavior), exploit the vulnerabilities of specific groups (e.g., people with mental or physical disabilities), perform government “social scoring,” or use emotion recognition in workplaces or educational institutions. These are strictly prohibited due to their potential to violate fundamental rights.
High Risk: systems that may cause significant harm to health, safety, or fundamental rights. These include AI used in critical infrastructure (e.g., traffic management, power grids), medical devices, education (e.g., exam assessment), employment (e.g., resume screening), law enforcement (e.g., criminal risk assessment), migration management, and remote biometric identification systems (with limited exceptions for law enforcement in serious cases). These systems are subject to strict obligations before and after being placed on the market.
Limited Risk: systems with specific transparency obligations, ensuring users are aware they are interacting with AI. Examples include chatbots, emotion recognition systems, and deepfakes.
Minimal or No Risk: the vast majority of AI systems, such as AI-based games or spam filters. They face no additional obligations but are encouraged to follow voluntary codes of conduct.
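To make the tiered structure concrete, the sketch below maps a few of the example use cases above to risk tiers and their associated obligations. It is a simplified illustration, not a legal classification tool: the tier names follow the AI Act, but the use-case keys, the mapping, and the obligation lists are assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict pre- and post-market obligations
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no additional obligations

# Illustrative mapping from example use cases to AI Act tiers.
# Real classification requires legal analysis of the Act's prohibitions and Annex III.
TIER_BY_USE_CASE = {
    "government_social_scoring": RiskTier.UNACCEPTABLE,
    "emotion_recognition_at_work": RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "exam_assessment": RiskTier.HIGH,
    "traffic_management": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited - may not be placed on the market"],
    RiskTier.HIGH: ["risk management system", "data governance", "activity logging",
                    "human oversight", "pre-market conformity assessment"],
    RiskTier.LIMITED: ["disclose that users are interacting with AI or viewing synthetic content"],
    RiskTier.MINIMAL: ["no additional obligations (voluntary codes of conduct encouraged)"],
}

def obligations_for(use_case: str) -> list[str]:
    """Look up the illustrative obligations for a use case (defaults to minimal risk)."""
    return OBLIGATIONS[TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)]

print(obligations_for("resume_screening"))
```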
For high-risk systems, the requirements are detailed and comprehensive:
Requirements for High-Risk Systems
Robust risk management systems: implementing a continuous lifecycle of risk identification, analysis, assessment, and mitigation, including post-market monitoring.
High-quality data: training, validation, and testing data must be representative, relevant, complete, and free from biases that could lead to discrimination. Detailed documentation on data collection and curation is required.
Activity logging: systems must record events to ensure traceability, auditability, and performance monitoring over time.
Transparency and user information: users must be informed about the system's purpose, its capabilities and limitations, and how it may affect them.
Adequate human oversight: ensuring humans can intervene, correct, or override automated decisions, especially in critical situations.
Accuracy, robustness and cybersecurity: systems must be accurate for their intended purpose, resilient to errors and attacks, and protected against security vulnerabilities.
Pre-market conformity assessment: similar to CE certification for products, high-risk systems must undergo a conformity assessment, which may involve third-party audits.
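As an illustration of the activity-logging and traceability requirement above, the sketch below appends one auditable record per automated decision to a JSON Lines file. The schema and field names are assumptions made for illustration; the AI Act does not prescribe a specific logging format.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class InferenceEvent:
    """One auditable record per automated decision (illustrative schema)."""
    event_id: str
    timestamp: float
    model_version: str
    input_summary: str   # e.g., a hash or redacted summary, not raw personal data
    output: str
    human_override: bool = False

def log_event(event: InferenceEvent, path: str = "ai_audit_log.jsonl") -> None:
    # An append-only JSON Lines file keeps decisions traceable and easy to audit later.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Example: record a hypothetical resume-screening decision.
log_event(InferenceEvent(
    event_id=str(uuid.uuid4()),
    timestamp=time.time(),
    model_version="screening-model-1.3",
    input_summary="sha256:ab12...",
    output="shortlisted",
))
```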
The law imposes duties of transparency, explainability (the ability to understand how a decision was made), and auditability, especially for high-risk systems. Furthermore, it promotes respect for fundamental rights, non-discrimination, and the mitigation of algorithmic biases.
Regarding personal data, the AI Act complements and strongly aligns with the GDPR (General Data Protection Regulation), which regulates the processing of personal data. Any AI system that uses personal data must comply with both regulations, ensuring principles such as data minimization, purpose limitation, and security.
Serious misconduct, such as the use of prohibited AI systems or data breaches, can result in fines of up to €35 million or 7% of the company's annual global revenue, whichever is greater. Violations of high-risk system requirements can result in fines of up to €15 million or 3% of global revenue.
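The “whichever is greater” rule is simply a maximum over two amounts; the short snippet below computes the resulting cap for a hypothetical company, using the thresholds cited above.

```python
def fine_cap(annual_global_revenue_eur: float,
             fixed_cap_eur: float, revenue_share: float) -> float:
    """Upper bound of the fine: the greater of the fixed cap and the revenue share."""
    return max(fixed_cap_eur, revenue_share * annual_global_revenue_eur)

# Hypothetical company with EUR 2 billion in annual global revenue:
prohibited_practices = fine_cap(2e9, 35_000_000, 0.07)  # 140,000,000.0
high_risk_violations = fine_cap(2e9, 15_000_000, 0.03)  # 60,000,000.0
print(prohibited_practices, high_risk_violations)
```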
2. United States of America
The US approach to AI is more fragmented and sector-specific, combining federal executive orders, voluntary guidelines, and state or agency-specific laws. The focus is on balancing innovation with security and the protection of civil rights. Thus, the three primary sources of regulation are:
Sources of AI Regulation
Executive Order on AI: issued in October 2023, it is one of the most comprehensive frameworks in the world. It aims to ensure the safety, security, and reliability of AI while promoting innovation and competition. It includes guidelines for AI developers, privacy protection, promoting fairness, and combating fraud.
NIST AI Risk Management Framework (AI RMF): a voluntary guide published by the National Institute of Standards and Technology (NIST) to help organizations manage risks associated with the design, development, deployment, and use of AI.
State and Agency-Specific Rules: many states (such as California) and regulatory agencies (such as the FDA for medical AI or the FTC for AI in advertising) have issued their own guidelines or laws related to AI within their respective jurisdictions.
Regarding the October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, it is important to understand that its goal is to promote the safe and ethical development and use of AI across all federal agencies and to encourage similar practices in the private sector. It encompasses both foundational AI models (large models trained on vast datasets, capable of adapting to diverse tasks) and generative AI. The main points regulated by the order are:
Main Points Regulated by the Executive Order
Requires developers of the most powerful foundation models (those that pose a serious risk to national security, economic security, or public health) to share safety test results (including “red teaming” tests to identify vulnerabilities) with the government before public release.
Directs the development of standards for AI testing, including vulnerability assessments, simulated attacks, and risk mitigation to prevent malicious uses (e.g., for creating biological weapons or launching cyberattacks).
Implements measures to protect data privacy, including the development of privacy-by-design techniques and privacy-enhancing technologies (PETs).
Establishes requirements for AI systems that affect national security, public health, and safety, such as the creation of an AI cybersecurity program.
Focuses on promoting equity, protecting against algorithmic discrimination (e.g., in housing and employment), ensuring transparency of AI-generated content (e.g., with digital watermarks), and strengthening accountability, with an emphasis on protecting civil rights.
Strengthens the cybersecurity of AI systems and mandates the use of standards developed by NIST for AI security and risk management. The NIST AI Risk Management Framework (AI RMF), while voluntary, offers a comprehensive guide structured around four functions: Govern, Map, Measure, and Manage.
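The four RMF functions are often operationalized as a recurring checklist. The structure below is an illustrative sketch of such a checklist, not an official NIST artifact: the function names come from the AI RMF, while the example activities are assumptions.

```python
# Illustrative checklist keyed by the AI RMF's four functions.
# The function names come from NIST; the example activities are assumptions.
AI_RMF_CHECKLIST = {
    "Govern":  ["assign accountability for AI risk", "define a risk tolerance policy"],
    "Map":     ["inventory AI systems and their contexts of use", "identify impacted groups"],
    "Measure": ["test for bias and robustness", "track performance drift over time"],
    "Manage":  ["prioritize and mitigate identified risks", "plan incident response"],
}

def open_items(completed: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return the checklist activities not yet completed, grouped by RMF function."""
    return {fn: [a for a in acts if a not in completed.get(fn, set())]
            for fn, acts in AI_RMF_CHECKLIST.items()}

# Example: the organization has only assigned accountability so far.
print(open_items({"Govern": {"assign accountability for AI risk"}}))
```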
While the Executive Order does not establish direct penalties for the private sector, non-compliance could lead to contractual limitations with the US government, enforcement actions by industry regulators (e.g., the FTC for unfair or discriminatory practices, the FDA for AI in medical devices), or civil litigation.
3. China
China has adopted a proactive approach to AI regulation, focusing on cybersecurity, content control, social stability, and the promotion of innovation. Its regulations are characterized by strong state control and strict accountability for service providers. Below are the main AI regulations in China:
Major AI Regulations in China
Provisions on the Administration of Deep Synthesis Internet Information Services (2023) (Deepfake Regulations).
Interim Measures for the Management of Generative Artificial Intelligence Services (2023) (Generative AI Regulations).
Internet Information Service Algorithmic Recommendation Management Provisions (2022).
The main pillars of this approach are detailed below:
Main Pillars of AI Regulation in China
Scope: the rules cover a wide range of AI applications, from algorithms for content recommendation (e.g., news feeds, video platforms) to the generation of synthetic content (e.g., deepfakes, synthetic audio) and generative AI services, and apply broadly to AI service providers operating in China or serving Chinese users.
Content standards: AI-generated content must be aligned with “core socialist values” and must not infringe on rights, defame, spread false information, or generate illegal content (e.g., content that subverts state power).
Labeling: AI-generated content must be clearly identified (for instance, through visible watermarks or metadata) to prevent misinformation.
Training data: providers must ensure the authenticity and legality of the data used to train AI models.
Pre-deployment review: generative AI services must undergo security audits and algorithm assessments before their public deployment.
Provider accountability: AI service providers are held accountable for the content generated by their systems and must ensure data and content security. Users must have the option to disable recommendation algorithms and the right to explanations for automated decisions.
Data protection: strict requirements apply to personal data processing and information security, aligned with China's Cybersecurity Law (CSL) and Personal Information Protection Law (PIPL), including data localization requirements and security assessments for international data transfers.
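As a simple illustration of the labeling requirement above, the sketch below attaches machine-readable provenance metadata to a piece of generated text. The field names and structure are assumptions made for illustration; the Chinese rules also expect visible labels, which this sketch does not produce.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_generated_content(text: str, provider: str, model: str) -> dict:
    """Wrap AI-generated text with machine-readable provenance metadata (illustrative only)."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "provider": provider,
            "model": model,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

record = label_generated_content("Synthetic news summary ...", "ExampleProvider", "example-model-v1")
print(json.dumps(record, indent=2, ensure_ascii=False))
```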
These regulations establish administrative sanctions (e.g., warnings, suspension of services), substantial fines, and, in serious cases, criminal liability for individuals and companies.
4. United Kingdom
The UK has adopted a more pragmatic and innovation-friendly approach, seeking to avoid stifling technological development with comprehensive, centralized legislation. Instead, it opts to leverage existing regulatory frameworks. The government has thus proposed a set of principles for existing regulatory agencies to apply to AI within their respective sectors, rather than creating an entirely new law.
These principles include safety, transparency, explainability (the concept that a machine learning model and its output can be explained in a way that “makes sense” to a human), fairness, and accountability. They were outlined in the AI White Paper, a document published in 2023 that details the UK's strategy for AI, focusing on how existing regulators can apply their mandates to AI technology.
The main points raised by this framework are listed below:
Main Points raised by the AI White Paper
Delegates AI regulation to existing sectoral regulators (e.g., for healthcare, finance, competition, human rights) rather than creating a new, dedicated agency or a comprehensive law. The goal is for these regulators to apply general AI principles within their respective domains.
Establishes five non-statutory principles that regulators should apply to the use of AI in their sectors:
– Safety, security, and robustness: AI must be safe, reliable, and resistant to failures and manipulation.
– Transparency and explainability: AI decisions must be understandable and auditable.
– Fairness: AI must be used fairly, without discrimination, and with bias mitigation.
– Accountability and governance: there must be clarity about who is responsible for AI decisions and outcomes.
– Contestability and redress: individuals must have the means to contest AI decisions and seek redress for harm.
Leaves data protection issues to the UK GDPR (the British version of the EU's data protection law, adapted after the UK left the EU).
Regarding penalties, sanctions will be those already existing within the various sectoral regulatory regimes. For instance, the Information Commissioner's Office (ICO) would impose fines for data breaches, while the Financial Conduct Authority (FCA) would impose sanctions for misconduct in the financial sector.
5. Brazil
Brazil’s Congress has been closely following global discussions on AI and has sought to develop its own regulatory framework. The aim is to position the country as a leader in AI within Latin America while balancing innovation and security.
Although several bills are under consideration, the primary one is Bill #2338/2023 (originating from Bill #21/2020), which proposes a legal framework for the development and use of artificial intelligence in Brazil. Currently under review in the Senate, the bill draws inspiration from international models, especially the EU's AI Act. The main points of Bill #2338/2023 are:
Main Points of Bill #2338/2023
Risk-based approach: similar to the EU's AI Act, it classifies AI systems into risk levels (excessive, high, medium, and low), with obligations proportionate to the risk. An “excessive risk” system would be prohibited, while a “high risk” system would require rigorous algorithmic impact assessments and compliance measures.
Data Subject Rights: guarantees rights such as the right to an explanation for automated decisions (especially those that significantly affect the individual), the right to non-discrimination, and the right to human review. These rights complement those already provided by Brazil's General Data Protection Law (LGPD).
Ethical Principles: establishes principles such as transparency, explainability, security, non-discrimination, privacy, governance, and accountability to guide the entire AI lifecycle.
Governance and Accountability: proposes mechanisms for oversight and accountability for harm caused by AI systems, including the creation of a new regulatory body or the assignment of these powers to existing agencies.
Compliance Assessment: for high-risk systems, requires a prior compliance assessment, including bias testing, security audits, and the recording of AI activities.
Data Requirements: focuses on the quality, representativeness, and legality of the data used for training, validation, and testing, aiming to mitigate biases and ensure privacy.
Regulatory Sandbox: provides for experimental regulatory environments to test AI innovations under supervision, allowing new technologies to be developed in a controlled and safe manner.
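As one concrete example of the bias testing contemplated in the compliance assessment above, the sketch below computes a disparate-impact ratio between two groups' approval rates. The 0.8 threshold is a common rule of thumb borrowed from employment-discrimination practice, not a value prescribed by the bill.

```python
def disparate_impact_ratio(approved_a: int, total_a: int,
                           approved_b: int, total_b: int) -> float:
    """Ratio of approval rates between a protected group (a) and a reference group (b)."""
    return (approved_a / total_a) / (approved_b / total_b)

# Hypothetical credit-approval outcomes for two groups:
ratio = disparate_impact_ratio(approved_a=120, total_a=400, approved_b=300, total_b=600)
if ratio < 0.8:  # "four-fifths" rule of thumb -- an assumption, not a threshold set by the bill
    print(f"Potential adverse impact: ratio = {ratio:.2f}")
```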
The Ministry of Science, Technology, and Innovation (MCTI) has promoted discussions and developed the Brazilian Artificial Intelligence Strategy (EBIA), which defines principles and guidelines for AI development, focusing on research, talent training, and application in strategic sectors.
The Brazilian Data Protection Authority (ANPD), responsible for enforcing the General Data Protection Law (LGPD), already operates at the intersection of AI and data protection, monitoring compliance with principles such as purpose limitation, necessity, and security in AI systems that process personal data. Other sectoral agencies (e.g., the Central Bank for finance, the CVM for capital markets, ANATEL for telecommunications) are also beginning to develop specific guidelines for AI in their respective domains.
Companies in Brazil are advised to begin mapping their AI systems to identify the associated risk levels (e.g., high risk for HR or credit systems) and to review their data governance and algorithm development practices. Conducting AI risk assessments and maintaining an inventory of AI systems are essential first steps; a minimal sketch of such an inventory follows.
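The sketch below shows one possible shape for that inventory: a structured record per system with an assigned risk level and mitigations. All field names and example entries are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystemRecord:
    """One inventory entry per AI system (illustrative fields, not a legal template)."""
    name: str
    business_owner: str
    purpose: str
    risk_level: str                      # e.g., "high" for HR or credit decisions
    uses_personal_data: bool
    last_risk_assessment: Optional[str] = None
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("resume-screener", "HR", "shortlist job applicants",
                   risk_level="high", uses_personal_data=True,
                   mitigations=["human review of rejections", "annual bias testing"]),
    AISystemRecord("support-chatbot", "Customer Service", "answer frequent questions",
                   risk_level="low", uses_personal_data=False),
]

high_risk = [s.name for s in inventory if s.risk_level == "high"]
print(high_risk)
```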
Companies will need to implement robust methodologies to identify, assess, and mitigate risks associated with AI systems, including algorithmic biases that could lead to discrimination or unfair decisions. Rigorous testing and post-deployment monitoring will be essential.
Finally, it will be necessary to establish or strengthen AI governance structures, including internal policies, ethics committees, training for development and operations teams, and processes for auditing and the ongoing monitoring of AI systems.