European Council Approves Legislation on Artificial Intelligence
May 29, 2024
In contrast to Mercosur, which struggles to evolve from an economic bloc into a political and monetary union as alternating conservative and progressive governments hamper its socio-political integration, the European Union continues to integrate its member countries effectively. Despite the United Kingdom's departure, the EU remains committed to regulating fundamental issues raised by the adoption of new technologies that will profoundly affect personal, educational, and professional relationships. The most prominent of these is Artificial Intelligence (AI).
Artificial Intelligence (AI) is the use of digital technology to create systems capable of performing tasks that normally require human intelligence. While some AI technologies have been around for a few decades, recent developments in computer technology, the availability of vast data sets, and the proliferation of new software have accelerated progress significantly.
AI is already woven into various aspects of our daily lives. It predicts natural disasters, provides virtual assistance, translates texts automatically, ensures quality control in manufacturing, aids medical diagnoses, and guides vehicle navigation. And that’s not all: new tools are emerging to assist individuals and organizations with their most diverse needs. Among these tools are ChatGPT, Copilot, Gemini, and Claude, each introducing fresh features almost daily.
On May 21, 2024, the European Council achieved a significant milestone by approving the Artificial Intelligence Act. This groundbreaking legislation harmonizes rules related to AI across the European Union’s Member States. However, it was no easy feat. Some European Union countries, particularly Germany and France, home to several AI start-ups, have advocated self-regulation over government-imposed restrictions. Their concern is that overly stringent regulations could hinder Europe's ability to compete with Chinese and American companies in the rapidly advancing technology sector.
Timeline of the Artificial Intelligence Act
In fact, discussions about AI within the European Council began as early as 2020, but it was in 2021 that the Artificial Intelligence Act was first proposed. The legislation categorizes AI technologies by risk level, ranging from “unacceptable” (leading to an outright ban) through high-risk and medium-risk down to low-risk. The European Union has explicitly prohibited AI systems that engage in cognitive-behavioral manipulation, social scoring, or profile-based predictive policing, as well as systems that use biometric data to categorize individuals by factors such as race, religion, or sexual orientation. These systems are considered too risky.
Below is the timeline of the Artificial Intelligence Act:
2020: discussions on regulating AI begin within the European Council.
2021: the European Commission presents the proposal for the Artificial Intelligence Act, built on a risk-based classification.
December 2023: the European Parliament and the Council reach a provisional political agreement on the text.
March 13, 2024: the European Parliament adopts the Act.
May 21, 2024: the European Council gives the Act its final approval.
Risk Classification of the Artificial Intelligence Act
Below are some examples based on the risk classification:
Unacceptable risk: social scoring; facial recognition in certain circumstances.
High risk: use in transport, exam scheduling, recruitment, or granting loans in certain circumstances.
Medium risk: chatbots (conversational robots).
Low risk: video games, anti-spam filters.
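For readers who prefer to reason about the tiers programmatically, here is a minimal, purely illustrative Python sketch; the tier names and the example mapping mirror the list above, but under the Act the actual classification always depends on the specific context in which a system is used:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (registration, conformity assessment)"
    MEDIUM = "transparency obligations"
    LOW = "not regulated by the Act"

# Illustrative mapping of the article's examples to tiers; in practice the
# classification turns on how and where a given system is deployed.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "chatbot": RiskTier.MEDIUM,
    "anti-spam filter": RiskTier.LOW,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```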
It is important to note that the Artificial Intelligence Act will not oversee or impact AI systems deemed low-risk. Medium-risk systems, those posing only limited risks, will face light transparency requirements, including the obligation to disclose when content was generated by AI so that users can make informed decisions about its subsequent use. Systems designed for direct human interaction must be clearly labeled, unless it is evident to users that they are engaging with AI.
To grasp the concepts of ‘unacceptable’ and ‘high-risk’ more clearly, please refer to the detailed explanation that follows.
Unacceptable risk
1. The following AI practices shall be prohibited:
(a) the placing on the market, the putting into service or
the use of an AI system that deploys subliminal
techniques beyond a person’s consciousness or
purposefully manipulative or deceptive techniques, with the
objective, or the effect of materially distorting the
behavior of a person or a group of persons by appreciably
impairing their ability to make an informed decision,
thereby causing them to take a decision that they would not
have otherwise taken in a manner that causes or is
reasonably likely to cause that person, another person or
group of persons significant harm;
(b) the placing on the market, the putting into service or
the use of an AI system that exploits any of the
vulnerabilities of a natural person or a specific group
of persons due to their age, disability or a specific social
or economic situation, with the objective, or the effect, of
materially distorting the behavior of that person or a
person belonging to that group in a manner that causes or
is reasonably likely to cause that person or another
person significant harm;
(c) the placing on the market, the putting into service or
the use of AI systems for the evaluation or
classification of natural persons or groups of persons
over a certain period of time based on their social behavior
or known, inferred or predicted personal or personality
characteristics, with the social score leading to either
or both of the following:
(i) detrimental or unfavorable treatment of certain natural
persons or groups of persons in social contexts that are
unrelated to the contexts in which the data was
originally generated or collected;
(ii) detrimental or unfavorable treatment of certain natural
persons or groups of persons that is unjustified or
disproportionate to their social behavior or its gravity;
(d) the placing on the market, the putting into service for
this specific purpose, or the use of an AI system for
making risk assessments of natural persons in order to
assess or predict the risk of a natural person committing
a criminal offense, based solely on the profiling of a
natural person or on assessing their personality traits
and characteristics; this prohibition shall not apply to
AI systems used to support the human assessment of the
involvement of a person in a criminal activity, which is
already based on objective and verifiable facts directly
linked to a criminal activity;
(e) the placing on the market, the putting into service for
this specific purpose, or the use of AI systems that
create or expand facial recognition databases through the
untargeted scraping of facial images from the internet or
CCTV footage;
(f) the placing on the market, the putting into service for
this specific purpose, or the use of AI systems to infer
emotions of a natural person in the areas of workplace
and education institutions, except where the use of the
AI system is intended to be put in place or into the market
for medical or safety reasons;
(g) the placing on the market, the putting into service for
this specific purpose, or the use of biometric
categorization systems that categorize individually
natural persons based on their biometric data to deduce or
infer their race, political opinions, trade union
membership, religious or philosophical beliefs, sex life
or sexual orientation; this prohibition does not cover
any labeling or filtering of lawfully acquired biometric
datasets, such as images, based on biometric data or
categorizing of biometric data in the area of law
enforcement;
(h) the use of ‘real-time’ remote biometric identification
systems in publicly accessible spaces for the purposes of
law enforcement, unless and in so far as such use is
strictly necessary for one of the following objectives:
(i) the targeted search for specific victims of abduction,
trafficking in human beings or sexual exploitation of
human beings, as well as the search for missing persons;
(ii) the prevention of a specific, substantial and imminent
threat to the life or physical safety of natural persons
or a genuine and present or genuine and foreseeable
threat of a terrorist attack;
(iii) the localization or identification of a person
suspected of having committed a criminal offense, for the
purpose of conducting a criminal investigation or
prosecution or executing a criminal penalty for offenses
referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. Point (h) of the first
subparagraph is without prejudice to Article 9 of
Regulation (EU) 2016/679 for the processing of biometric
data for purposes other than law enforcement.
2. The use of ‘real-time’ remote biometric identification
systems in publicly accessible spaces for the purposes of
law enforcement for any of the objectives referred to in
paragraph 1, first subparagraph, point (h), shall be
deployed for the purposes set out in that point only to
confirm the identity of the specifically targeted
individual, and it shall take into account the following
elements:
(a) the nature of the situation giving rise to the possible
use, in particular the seriousness, probability and scale
of the harm that would be caused if the system were not
used;
(b) the consequences of the use of the system for the rights
and freedoms of all persons concerned, in particular the
seriousness, probability and scale of those consequences.
In addition, the use of ‘real-time’ remote biometric
identification systems in publicly accessible spaces for the
purposes of law enforcement for any of the objectives
referred to in paragraph 1, first subparagraph, point
(h), of this Article shall comply with necessary and
proportionate safeguards and conditions in relation to the
use in accordance with the national law authorizing the
use thereof, in particular as regards the temporal,
geographic and personal limitations. The use of the
‘real-time’ remote biometric identification system in
publicly accessible spaces shall be authorized only if
the law enforcement authority has completed a fundamental
rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of
such systems may be commenced without the registration in
the EU database, provided that such registration is
completed without undue delay.
3. For the purposes of paragraph 1, first subparagraph,
point (h) and paragraph 2, each use for the purposes of
law enforcement of a ‘real-time’ remote biometric
identification system in publicly accessible spaces shall be
subject to a prior authorization granted by a judicial
authority or an independent administrative authority
whose decision is binding of the Member State in which
the use is to take place, issued upon a reasoned request and
in accordance with the detailed rules of national law
referred to in paragraph 5. However, in a duly justified
situation of urgency, the use of such system may be
commenced without an authorization provided that such
authorization is requested without undue delay, at the
latest within 24 hours. If such authorization is
rejected, the use shall be stopped with immediate effect
and all the data, as well as the results and outputs of that
use shall be immediately discarded and deleted. The
competent judicial authority or an independent
administrative authority whose decision is binding shall
grant the authorization only where it is satisfied, on the
basis of objective evidence or clear indications presented
to it, that the use of the ‘real-time’ remote biometric
identification system concerned is necessary for, and
proportionate to, achieving one of the objectives
specified in paragraph 1, first subparagraph, point (h), as
identified in the request and, in particular, remains
limited to what is strictly necessary concerning the
period of time as well as the geographic and personal
scope. In deciding on the request, that authority shall
take into account the elements referred to in paragraph
2. No decision that produces an adverse legal effect on a
person may be taken based solely on the output of the
‘real-time’ remote biometric identification system.
4. Without prejudice to paragraph 3, each use of a
‘real-time’ remote biometric identification system in
publicly accessible spaces for law enforcement purposes
shall be notified to the relevant market surveillance
authority and the national data protection authority in
accordance with the national rules referred to in
paragraph 5. The notification shall, as a minimum,
contain the information specified under paragraph 6 and
shall not include sensitive operational data.
5. A Member State may decide to provide for the possibility
to fully or partially authorize the use of ‘real-time’
remote biometric identification systems in publicly
accessible spaces for the purposes of law enforcement
within the limits and under the conditions listed in
paragraph 1, first subparagraph, point (h), and
paragraphs 2 and 3. Member States concerned shall lay down
in their national law the necessary detailed rules for
the request, issuance and exercise of, as well as
supervision and reporting relating to, the authorizations
referred to in paragraph 3. Those rules shall also specify
in respect of which of the objectives listed in paragraph
1, first subparagraph, point (h), including which of the
criminal offenses referred to in point (h)(iii) thereof,
the competent authorities may be authorized to use those
systems for the purposes of law enforcement. Member
States shall notify those rules to the Commission at the
latest 30 days following the adoption thereof. Member
States may introduce, in accordance with Union law, more
restrictive laws on the use of remote biometric
identification systems.
6. National market surveillance authorities and the national
data protection authorities of Member States that have
been notified of the use of ‘real-time’ remote biometric
identification systems in publicly accessible spaces for
law enforcement purposes pursuant to paragraph 4 shall
submit to the Commission annual reports on such use.
For that purpose, the Commission shall provide Member States
and national market surveillance and data protection
authorities with a template, including information on the
number of the decisions taken by competent judicial
authorities or an independent administrative authority whose
decision is binding upon requests for authorizations in
accordance with paragraph 3 and their result.
7. The Commission shall publish annual reports on the use of
real-time remote biometric identification systems in
publicly accessible spaces for law enforcement purposes,
based on aggregated data in Member States on the basis of
the annual reports referred to in paragraph 6. Those annual
reports shall not include sensitive operational data of
the related law enforcement activities.
High risk
1. Biometrics, in so far as their use is permitted under relevant Union or national law:
(a) remote biometric identification systems. This shall not
include AI systems intended to be used for biometric
verification the sole purpose of which is to confirm that a
specific natural person is the person he or she claims to
be;
(b) AI systems intended to be used for biometric
categorization, according to sensitive or protected
attributes or characteristics based on the inference of
those attributes or characteristics;
(c) AI systems intended to be used for emotion recognition.
2. Critical infrastructure: AI systems
intended to be used as safety components in the management
and operation of critical digital infrastructure, road
traffic, or in the supply of water, gas, heating or
electricity.
3. Education and vocational training:
(a) AI systems intended to be used to determine access or
admission or to assign natural persons to educational and
vocational training institutions at all levels;
(b) AI systems intended to be used to evaluate learning
outcomes, including when those outcomes are used to steer
the learning process of natural persons in educational
and vocational training institutions at all levels;
(c) AI systems intended to be used for the purpose of
assessing the appropriate level of education that an
individual will receive or will be able to access, in the
context of or within educational and vocational training
institutions at all levels;
(d) AI systems intended to be used for monitoring and
detecting prohibited behavior of students during tests in
the context of or within educational and vocational
training institutions at all levels.
4. Employment, workers management and access to self-employment:
(a) AI systems intended to be used for the recruitment or
selection of natural persons, in particular to place
targeted job advertisements, to analyze and filter job
applications, and to evaluate candidates;
(b) AI systems intended to be used to make decisions
affecting terms of work-related relationships, the
promotion or termination of work-related contractual
relationships, to allocate tasks based on individual
behavior or personal traits or characteristics or to
monitor and evaluate the performance and behavior of
persons in such relationships.
5. Access to and enjoyment of essential private services and essential public services and benefits:
(a) AI systems intended to be used by public authorities or
on behalf of public authorities to evaluate the
eligibility of natural persons for essential public
assistance benefits and services, including healthcare
services, as well as to grant, reduce, revoke, or reclaim
such benefits and services;
(b) AI systems intended to be used to evaluate the
creditworthiness of natural persons or establish their
credit score, with the exception of AI systems used for
the purpose of detecting financial fraud;
(c) AI systems intended to be used for risk assessment and
pricing in relation to natural persons in the case of
life and health insurance;
(d) AI systems intended to evaluate and classify emergency
calls by natural persons or to be used to dispatch, or to
establish priority in the dispatching of, emergency first
response services, including by police, firefighters and
medical aid, as well as of emergency healthcare patient
triage systems.
6. Law enforcement, in so far as their use is permitted under relevant Union or national law:
(a) AI systems intended to be used by or on behalf of law
enforcement authorities, or by Union institutions,
bodies, offices or agencies in support of law enforcement
authorities or on their behalf to assess the risk of a
natural person becoming the victim of criminal offenses;
(b) AI systems intended to be used by or on behalf of law
enforcement authorities or by Union institutions, bodies,
offices or agencies in support of law enforcement
authorities as polygraphs or similar tools;
(c) AI systems intended to be used by or on behalf of law
enforcement authorities, or by Union institutions,
bodies, offices or agencies, in support of law
enforcement authorities to evaluate the reliability of
evidence in the course of the investigation or prosecution
of criminal offenses;
(d) AI systems intended to be used by law enforcement
authorities or on their behalf or by Union institutions,
bodies, offices or agencies in support of law enforcement
authorities for assessing the risk of a natural person
offending or re-offending not solely on the basis of the
profiling of natural persons as referred to in Article
3(4) of Directive (EU) 2016/680, or to assess personality
traits and characteristics or past criminal behavior of
natural persons or groups;
(e) AI systems intended to be used by or on behalf of law
enforcement authorities or by Union institutions, bodies,
offices or agencies in support of law enforcement
authorities for the profiling of natural persons as
referred to in Article 3(4) of Directive (EU) 2016/680 in
the course of the detection, investigation or prosecution
of criminal offenses.
7. Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law:
(a) AI systems intended to be used by or on behalf of
competent public authorities or by Union institutions,
bodies, offices or agencies as polygraphs or similar
tools;
(b) AI systems intended to be used by or on behalf of
competent public authorities or by Union institutions,
bodies, offices or agencies to assess a risk, including a
security risk, a risk of irregular migration, or a health
risk, posed by a natural person who intends to enter or who
has entered into the territory of a Member State;
(c) AI systems intended to be used by or on behalf of
competent public authorities or by Union institutions,
bodies, offices or agencies to assist competent public
authorities for the examination of applications for asylum,
visa or residence permits and for associated complaints with
regard to the eligibility of the natural persons applying
for a status, including related assessments of the
reliability of evidence;
(d) AI systems intended to be used by or on behalf of
competent public authorities, or by Union institutions,
bodies, offices or agencies, in the context of migration,
asylum or border control management, for the purpose of
detecting, recognizing or identifying natural persons, with
the exception of the verification of travel documents.
8. Administration of justice and democratic processes:
(a) AI systems intended to be used by a judicial authority
or on their behalf to assist a judicial authority in
researching and interpreting facts and the law and in
applying the law to a concrete set of facts, or to be used
in a similar way in alternative dispute resolution;
(b) AI systems intended to be used for influencing the
outcome of an election or referendum or the voting
behavior of natural persons in the exercise of their vote
in elections or referenda. This does not include AI systems
to the output of which natural persons are not directly
exposed, such as tools used to organize, optimize or
structure political campaigns from an administrative or
logistical point of view.
The Artificial Intelligence Act also covers the use of general-purpose AI (GPAI) models. GPAI models free from systemic risks will only need to meet certain basic requirements, such as those related to transparency. However, models identified with systemic risks must adhere to stricter regulations.
For developers and implementers, if their technology is categorized as unacceptable, they must halt its development immediately. On the other hand, those dealing with high-risk technologies must be ready to fulfill the Artificial Intelligence Act’s stringent criteria:
1. Enroll in the European Union's centralized database.
2. Establish a quality management system that aligns with the Act.
3. Keep comprehensive documentation and records.
4. Undergo necessary compliance assessments.
5. Abide by the limitations on employing high-risk AI systems.
6. Maintain ongoing regulatory compliance and be prepared to prove it when asked.
CNBC’s thorough report on AI regulation highlights governmental concerns about deepfakes — AI-generated falsifications like photos and videos — potentially disrupting this year’s key global elections.
Establishing Artificial Intelligence Governance
The European Union Artificial Intelligence Office was recently established to play a pivotal role in implementing the Artificial Intelligence Act. It will support Member States' governance bodies, ensuring that the rules for general-purpose AI models are applied effectively. The Act empowers the Office to evaluate these models, demand information and corrective actions from providers, and impose penalties when necessary. The Office is also tasked with fostering a trusted AI ecosystem, which is expected to yield social and economic benefits. This initiative aims to position the EU as a strategic, unified, and influential global leader in AI.
Supporting the Office’s regulatory efforts is a scientific panel of independent experts.
Additionally, the Artificial Intelligence Council, comprising Member State representatives, has been formed to provide guidance and support to the European Commission and Member States for a consistent and robust application of the Act.
Finally, a consultative forum has been set up to allow stakeholders, including individuals and organizations, to offer technical advice to the Artificial Intelligence Council and the European Commission.
Penalties for Non-Compliance with the Artificial Intelligence Act
The Artificial Intelligence Act stipulates fines for non-compliance of up to 7% of the offending company’s global annual turnover for the previous financial year or a fixed sum of €35,000,000, whichever is higher. For SMEs and start-ups, the same two caps apply, but the fine is limited to whichever amount is lower.
Moreover, providing incorrect, incomplete, or misleading information to notified bodies or competent national authorities in response to a request is subject to administrative fines of up to €7,500,000 or, if the offender is a company, up to 1% of its total global annual turnover for the previous financial year, whichever is higher.
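To make the arithmetic concrete, here is a small illustrative Python sketch of the higher-of rule described above (a simplification for exposition only; actual fines are set case by case by the competent authorities):

```python
def fine_cap(turnover_eur: float, pct: float = 0.07, fixed: float = 35_000_000) -> float:
    """Upper bound on a fine under the higher-of rule: max(pct * turnover, fixed).
    For SMEs and start-ups the Act applies whichever of the two is LOWER instead."""
    return max(pct * turnover_eur, fixed)

# EUR 1 billion turnover: 7% = EUR 70 million, which exceeds the EUR 35 million sum.
print(fine_cap(1_000_000_000))  # 70000000.0
# EUR 100 million turnover: 7% = EUR 7 million, so the EUR 35 million sum governs.
print(fine_cap(100_000_000))    # 35000000.0
```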
For the assessment of the penalty, the following parameters must be considered:
1. The nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them.
2. Whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement.
3. Whether administrative fines have already been applied by other authorities to the same operator for infringements of other Union or national law, when such infringements result from the same activity or omission constituting a relevant infringement of this Regulation.
4. The size, the annual turnover and market share of the operator committing the infringement.
5. Any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement.
6. The degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the possible adverse effects of the infringement.
7. The degree of responsibility of the operator taking into account the technical and organizational measures implemented by it.
8. The manner in which the infringement became known to the national competent authorities, in particular whether, and if so to what extent, the operator notified the infringement.
9. The intentional or negligent character of the infringement.
10. Any action taken by the operator to mitigate the harm suffered by the affected persons.