In the second part of our four-part guide to EU AI Act compliance for North American organizations, we explore the Act’s risk-based approach to classifying AI systems. What applications are prohibited, what constitutes ‘high-risk’ activity, and what systems are exempt?
For details of the AI Act’s timeline and deadlines for its phased implementation, see Part 1 of our blog series – EU AI Act compliance part 1: Timeline and important deadlines
Understanding AI risk categories
The EU AI Act’s risk-based approach to classifying AI systems aims to balance innovation with regulation, preventing harm to health and safety and protecting fundamental rights. By assessing risk, the legislation recognizes that not all AI systems pose the same level of threat, and that varying levels of control and oversight are required.
AI systems are categorized into different risk levels based on their potential impact, with the burden of compliance increasing proportionate to the risk.
These are the three main categories:
- Prohibited
- High risk
- Low risk
For Canadian and US organizations, these categories apply to any AI systems that affect EU residents or markets, no matter where the system is developed or operated.
Prohibited systems
AI applications in this category are banned due to their unacceptable potential for negative consequences.
High-risk systems
These systems have a significant impact on people’s safety, wellbeing and rights, so are subject to stricter requirements.
Low-risk systems
These systems pose minimal dangers, so have fewer compliance obligations.
AI applications prohibited by the Act
The prohibitions on unacceptable-risk AI systems took effect on February 2, 2025 (see the phased implementation timeline here).
The European Commission will regularly review the list of prohibited AI applications, with the first review scheduled 12 months after the AI Act came into force.
The table below details the types of AI practices that are prohibited. These techniques and approaches pose unacceptable risks to health and safety or fundamental human rights, and while some of these practices may be permitted under North American regulations, they are prohibited when serving EU markets.
| TYPES OF PROHIBITED AI PRACTICES | DETAILS |
| --- | --- |
| Subliminal, manipulative or deceptive techniques | AI systems that use subliminal, manipulative or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm |
| Exploitation of vulnerabilities | AI systems that exploit vulnerabilities related to a person’s age, disability, or socio-economic circumstances |
| Biometric categorization | AI applications that profile people based on certain sensitive characteristics (broadly aligned with GDPR special category data), such as race, political opinions, religious or philosophical beliefs, or sexual orientation, subject to a narrow set of exceptions |
| Social scoring | AI systems that evaluate or classify individuals or groups based on social behavior or personal traits, leading to detrimental or unfavorable treatment of those people |
| Risk assessment of individuals committing criminal offenses | AI systems used to assess the risk of an individual committing a crime based solely on profiling or personality traits, except where the system supports a human assessment based on objective, verifiable facts directly linked to criminal activity |
| Large-scale facial recognition databases | AI systems using untargeted scraping of facial images from the internet or CCTV footage (with some limited exceptions for law enforcement) |
| Inferring emotions in workplaces or educational institutions | AI systems that infer the emotions of individuals in workplaces or educational institutions, except where used for medical or safety reasons |
| Real-time remote biometric identification (RBI) in public spaces | AI-enabled real-time RBI in publicly accessible spaces is permitted only in narrowly defined situations, and only where not using the tool would cause considerable harm. Before deployment, police must conduct a fundamental rights impact assessment and register the system in the EU database |
What constitutes ‘high-risk’ activity?
Most of the AI Act addresses the regulation of high-risk AI systems, which fall into three distinct categories:
- Standalone AI products already covered by Union product safety laws
- AI safety components
- Designated ‘high-risk’ categories
Let’s explore these high-risk categories in a little more detail:
Standalone AI products
This refers to AI systems that are not a component or feature of a larger product, but rather the product in its entirety. Many of these types of products are already regulated by certain EU harmonization laws. Examples include medical devices, heavy industrial machinery, cars, and toys. These are listed in Annex I of the AI Act.
If you develop or deploy AI systems in a sector with tightly managed safety legislation, it’s highly likely the system will be covered here, and you should check the contents of Annex I in full.
As these products are already subject to strict safety regulations, they are automatically considered a high-risk category under the AI Act.
AI safety components
This category covers AI systems that are not standalone products but perform safety-related functions within a product, for example where an AI system is used for monitoring, controlling, or managing safety features.
Many of these systems are related to products listed in Annex I of the AI Act, such as industrial machinery, lifts, medical devices, motor vehicles etc.
Designated ‘high-risk’ categories
Certain AI systems not listed in Annex I are also considered high risk.
This defined list includes systems that would significantly impact people’s opportunities and potentially cause systemic bias against certain groups.
These systems fall into 8 broad areas:
Biometrics
Certain biometric processing is entirely prohibited, as detailed above, but all other biometric processing is classified as high risk (with the exception of ID verification of an individual for cybersecurity purposes – for example, Windows Hello and other biometric login systems used in North American workplaces).
Critical infrastructure
- AI systems used as safety components in managing critical digital infrastructure (similar to the list in Annex I) and utility systems – this applies to Canadian and US organizations providing services or infrastructure solutions to EU markets.
Education
- Any AI system determining admissions or evaluating learning outcomes is high risk due to the potential impact on people’s lives, for example the risk of perpetuating historic discrimination against women and ethnic minorities. This includes online learning platforms serving EU students.
Employment & management
- Any AI system used for recruitment, job application analysis, or candidate evaluation is considered high risk (including those used by North American companies hiring for EU operations or processing EU candidate data). Decision-making AI tools used for performance monitoring, work relationships, or termination of employment are also high risk.
Access to essential services
- Systems determining access to essential services, whether public benefits such as unemployment, disability and healthcare support, or private services such as credit scoring. This includes Canadian and US financial institutions providing services to EU customers.
Law enforcement
- Certain tasks are considered high risk, including the use of lie detectors or similar biometric tools to assess testimony, and systems used to assess the likelihood of an individual reoffending.
Immigration
- Systems used to assess the security risk of migrants entering the EU, or to process and evaluate asylum claims. AI systems used to verify ID documents are exempt from this.
Administration of justice and democratic processes
- This includes AI systems used in legal research or in interpreting the law, such as legal databases used by lawyers and judges, as well as systems that could influence voting behavior, such as those used to target political advertising.
Exceptions to high-risk and prohibited AI systems
The AI Act exempts certain AI systems otherwise considered high risk or prohibited.
Exemptions from the prohibitions apply most notably to research and national security purposes.
High-risk system exemptions can apply if the AI system:
- Performs only a narrow procedural task
- Improves on the result of a previously completed human activity
- Detects or monitors bias or other patterns in decision-making, but doesn’t replace human decision-making and is subject to human review
- Is used for a preparatory task relevant to the assessment of an otherwise high-risk task, i.e. you can use AI to help you assess your use case (a simple way to record these checks internally is sketched below)
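For organizations that already maintain an AI system inventory, one practical way to work with these criteria is to record them as a structured checklist that flags systems for closer legal review. The Python sketch below is purely illustrative: the `ExemptionChecklist` structure and its field names are hypothetical assumptions of ours, not part of the AI Act or any official tool, and a positive result is only a prompt for legal assessment, never a determination of exemption.

```python
from dataclasses import dataclass

@dataclass
class ExemptionChecklist:
    """Illustrative triage record for one AI system (hypothetical field names)."""
    narrow_procedural_task: bool          # performs only a narrow procedural task
    improves_prior_human_activity: bool   # improves the result of a completed human activity
    detects_bias_with_human_review: bool  # detects/monitors bias without replacing human decisions
    preparatory_task_only: bool           # preparatory to an otherwise high-risk assessment

def may_qualify_for_exemption(item: ExemptionChecklist) -> bool:
    """Return True if any exemption criterion is marked as met.

    A True result only flags the system for closer legal review;
    it is not a compliance determination.
    """
    return any([
        item.narrow_procedural_task,
        item.improves_prior_human_activity,
        item.detects_bias_with_human_review,
        item.preparatory_task_only,
    ])

# Example: a hypothetical CV-screening assistant that only pre-fills form fields
cv_helper = ExemptionChecklist(
    narrow_procedural_task=True,
    improves_prior_human_activity=False,
    detects_bias_with_human_review=False,
    preparatory_task_only=False,
)
print(may_qualify_for_exemption(cv_helper))  # True -> escalate for legal review
```

However such a checklist is recorded, the final classification decision should rest with qualified legal or data protection advisors, since the exemptions turn on how a system is actually used in context.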
What this means for organizations using high-risk AI systems
High-risk AI systems supplied to the EU or affecting EU residents require thorough risk and security assessments, and may need EU registration and third-party evaluation. There are also substantial transparency obligations: users must be clearly informed about how an AI system functions and is deployed. For Canadian and US organizations, this often means conducting additional risk assessments beyond those required by domestic regulations, and those operating globally may need to maintain different AI system configurations for EU and non-EU markets.
If you need advice on ensuring your organization’s AI systems comply with EU requirements, while maintaining efficient operations across North American and European markets, please contact our specialized DPO team.
EU AI Act compliance part 3: Scope and obligations
Coming next, in part 3 of our blog series, we cover the obligations of the AI Act in more detail, including who the AI Act applies to and what is required.
Don’t miss out on the latest data protection updates – stay informed with our fortnightly newsletter, The DPIA.

____________________________________________________________________________________________________________
In case you missed it…
- EU AI Act compliance part 1: Timeline and important deadlines
- How GDPR territorial scope impacts North American businesses
- GDPR advice for SaaS companies expanding into EU and UK markets
____________________________________________________________________________________________________________
For more news and insights about data protection follow The DPO Centre on LinkedIn