Our four-part guide to EU AI Act compliance explores what North American organizations need to know about the upcoming legal obligations when rolling out certain artificial intelligence (AI) technologies under the EU’s landmark AI Act.
If your organization operates in or serves EU markets and uses AI-driven chatbots to handle customer inquiries, develops predictive algorithms for credit risk, or uses image recognition software, the EU’s AI Act may impact how you handle data.
Understanding the requirements of the AI Act and what will apply to your organization is crucial for compliance.
In our four-part blog series, we cover:
- Timeline and deadlines
- What constitutes a high-risk activity?
- Who has to comply with the AI Act?
- Strategies for achieving AI Act compliance
EU AI Act compliance part 1: Timeline and important deadlines
The AI Act was approved by the Council of the EU in May 2024. It has a phased implementation schedule extending over several years, designed to give organizations time to make the necessary changes for compliance.
The new legislation applies to public and private organizations operating in the EU that develop, deploy, or use AI systems in the EU’s single market. For North American organizations, this includes companies doing business in the EU or providing AI-powered services to EU customers, as well as institutions, government bodies, research organizations and any others involved in AI-related activities that impact EU markets.
How the AI Act and the GDPR work together
David Smith, DPO and AI Sector Lead, explains:
‘In many cases the AI Act and the GDPR will complement each other. The AI Act is essentially product safety legislation designed to ensure the responsible and non-harmful deployment of AI systems. The GDPR is a principles-based law, protecting fundamental human privacy rights.’
When did the AI Act come into force?
The AI Act’s finalized text was published in the Official Journal of the European Union on July 12, 2024. It officially entered into force 20 days after publication, on August 1, 2024, with most of its provisions applying from August 2, 2026.
The graphic below details the timeline, including some additional and earlier deadlines for specific provisions.

August 1, 2024: The AI Act becomes law
February 2, 2025 (+6 months)
Prohibitions on unacceptable risk AI systems apply six months after the AI Act became law.
Banned AI practices are those deemed to pose unacceptable risks to health and safety or fundamental human rights. We will cover prohibited AI applications in more detail in our next blog.
With the compliance deadline for unacceptable risk AI systems already past, organizations that have not yet evaluated their risk exposure in this area should do so urgently.
May 2, 2025 (+9 months)
The AI Office will finalize its codes of practice, covering the obligations of those who develop and deploy general-purpose AI models. These codes will provide voluntary guidelines for responsible AI development and use.
August 2, 2025 (+12 months)
The rules for providers of General Purpose AI (GPAI) come into effect, and organizations will need to align their practices with them. GPAI refers to advanced AI models that can perform a wide range of tasks. Models trained using more than 10^25 floating-point operations (FLOPs) of compute, such as the largest models behind tools like ChatGPT, are presumed to pose systemic risk and face additional obligations.
In addition, the first European Commission annual review of the list of prohibited AI applications will happen 12 months after the AI Act enters into force.
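As a rough illustration of the compute threshold mentioned above (a sketch only, not a legal classification exercise — the 10^25 FLOPs figure comes from the Act, but the function name and example figures below are hypothetical):

```python
# Illustrative sketch only: the 10**25 FLOPs threshold comes from the
# AI Act's GPAI provisions, but this function and the example training
# compute figures are hypothetical, not a legal test.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 10**25  # cumulative training compute

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True when a GPAI model's cumulative training compute
    exceeds the threshold at which systemic risk is presumed."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A frontier-scale model vs. a much smaller model
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(3e23))  # False
```

In practice, classification under the Act involves far more than this single number, but the threshold gives organizations a quick first-pass filter when inventorying their AI models.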
February 2, 2026 (+18 months)
The European Commission will issue implementing acts for providers of high-risk AI systems. These will set out a standard template for the post-market monitoring plans that providers must follow once their AI systems are deployed.
The monitoring plan will help providers identify and address any issues or risks promptly.
August 2, 2026 (+24 months)
The remainder of the AI Act will apply, including regulations on high-risk AI systems listed in Annex III* of the AI Act. These systems include those related to biometrics and cover technologies such as fingerprint recognition, facial recognition, iris scanning and voice authentication.
We cover high-risk AI systems in more detail in our next blog.
EU Artificial Intelligence Act Annex III
August 2, 2027 (+36 months)
Regulations for high-risk AI systems stipulated in Annex I** become effective. These are AI systems that act as safety components of products already covered by the EU product legislation listed in Annex I, such as machinery, toys and medical devices.
EU Artificial Intelligence Act Annex I
By the end of 2030
A small number of exceptions apply: AI components of certain complex, large-scale public sector IT systems benefit from a longer compliance timeline, running until the end of 2030.
Coming up next…
EU AI Act compliance part 2: What is ‘high-risk’ activity?
The second blog in our four-part series covers all you need to know about prohibited AI applications and what is categorized as a high-risk activity – stay tuned!
In the meantime, should you require any advice on EU or UK data protection, our team of expert DPOs can help. We offer a wide range of outsourced privacy services, including AI Governance support for North American organizations operating in EU markets. CONTACT US
For more privacy updates and breaking news, sign up for our fortnightly newsletter.

____________________________________________________________________________________________________________
In case you missed it…
- How GDPR territorial scope impacts North American businesses
- GDPR advice for SaaS companies expanding into EU and UK markets
- GDPR Representative: Do you need one?
____________________________________________________________________________________________________________
For more news and insights about data protection follow The DPO Centre on LinkedIn