What is the EU AI Act
The EU AI Act is Regulation (EU) 2024/1689 of the European Parliament and of the Council, published in the Official Journal of the European Union in July 2024 and in force since 1 August 2024. It is the world's first comprehensive legal framework for artificial intelligence.
The regulation applies to companies that develop, provide, deploy or use AI systems in the European Union, and also to providers outside the EU whose systems are placed on the EU market. It does not matter whether the company is a technology provider or simply a user: if you use AI, you are covered.
The logic of the regulation is straightforward: the greater the risk an AI system poses to people, the stricter the rules. Low-risk systems have few obligations. High-risk systems have many. Some are prohibited altogether.
Key deadlines you need to know
The EU AI Act does not come into force all at once. The regulation entered into force on 1 August 2024, but its obligations apply in phases:
- 2 February 2025: The AI literacy obligation (Article 4) and the prohibitions on unacceptable AI practices (Article 5) come into force. All companies using AI must ensure that the personnel involved have a sufficient level of knowledge about AI, and prohibited systems must be discontinued. These obligations are already active.
- 2 August 2025: The rules for general-purpose AI models and the governance and penalties framework come into force.
- 2 August 2026: General application of the regulation, including most rules for high-risk systems. National market surveillance authorities begin enforcement and can impose penalties. High-risk AI embedded in products covered by existing EU product legislation has until 2 August 2027.
Important note: The AI literacy obligation (Article 4) has been in force since 2 February 2025, yet many companies have not acted on it. Enforcement by national authorities begins on 2 August 2026.
What your company needs to do
Regardless of size, there are four fundamental steps to prepare your company:
1. Map AI usage across the company
The first step is knowing what AI systems the company actually uses. This includes obvious tools like ChatGPT or Copilot, but also AI features embedded in software you already use — CRMs, email marketing platforms, accounting tools, HR software.
Many companies use more AI than they think. A complete inventory is essential before any other action.
2. Classify the risk level
With the inventory done, each AI system must be classified according to the regulation's 4 risk levels (see next section). The classification determines which obligations apply. Most common SME tools fall under the minimal or limited levels, but systems used in recruitment, credit or healthcare may be high risk.
3. Train your team
Article 4 requires both providers and deployers of AI to ensure a sufficient level of AI literacy for the personnel who operate or are affected by these systems. Training must take into account technical knowledge, experience, academic background, context of use and the persons affected by the systems.
A generic online course is not enough. The regulation is clear: training must be appropriate to the actual context of use within the company.
4. Document everything
Compliance is about proof. Document what AI systems you use, how you classified them, what training you gave your team, when and to whom. If you are audited, you need to demonstrate that you assessed the risks and took proportional measures.
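As an illustration of what steps 1, 2 and 4 produce, a company's AI inventory can be kept as a set of simple records: one per system, with its risk classification and training evidence attached. The sketch below is purely illustrative; the field names and the `RiskLevel` labels are our own shorthand for the regulation's four levels, not terms prescribed by the EU AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four risk levels of the EU AI Act (illustrative labels).
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# One record per AI system in the company inventory.
# Field names are hypothetical, chosen for this example.
@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str
    risk_level: RiskLevel
    trained_staff: list = field(default_factory=list)  # AI literacy evidence (Article 4)

def high_risk_systems(inventory):
    """Return the systems that carry the strictest obligations."""
    return [s for s in inventory if s.risk_level is RiskLevel.HIGH]

inventory = [
    AISystemRecord("ChatGPT", "OpenAI", "drafting text",
                   RiskLevel.MINIMAL, ["ana", "rui"]),
    AISystemRecord("CV screener", "ExampleHR", "recruitment shortlisting",
                   RiskLevel.HIGH),
]

print([s.name for s in high_risk_systems(inventory)])  # ['CV screener']
```

Even a spreadsheet with the same columns would serve: what matters for an audit is that every system, its classification and the training given are recorded in one place.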
The 4 risk levels
The EU AI Act classifies AI systems into four levels, from most restrictive to most permissive:
Unacceptable Risk
Prohibited systems. These include subliminal manipulation, social scoring, real-time remote biometric identification in publicly accessible spaces (with limited exceptions) and exploitation of vulnerabilities of specific groups.
High Risk
Systems with strict obligations. Examples from the regulation: AI in recruitment and workforce management, credit assessment, critical infrastructure, law enforcement and education.
Limited Risk
Systems with transparency obligations. Examples: chatbots (must inform users they are interacting with AI), systems that generate synthetic content (deepfakes must be identified).
Minimal Risk
The majority of AI systems. No specific obligations beyond general literacy (Article 4). Includes spam filters, product recommendations and AI-powered productivity tools.
The classification is not abstract — it has practical consequences. A high-risk system requires conformity assessments, detailed technical documentation, human oversight and registration with the authorities. A minimal-risk system only requires that your team knows what they are using.
Prohibited practices you should know about
Article 5 of the EU AI Act defines AI practices that are expressly prohibited. The most relevant for businesses:
- Subliminal or deceptive manipulation — AI systems that use subliminal or purposefully manipulative techniques to distort a person's behaviour in a way that causes or is likely to cause significant harm
- Exploitation of vulnerabilities — systems that take advantage of a person's age, disability or social or economic situation to distort their behaviour
- Social scoring — evaluation or classification of persons based on their social behaviour or personal characteristics, where this leads to disproportionate detrimental treatment
- Real-time remote biometric identification — in publicly accessible spaces for law enforcement purposes, except for narrowly defined exceptions provided in the regulation
The prohibition of these practices has been in force since 2 February 2025. If your company uses any system that may fall into these categories, it must be discontinued immediately.
Need help preparing your company?
D'One provides the full assessment: we map AI usage across your company, classify the risks, train your team and prepare the necessary documentation. Everything tailored to the actual context of your business.
Get in touch