EU AI Act compliance: five mistakes to avoid
Ervy Team
5 min read
Most HR and L&D teams have seen something about EU AI Act compliance by now – specifically Article 4, the AI literacy obligation that’s been in effect since February 2025 and applies to any company using AI tools. What’s less clear is what it actually requires. The regulation says staff (and that includes contractors and third parties) must have "a sufficient level of AI literacy" and largely leaves the rest to you.
The EU AI Act compliance bar for Article 4 is achievable: there’s no mandated curriculum, no certification, and no exam. The European Commission confirmed in May 2025 guidance that there’s no single right way to do it, so you have a lot of flexibility.
Most organisations, though, are using that flexibility as an excuse to do nothing – or to do the wrong thing. And with full EU AI Act enforcement set for August 2026, that’s about to become a much bigger problem.
We’ve gathered the five mistakes we see most often so you can avoid them (the last one will actually get you in trouble).
Mistake 1: You don't know how much AI you're already using
Most organisations underestimate how many AI tools are active in their environment. When HR and L&D teams think about AI literacy training, they think about the obvious ones: Microsoft Copilot, ChatGPT, maybe an AI-powered analytics dashboard. The actual list is much longer, but nobody is keeping track of it.
AI is embedded in tools your teams use every day without thinking of them as AI. Grammarly, Zoom’s and Microsoft Teams’ meeting summaries, Salesforce Einstein, the automated screening layer in your ATS, AI-generated insights in your finance software – most of these tools have introduced AI features in the last two years, often quietly.
Then there’s shadow AI – employees using ChatGPT, Claude, Perplexity, or other AI tools without IT’s knowledge or approval. A 2024 Microsoft survey found that 78% of AI users at work are bringing their own AI tools rather than waiting for officially sanctioned ones. These tools often process sensitive data: client information, internal documents, financial figures, personal data. Without training, employees have no sense of what’s safe to put in and what isn’t.
The EU AI Act requirements apply to all of these tools as well as the ones IT officially approved – and EU AI Act compliance starts with knowing what you’re actually dealing with.
So before you design a training program, do the inventory. Ask IT for the sanctioned tools and ask department heads what their teams are actually using. Check for AI features embedded in existing software. Include anything used by contractors or external partners on your behalf. You will be surprised how long this list gets.
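If it helps to structure the exercise, the inventory can be kept as a simple list of records. Here’s a minimal sketch in Python – the tool names, field names, and flags are illustrative examples, not a required schema:

```python
# A minimal AI tool inventory: one record per tool, whether IT-sanctioned
# or surfaced by department surveys. All fields here are illustrative.
inventory = [
    {"tool": "Microsoft Copilot", "owner": "IT", "sanctioned": True,
     "users": ["All staff"], "data_handled": ["documents", "email"]},
    {"tool": "ChatGPT", "owner": None, "sanctioned": False,  # shadow AI
     "users": ["Marketing", "Finance"], "data_handled": ["unknown"]},
    {"tool": "ATS screening layer", "owner": "HR", "sanctioned": True,
     "users": ["HR"], "data_handled": ["candidate personal data"]},
]

# Flag the gaps a regulator would ask about first: unsanctioned tools
# and tools whose data handling nobody can account for.
needs_review = [r["tool"] for r in inventory
                if not r["sanctioned"] or "unknown" in r["data_handled"]]
print(needs_review)  # ['ChatGPT']
```

The same three columns – tool, owner, data handled – work just as well as a spreadsheet; the point is that every tool, sanctioned or not, gets a row.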
Mistake 2: The generic course tick-box
The most common response to Article 4 is to buy an off-the-shelf "Introduction to AI" course, assign it to everyone, and consider the job done.
This doesn’t satisfy the EU AI Act compliance requirements.
The regulation expects training grounded in your organisation’s own AI use: the specific tools you deploy, the specific risks those tools carry, and the specific roles of the people using them. A generic course from a vendor’s content library covers none of this. It tells your employees what a large language model is, but not what to do when Copilot hallucinates a figure in a client report or how to flag a concern about your AI-powered recruitment screening tool.
Regulators investigating an AI Act breach will ask what training was in place. “We bought a general AI literacy course” is not the answer they will be looking for. If your AI use policy exists as a PDF, DOCX, or webpage, Ervy can turn it into a course in under seven minutes – lessons and quiz questions drawn from your actual content, including any diagrams or schematics in the document. All of it without needing to get IT involved.
Mistake 3: Everyone gets the same training
Even if you’ve built a course from your own AI use policy, you shouldn’t treat AI literacy as a single level of training that everyone receives identically. The EU AI Act compliance requirements are explicit here too: training should take into account the "technical knowledge, experience, education and training" of staff, and the context in which AI is used. Different roles carry different risk profiles, so your HR team, your finance team, and your C-suite shouldn’t sit or click through the same training.
In practice, this means thinking and training in tiers.
General awareness applies to everyone in the organisation who might interact with AI tools, even occasionally (basically everyone). It covers what tools the company uses and why, plus the basic dos and don’ts: what’s safe to put into AI systems and what isn’t, and how to flag a concern.
Regular users need more depth, tailored to their function. Your marketing team using AI for content generation needs to understand copyright and brand risk. Your HR team using AI in recruitment needs to understand bias, discrimination risk, and the specific GDPR implications of processing candidate data through AI tools. Your finance team using AI for forecasting needs to understand data accuracy, liability, and what verification steps are expected before acting on AI outputs.
Power users and decision-makers – people managing AI implementations or making consequential decisions based on AI outputs – need the fullest picture: AI limitations and failure modes, oversight responsibilities, regulatory obligations relevant to their function, and incident reporting procedures.
The practical output of this step is a simple role mapping: which departments use which AI tools, what the specific risks are, and which training tier applies. It doesn’t need to be complicated. But it does need to exist, because without it you can’t demonstrate that your training program was proportionate and role-appropriate, which is exactly what Article 4 expects.
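A role mapping like the one described above can be sketched as a small lookup structure. The departments, tools, risks, and tier names below are examples, not a prescribed taxonomy:

```python
# Role mapping: department -> AI tools used, key risks, training tier.
# Every entry is an illustrative example, not a prescribed taxonomy.
role_map = {
    "All staff": {"tools": ["Copilot"], "risks": ["data leakage"],
                  "tier": "general awareness"},
    "HR":        {"tools": ["ATS screening"], "risks": ["bias", "GDPR"],
                  "tier": "regular user"},
    "Finance":   {"tools": ["forecasting add-on"], "risks": ["accuracy", "liability"],
                  "tier": "regular user"},
    "AI leads":  {"tools": ["all of the above"], "risks": ["oversight", "incident reporting"],
                  "tier": "power user"},
}

# "Which tier does each department need?" becomes a one-liner.
tiers = {dept: entry["tier"] for dept, entry in role_map.items()}
print(tiers["HR"])  # regular user
```

Three columns in a spreadsheet carry exactly the same information; what matters is that the mapping exists and can be shown to a regulator.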
Mistake 4: Treating it as a one-off event
AI literacy is not a project with an end date that you can mark complete and move on.
The EU AI Act requirements don’t work that way. The Commission’s guidance is clear that AI literacy training should be ongoing as the environment keeps changing. Your AI stack will change, existing tools will add new AI features, policies will get updated. Every change comes with new risks and, ideally, should also come with an update in your AI literacy training.
If the last time your company updated its training was more than six months ago, chances are it’s already outdated. If something goes wrong and regulators come knocking, it’s not likely to suffice.
The practical fix is holding short, recurring sessions rather than one long event and updating the content regularly. Tools like Ervy are built to avoid this mistake. Lessons are scheduled once and delivered automatically in Teams on a recurring basis. When your AI policy changes, you can either generate a new course from the updated document or add lessons to the existing one. New hires can be set up to join the training schedule automatically.
Mistake 5: No documentation
There’s no standalone fine for violating Article 4 of the EU AI Act, but a lack of AI literacy training will be treated as an aggravating factor when investigating other AI Act breaches. It’s very similar to health and safety training – you don’t get a fine for skipping training, but you do get one when someone falls off a ladder and the investigation finds staff had last been trained eighteen months ago.
The EU AI Act deadline for full enforcement is August 2026. By August, you need to make sure that you are not only regularly delivering company and role-specific AI literacy training to your staff but also documenting the whole process. Keep a record of which AI tools you use, which roles interact with them, what training was delivered, who completed it, and when it was updated. And there’s no mandated format – a simple spreadsheet works.
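Since no format is mandated, the training log really can be that simple. Here’s a sketch using Python’s standard csv module – the column names are one reasonable choice, not anything the Act requires:

```python
import csv
from pathlib import Path

# One row per delivered training event. These columns are one reasonable
# choice for Article 4 recordkeeping, not a mandated format.
records = [
    {"employee": "A. Example", "role": "HR", "course": "AI policy v3",
     "tier": "regular user", "completed": "2026-03-02"},
    {"employee": "B. Example", "role": "Finance", "course": "AI policy v3",
     "tier": "regular user", "completed": "2026-03-05"},
]

path = Path("ai_literacy_log.csv")
with path.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)

print(path.read_text().splitlines()[0])
# employee,role,course,tier,completed
```

Whether the log lives in a CSV, a shared spreadsheet, or an LMS export, the test is the same: can you show who was trained, on what, and when.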
Dedicated tools can make the recordkeeping even easier. For example, Ervy logs training completion per person and per team – what course was delivered and when, which lessons were completed, and whether the quiz questions were answered correctly. All of it is exportable as an XLSX file.
What the EU actually expects
Part of why Article 4 has been ignored is that the EU AI Act has been treated as a future problem, when, in reality, the AI literacy obligation has already been in force since February 2025. If your organisation uses AI systems in the EU, you’ve been subject to it for over a year.
August 2026 is the EU AI Act compliance deadline most teams are tracking. That’s when full enforcement of the broader Act begins, including rules for high-risk AI applications. But you shouldn’t wait until August to start building an AI literacy program. Start doing it today.
All you need to satisfy Article 4 is training built around the AI tools your people use and the risks those tools carry, pitched at the right depth for each role, delivered on an ongoing basis, and documented.
That’s where Ervy can help: you can run AI literacy training in Microsoft Teams, the same way you’d cover onboarding or compliance. Ervy takes your existing AI use policy, turns it into a microlearning course automatically, and delivers it in two-to-three-minute lessons directly in Teams. Completion is tracked per person, so your documentation takes care of itself.
If you want the full step-by-step process before you get started – inventory, role mapping, what to document, all of it – we put it into a plain-language EU AI Act compliance checklist built specifically for HR and L&D teams. Everything you need to get EU AI Act compliance right.

