EU AI Literacy

By Jonathan Armstrong

AI is transforming business, but the understanding of AI is not universal. Jonathan Armstrong, Partner at Punter Southall Law, outlines the new legal requirements under the EU AI Act and explores how companies will need to adapt, explaining that organisations that fail to train staff properly risk compliance headaches, liability, and reputational damage.

Getting ahead of EU AI literacy requirements – how businesses can stay compliant and competitive

In most companies, AI is being used in business functions from HR and marketing to customer service. Figures reveal 78% of global companies use AI, with 71% deploying GenAI in at least one function[i]. However, employees often don’t fully understand how these tools work, and this gap can no longer be ignored.

The EU AI Act, particularly Article 4, addresses this by making AI literacy a legal requirement. Since February 2025, any organisation operating in the EU, or offering AI-enabled services to EU markets, must ensure their employees, contractors, and suppliers have a sufficient understanding of the AI tools they use. It is not enough to deploy technology responsibly; organisations must demonstrate that their workforce knows what they are doing.

What’s more, AI literacy isn’t just for developers or data scientists. HR teams using AI in recruitment, marketing teams using Generative AI for campaigns, and customer service staff managing chatbots are all included. Third-party contractors and vendors fall under the same obligations.

The European Commission defines AI literacy as the skills, knowledge, and understanding required to interact with AI responsibly. This includes:

  • Knowing how AI systems function and the data they use
  • Recognising risks such as bias, hallucinations, or discrimination
  • Understanding when and how human oversight is needed
  • Being aware of legal obligations under the EU AI Act and other relevant frameworks

Why businesses can’t afford to ignore it

Some organisations may assume AI literacy does not apply to them because they are not in tech. But if you deploy AI systems, you are in scope. Even seemingly low-risk applications, such as a customer service chatbot, can create legal and reputational exposure if misused.

The risks extend to Shadow AI, too. AI bans rarely work; employees often turn to personal devices, creating hidden risks. This means that universal staff training and clear policies are not just sensible, they are essential.

There is also a generational aspect. Digital natives often find the tools they need via social media or search. Without proper guidance, this can increase organisational risk. A well-planned AI literacy programme mitigates misuse and strengthens compliance.

Who do the rules apply to?

Article 4 covers any organisation using AI in the EU, even if based elsewhere, including UK businesses deploying AI tools in EU operations or offering AI-enabled services to EU customers.

The consequences of non-compliance are not limited to the IT team. Misleading chatbots or biased hiring algorithms can create liability for the whole business. Regulators are paying attention, and complaints could be lodged with national authorities or even GDPR regulators if personal data is misused. Examples already exist, from social media firms to UK dating apps that used AI-generated icebreakers.

Consequences of non-compliance

While AI literacy obligations came into effect on 2 February 2025, enforcement by national authorities begins on 3 August 2026. Each EU Member State will determine its own enforcement approach and penalties, considering factors such as severity, intent, and negligence.

The European AI Office provides guidance, expertise, and coordination but does not enforce Article 4 directly. For now, the primary risks for organisations are civil action, pressure groups, and reputational damage.

As a result, businesses can’t wait until 2026: regulators are already planning audits, and enforcement and litigation risks exist today. Preparation means addressing both governance and culture.

Here are five steps for legal and compliance teams:

  1. Map your AI estate
    Audit all AI systems, whether in-house or third-party, covering decision-making, customer interactions, and content generation.
  2. Develop targeted AI literacy training
    Training must be role-specific. HR teams using AI in hiring, for instance, need to understand bias, data protection, and explainability.
  3. Review contracts and third-party relationships
    Ensure vendors meet AI literacy standards and reflect these obligations in contracts.
  4. Create internal AI policies
    Set clear rules for AI use, approval processes, and human review. Treat this with the same rigour as data protection or anti-bribery frameworks.
  5. Engage the board and embed a responsible AI culture
    AI is now a board-level issue. Leadership must set expectations around responsible innovation, transparency, and compliance.

Article 4 signals a regulatory shift: businesses must now prove that their people understand AI, not just deploy it responsibly. Just as GDPR reshaped data handling, the EU AI Act is transforming how AI is implemented, monitored, and explained across the workforce. What was once best practice is now a legal requirement, and getting ahead of it is the smartest move any organisation can make.

About the Author

Jonathan Armstrong is a lawyer at Punter Southall Law working on compliance & technology. He is also a Professor at Fordham Law School. Jonathan is an acknowledged expert on AI, and he serves on the NYSBA’s AI Task Force looking at the impact of AI on law & regulation.

Reference
[i] https://explodingtopics.com/blog/companies-using-ai
