
AI Compliance Made Simple: What the EU AI Act Means for You (Even in the UK)

Updated: Jun 4

The EU has introduced the world’s first major AI law. Here's what it means for your business, even if you're not based in Europe.


Could your AI chatbot get you into legal trouble? With the EU’s new AI Act, the answer may be yes, even if you're in the UK.


In June 2024, European lawmakers passed the EU AI Act, a landmark regulation on how artificial intelligence systems should be developed, sold, and used. It’s the first of its kind and applies to companies inside the EU and any organisation doing business there.


So, what does this mean in practice? In this post, I’ll explain the rules, set out the risk levels, and give you practical steps to stay compliant. Whether you're using AI in recruitment, marketing, healthcare, or just experimenting with ChatGPT, it’s time to understand your responsibilities.


What Is the EU AI Act?

The EU AI Act introduces a tiered, risk-based framework to govern AI.


Officially in force from August 2024, it classifies AI systems based on how likely they are to cause harm.


The idea is simple: the more serious the risk, the stricter the rules.


Why you should care (even outside the EU):


  • If your AI tools reach EU customers, this law applies to you.

  • It sets the tone for future AI legislation globally.

  • It includes requirements for developers of large models like ChatGPT.


The Four Risk Categories Explained

A visual in the official briefing outlines how the Act breaks AI into four levels. Here's the short version:


Unacceptable Risk – Banned outright. AI systems that are clearly abusive or dangerous are not allowed. These include:


  • Social scoring systems

  • Emotion recognition in schools or workplaces

  • Predictive policing based solely on profiling

  • Systems that infer sensitive traits like religion or orientation from data

  • Scraping images online to build facial recognition databases


High Risk – Strictly regulated. These are systems that can seriously affect people’s rights or safety. Examples:



  • Hiring tools that screen candidates

  • Credit scoring or insurance assessments

  • AI used in healthcare or education


To use them legally, you’ll need:


  • A full risk assessment

  • Clear documentation

  • Human oversight and audit trails

  • Ongoing monitoring once live







Limited Risk – Disclose that you’re using AI. Think chatbots, deepfakes, or AI-generated images. You don’t need a licence, but you do need to:




  • Clearly say when the content is AI-generated

  • Inform users they’re interacting with a machine
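In practice, the "inform users they're interacting with a machine" duty can be as simple as prepending a disclosure to every bot reply. This is a minimal illustrative sketch; the function name and wording are my own, not prescribed by the Act:

```python
def with_ai_disclosure(reply: str,
                       disclosure: str = "You are chatting with an AI assistant.") -> str:
    """Prepend an AI-use disclosure to a chatbot reply (illustrative only)."""
    return f"{disclosure}\n\n{reply}"

# Example: every outgoing reply carries the notice automatically.
print(with_ai_disclosure("Our opening hours are 9-5, Monday to Friday."))
```

The exact wording of the notice is up to you; the point is that the disclosure is baked into the response path, so no reply can go out without it.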


Minimal Risk – No red tape (for now). Tools like spam filters, autocomplete, and grammar suggestions require no new paperwork, but basic best practices still apply.


Quick Self-Check: Are You Using High-Risk AI?

Ask yourself:

  • Does your system screen job applicants or students?

  • Does it evaluate credit, loans, or insurance?

  • Is it used in diagnosis, triage, or health advice?

  • Does it help decide who gets public services?


If yes, then:

  • Carry out a conformity assessment

  • Keep thorough records

  • Implement human review where needed

  • Register your system in the EU database for high-risk AI systems
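The self-check above can be captured in a few lines of code: a hypothetical screening helper that flags a system as potentially high-risk if it touches any of the listed domains. The domain labels here are my own shorthand, not legal terms from the Act, and a "True" result means "get proper advice", not "you are definitely in scope":

```python
# Illustrative high-risk screening helper. Domain labels are the author's
# own shorthand, not legal categories; always confirm with qualified counsel.
HIGH_RISK_DOMAINS = {
    "recruitment",      # screening job applicants or students
    "credit",           # credit, loan, or insurance decisions
    "healthcare",       # diagnosis, triage, or health advice
    "public_services",  # deciding who gets access to public services
}

def is_potentially_high_risk(domains_used: set[str]) -> bool:
    """Return True if the system touches any domain from the self-check."""
    return bool(domains_used & HIGH_RISK_DOMAINS)

print(is_potentially_high_risk({"marketing", "credit"}))  # True: credit is listed
print(is_potentially_high_risk({"marketing"}))            # False: none listed
```

A checklist like this is no substitute for a conformity assessment, but it makes a useful first triage step when you map your AI inventory.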


What About ChatGPT and Large AI Models?

The Act includes rules for general-purpose AI models (often called foundation models), such as those behind ChatGPT. These large-scale systems must:


  • Be transparent about training data

  • Include safety and copyright safeguards

  • Notify the EU if they exceed a certain compute threshold (10²⁵ FLOPs)
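To get a feel for the 10²⁵ FLOPs threshold: a common back-of-the-envelope estimate from scaling-law research (not a figure from the Act itself) is roughly 6 FLOPs per model parameter per training token. A quick sketch, with hypothetical model sizes of my own choosing:

```python
# Rough training-compute estimate: ~6 FLOPs per parameter per training token.
# The 6*N*D rule is a scaling-law heuristic; only the 1e25 threshold is from the Act.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

THRESHOLD = 1e25  # compute threshold named in the Act

# Hypothetical example: 100B-parameter model trained on 10T tokens.
estimate = training_flops(params=1e11, tokens=1e13)
print(f"{estimate:.1e} FLOPs, above threshold: {estimate > THRESHOLD}")
```

Only a handful of frontier-scale training runs sit near this threshold, which is the point: the notification duty targets the very largest models, not everyday fine-tuning.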


But what about everyday users? If you're using ChatGPT in a casual or creative setting, no problem. But using it in high-stakes areas (like mental health or hiring)? Then you might fall under the Act's high-risk rules.


Rule of thumb: It’s not what tool you use—it’s how you use it.


What UK Businesses Need to Know

Even if you’re not based in Europe, the EU AI Act can apply to your business if:


  • EU customers use your AI tools

  • You offer services that include AI within the EU


UK examples impacted:

  • A recruitment firm shortlisting candidates from Spain

  • A chatbot on your website used by German visitors

  • An app advising French users on their finances


The UK’s approach to AI is lighter-touch, but that gap is narrowing. As with GDPR, businesses will soon be expected to show they’ve done the right thing—even if no one’s watching.


UK checklist:

  • Map where your AI systems are used

  • Review their purpose and possible risks

  • Build documentation early

  • Add human checks where needed


Staying Safe: AI Compliance Best Practice

You don’t need to wait for a fine to get smart about compliance. Here’s what I recommend:


  • Tell people when AI is involved

  • Log decisions, prompts, and model outputs

  • Include human review, especially in sensitive tasks

  • Use sandboxes to test before going live

  • Retrain models with quality, unbiased data
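The "log decisions, prompts, and model outputs" point can be sketched as a thin wrapper around whatever model call you use. The `model_call` parameter and the JSON-lines log format below are placeholders of my own, not a standard:

```python
import datetime
import json

def logged_call(model_call, prompt: str, log_path: str = "ai_audit.log") -> str:
    """Call a model and append prompt, output, and timestamp to an audit log (sketch)."""
    output = model_call(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
    }
    # One JSON object per line keeps the log easy to grep and replay later.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output
```

Wrapping every production call this way gives you the audit trail the high-risk rules expect, and it costs almost nothing to adopt early even for lower-risk uses.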


Regulators encourage AI sandboxes—safe spaces to test new tech without the full legal burden.


Evidence Block: AI Risk Classification Summary

Risk Category | Examples | Legal Requirements
Unacceptable Risk | Social scoring, emotion recognition in schools | Banned
High Risk | CV screening, healthcare AI, credit scoring | Conformity assessment, oversight, transparency
Limited Risk | Chatbots, deepfakes, AI content generation | Transparency notices required
Minimal Risk | Spam filters, autocorrect, writing assistants | No specific requirements


From the Professor’s Desk

I recently created an AI tool to score CVs and cover letters for a colleague. This software could fall into the high-risk category, so I need to ensure it is objective, be able to demonstrate that objectivity, and confirm it does not unfairly impact candidates. If you are an SME, these issues are now live for you too: understanding what tools you are using, and how, has become a business-critical activity.


Final Thought: Get Ahead, Stay Ahead

The EU AI Act isn’t red tape—it’s a reality check.


It can help your business stand out for integrity, build client trust, and avoid sleepless nights worrying about legal fallout.


If you’re already using AI, this is your signal to act. And if you haven’t started yet, it's better to build it right from day one.


Take the Next Step

Ready to future-proof your AI strategy?


Let’s build smarter, safer AI—together.


📌 Legal Disclaimer

The information provided in this document is for general guidance only and does not constitute legal advice. While care has been taken to ensure accuracy, regulations such as the EU AI Act may be interpreted differently across jurisdictions and are subject to change. You should always consult a qualified legal professional to assess your specific obligations and ensure compliance with applicable laws.


References

  1. European Parliamentary Research Service. (2024, September). Artificial Intelligence Act (EPRS_BRI(2021)698792_EN). European Union. https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2021)698792

  2. European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) – COM(2021) 206 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206



