Remember when AI was something you’d read about in think-pieces and nod along to knowingly? That feels like a long time ago. Today, AI is embedded in everything from loan approvals to email drafts, and compliance teams are right in the middle of working out what that actually means for regulation.
The trend is hard to miss. AI adoption across UK businesses has surged, with financial services, legal, and healthcare leading the charge. But with adoption comes scrutiny — and the regulatory landscape has shifted significantly over the past eighteen months.
Where Things Stand in the UK
The UK has taken a deliberate path: rather than one sweeping AI law, it’s asking existing regulators to handle AI within their own sectors. The FCA oversees AI in financial services. The ICO handles data protection implications. Ofcom deals with online safety. You get the idea.
This isn’t an accident. When the government published its AI Opportunities Action Plan, the message was clear — the UK wants to be seen as a place where AI innovation is welcomed, not bogged down in red tape. Regulators have been encouraged to promote AI adoption in their sectors, not just police it.
But here’s where it gets interesting. A dedicated UK AI Bill has been in the works, though it’s been pushed back to summer 2026. The delay reportedly stems from two pressures: aligning with the US approach, and heated debate over how AI interacts with copyright — particularly how training data is sourced and whether creators can opt out.
Meanwhile, the Data (Use and Access) Act 2025 received Royal Assent in June 2025. It touches on automated decision-making and smart data schemes, but notably doesn’t resolve the copyright question. Instead, the government is required to produce an economic impact assessment and report on the use of copyright works in AI development by early 2026.
And Then There’s the EU
If you operate across borders — and most firms of any size do — you can’t ignore the EU AI Act.
August 2025 was the big milestone: rules for general-purpose AI (GPAI) models came into effect. The European Commission published its Code of Practice for GPAI providers alongside detailed guidelines on compliance obligations. From August 2026, the Act becomes generally applicable and requirements around high-risk AI systems kick in — think credit scoring, recruitment tools, biometric identification. Fines can reach €35 million or 7% of global turnover.
The UK’s sector-by-sector approach and the EU’s comprehensive framework are fundamentally different philosophies. For compliance teams at firms operating in both markets, this means running two parallel compliance programmes — one that satisfies the FCA’s principles-based expectations, another that ticks the EU’s more prescriptive boxes.
What the UK Regulators Are Actually Doing
This is where it gets practical. Here’s a snapshot of where the key regulators stand:
The FCA has been perhaps the most vocal. CEO Nikhil Rathi has made clear that firms won’t face bespoke new AI rules — the FCA is “technology-agnostic, principles-based and outcomes-focused.” But that doesn’t mean a free pass. The Senior Managers & Certification Regime still holds individuals accountable when AI goes wrong, and the FCA has launched an “AI Live Testing” scheme where firms can test AI models in real-world conditions with synthetic data and regulatory support. They’re also developing a statutory Code of Practice jointly with the ICO.
The ICO published its first dedicated AI and Biometrics Strategy in June 2025, focusing on three high-risk areas: foundation model development (and how personal data is used in training), automated decision-making (particularly in recruitment), and facial recognition technology. A statutory code of practice on AI and automated decision-making is in the works. The ICO has also been running voluntary audits of AI-powered recruitment tools — and the findings weren’t flattering. Some tools were filtering candidates by protected characteristics without a lawful basis.
Ofcom has been grappling with how generative AI chatbots fit within the Online Safety Act. The answer isn’t entirely clear yet — particularly for single-user chatbots that don’t involve interaction between users. Ofcom has already issued fines under the OSA to operators of AI-powered services that failed to implement age assurance measures.
The Practical Compliance Challenges
Regulation aside, what should actually be keeping compliance professionals up at night?
Data protection remains the big one. AI systems are hungry for data, and the interaction between AI training and UK GDPR is still being tested. The ICO has been clear that it will scrutinise how foundation model developers handle personal information — and its supply chain focus means both developers and the organisations deploying their tools face potential scrutiny.
Bias and fairness. It’s one thing to have an AI policy; it’s another to demonstrate that your AI systems don’t systematically disadvantage certain groups. The EHRC has updated its guidance for public sector bodies on assessing the equality impact of AI, and private sector firms should expect similar expectations to flow through.
Transparency and explainability. If an AI system denies someone credit, or flags them for additional AML checks, regulators will expect you to explain why. “The algorithm decided” isn’t going to cut it.
What We’d Recommend
Based on where things stand right now:
Map your AI exposure. You can’t govern what you haven’t identified. Audit where AI is being used across your organisation — including third-party tools your teams may have adopted without telling anyone.
Build governance that matches the risk. Not every AI tool needs the same level of oversight. A chatbot answering FAQs is different from a model making lending decisions. Focus your resources where the risk is highest.
Train your people. Your compliance team needs to understand how AI actually works — not at a technical level, but enough to ask the right questions. The businesses getting this right are the ones where compliance and technology teams actually talk to each other.
Watch both regimes. If you have any EU exposure, start preparing for the August 2026 high-risk AI requirements now. The compliance burden is significant and the deadlines are real.
Don’t wait for the UK AI Bill. The regulatory direction of travel is clear enough to act on today. The firms that treat AI governance as a competitive advantage rather than a compliance burden will be the ones best positioned when the rules do land.
The AI regulatory landscape isn’t going to settle down anytime soon. But that’s not a reason to wait — it’s a reason to get your house in order now.
Maximise your compliance!
Ensure your team always stays compliant, knowledgeable, and motivated to drive your organisation forward.