Navigating AI Regulation: UK Evidence Requirements and EU Challenges

Evolving AI Regulatory Expectations in the UK and EU

Navigating AI regulation has become a critical priority for financial firms. With UK evidence requirements tightening and EU obligations emerging under the AI Act, firms must adapt quickly. December 2025 marked a clear shift in the regulatory tone on AI. For UK financial services firms, FCA Chief Executive Nikhil Rathi made it clear: there is no new “AI rulebook”, but supervisory expectations are tightening quickly under existing frameworks.

The FCA has confirmed it does not currently plan AI-specific rules. However, firms must demonstrate that AI-enabled activities comply with established requirements such as the Consumer Duty, SM&CR and operational resilience. In practice, this means being able to evidence how AI supports good customer outcomes, where accountability sits for AI-driven decisions, and how critical AI dependencies are mapped and controlled. The PRA’s continued engagement on model risk management reinforces the same direction of travel: supervisors will assess governance, validation, and the ability to explain decisions, not just the technology.

Cross-Border Considerations: The EU AI Act

The EU AI Act still matters for many UK firms. If you have EU entities, EU clients, or EU-facing products (or rely on vendors building to EU standards), you will need to reconcile UK outcomes-based expectations with more prescriptive EU governance and documentation requirements.

For AI used in areas like creditworthiness and scoring, existing prudential and consumer credit frameworks may cover much of the ground, but EU overlays can add additional obligations in practice.

Q1 2026 Priorities

  1. Complete an AI inventory (including vendor-embedded and “shadow” use). An inventory surfaces hidden risks and underpins compliance with internal policies and regulatory expectations; a minimal record structure is sketched after this list.
  2. Map each use case to UK obligations (Consumer Duty, SM&CR, operational resilience, third-party risk). Explicit mapping ties each AI activity directly to regulatory compliance, accountability, and robust operational standards.
  3. Strengthen controls for higher-risk use cases (testing, monitoring, auditability, escalation). Stronger controls on high-risk AI applications reduce harm, improve oversight, and support transparent decisions.
  4. Pressure-test provider concentration and resilience, including exit planning. Testing provider resilience supports business continuity and reduces single points of failure, particularly in critical AI dependencies.
  5. Upgrade fraud controls as deepfakes and voice cloning accelerate. As these threats become more prevalent, stronger fraud controls safeguard firms and customers against increasingly sophisticated attacks.
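
To make priorities 1 and 2 concrete, here is a minimal sketch of what one AI inventory record and its obligation mapping could look like. The Python representation is purely illustrative: the field names, the UKObligation tags, the is_higher_risk triage rule, and the example entry are all hypothetical assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical obligation tags -- a real mapping would follow the firm's
# own regulatory taxonomy (Consumer Duty, SM&CR, operational resilience, ...).
class UKObligation(Enum):
    CONSUMER_DUTY = "Consumer Duty"
    SMCR = "SM&CR"
    OPERATIONAL_RESILIENCE = "Operational resilience"
    THIRD_PARTY_RISK = "Third-party risk"

@dataclass
class AIUseCase:
    """One row of the AI inventory: what the system does, who owns it,
    where it comes from, and which obligations it maps to."""
    name: str
    description: str
    owner: str                      # accountable individual (SM&CR lens)
    vendor_embedded: bool           # True if supplied inside a third-party product
    shadow_use: bool                # True if discovered outside approved channels
    customer_facing: bool           # drives the Consumer Duty mapping
    obligations: list[UKObligation] = field(default_factory=list)

    def is_higher_risk(self) -> bool:
        # Illustrative triage rule only: customer-facing or shadow use
        # attracts the stronger controls described in priority 3.
        return self.customer_facing or self.shadow_use

# Fictional example entry showing how a record might look in practice.
chatbot = AIUseCase(
    name="Retail support chatbot",
    description="LLM-based assistant answering customer servicing queries",
    owner="Head of Customer Operations",
    vendor_embedded=True,
    shadow_use=False,
    customer_facing=True,
    obligations=[
        UKObligation.CONSUMER_DUTY,
        UKObligation.SMCR,
        UKObligation.THIRD_PARTY_RISK,
    ],
)

if __name__ == "__main__":
    print(f"{chatbot.name}: higher risk = {chatbot.is_higher_risk()}")
    print("Mapped obligations:", [o.value for o in chatbot.obligations])
```

Whether the inventory lives in a spreadsheet, a GRC tool, or code, the substance is the same: every use case carries an accountable owner, provenance flags for vendor-embedded and shadow use, and an explicit mapping to obligations that can be evidenced to supervisors.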

Leaman Crellin can support with regulatory gap analysis, governance design, third-party risk strengthening, and consultation responses, turning AI ambition into defensible, supervisor-ready evidence.


Neil Makwana

Neil is a Principal Compliance Consultant with experience at the UK FSA/PRA, Barclays, and digital assets firms including OKX and Copper Technologies. He specialises in regulatory licensing, structural reform, risk assessment, and regulatory advocacy. Neil holds an MBA from Cass Business School.
