
By Andrew How 

Artificial intelligence is reshaping insurance through advanced analytics and automation. Andrew How explores how regulators in the UK and EU are taking different approaches to AI governance. He highlights why insurers must align innovation with evolving compliance standards to manage risk, ensure fairness, and harness AI’s full potential across jurisdictions. 

Artificial intelligence (AI) is fast becoming the engine room of modern insurance, from dynamic pricing and risk-based underwriting to customer segmentation and behavioural modelling. But as the technology races ahead, regulators on both sides of the Channel are taking divergent paths.

The EU’s AI Act 

The EU AI Act entered into force in August 2024, with staggered deadlines from 2025 to 2027. It takes a risk-based approach, classifying AI applications into prohibited, high-risk, limited-risk, and minimal-risk categories. Most AI systems used in insurance pricing, underwriting, claims handling, and fraud detection will likely fall into the "high-risk" category – especially those impacting access to financial services or those that may influence decisions with legal or significant personal consequences.

The law also imposes heavy penalties for non-compliance: up to €35 million or 7% of global annual turnover, whichever is higher. 
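The "whichever is higher" rule can be made concrete with a small sketch (the function name is illustrative, and real fines depend on the infringement tier and regulatory discretion; this only captures the headline cap):

```python
# Illustrative only: the EU AI Act's top penalty tier caps fines at
# EUR 35 million or 7% of global annual turnover, whichever is higher.

def max_eu_ai_act_fine(global_annual_turnover_eur: int) -> int:
    """Return the theoretical maximum fine under the top penalty tier."""
    # Integer arithmetic (x * 7 // 100) keeps the 7% calculation exact.
    return max(35_000_000, global_annual_turnover_eur * 7 // 100)

# For a firm with EUR 1bn turnover, 7% (EUR 70m) exceeds the EUR 35m floor.
print(max_eu_ai_act_fine(1_000_000_000))  # 70000000
```

For smaller firms the fixed EUR 35 million figure dominates, which is why the cap bites proportionally hardest on firms below roughly EUR 500 million in turnover.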

Specific requirements for insurers began applying from February 2025, including bans on certain AI uses (e.g., social scoring) and AI literacy training. From August 2025, general-purpose AI models and governance structures will be in scope, followed by implementation of full high-risk obligations in 2026–2027. 

The UK’s Sector-Led Model: Flexibility with Friction 

In contrast, the UK government has intentionally avoided creating a centralised AI law, favouring a pro-innovation framework based on existing regulatory principles. The approach emphasises five cross-sectoral principles: safety, transparency, fairness, accountability, and contestability. 

While flexible, this regime has left sectoral regulators – like the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) – to interpret and enforce these principles based on context.  

Key compliance considerations include: 

  • Ensuring fair outcomes in pricing and underwriting, in line with Consumer Duty
  • Avoiding algorithmic bias in automated decision-making
  • Managing risks from third-party AI tools (e.g., cloud or data vendors)
  • Ensuring explainability and audit trails for models impacting customer access to financial services
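The last point – explainability and audit trails – can be illustrated with a minimal sketch. The record structure, field names, and file format below are hypothetical, not drawn from any FCA or PRA template; the idea is simply that every automated decision is logged with its inputs, model version, and explanation so it can be reconstructed later:

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class DecisionRecord:
    """Hypothetical audit-trail entry for one automated pricing decision."""
    model_id: str        # which model produced the decision
    model_version: str   # exact version, for reproducibility
    inputs: dict         # the features the model actually received
    output: float        # e.g. the quoted premium
    explanation: dict    # e.g. top feature contributions
    timestamp: float = field(default_factory=time.time)

def log_decision(record: DecisionRecord, path: str) -> None:
    """Append the record as one JSON line to an append-only audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: log a motor-pricing quote (all values are made up).
record = DecisionRecord(
    model_id="motor-pricing",
    model_version="2.3.1",
    inputs={"driver_age": 34, "vehicle_group": 12},
    output=412.50,
    explanation={"driver_age": -0.18, "vehicle_group": 0.32},
)
log_decision(record, "decisions.jsonl")
```

A design choice worth noting: an append-only, line-per-decision log makes it straightforward to answer a regulator's "why did this customer get this price?" question without re-running the model.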

Meanwhile, the proposed AI (Regulation) Bill, a private member’s bill introduced in the House of Lords, signals rising political appetite for stronger statutory guardrails.

Looking Ahead: The Agentic Horizon 

For re/insurers operating across the UK and EU, the divergence in AI governance creates a challenging compliance landscape. An AI model built to UK standards, emphasising agility, proportionality, and sectoral discretion, may not meet the EU’s strict documentation, audit, and transparency requirements. 

Ultimately, for insurers, technology enhancements around pricing, decisioning, and underwriting are evolving in tandem with regulation, increasing the demand for trusted, expert technology partners that can also demonstrate actionable insight into AI compliance.

Indeed, the next regulatory test will involve agentic AI – systems capable of making autonomous decisions and dynamically adapting to objectives without constant human intervention, but – critically – still with expert human oversight.

This shift from automation to autonomy will also require an evolution in governance structures. Agentic AI – systems capable of proactively making decisions, initiating actions, and adapting based on feedback to achieve specific goals – has immense potential to improve efficiency, customer centricity, and personalisation. The potential extends even further when multiple AI agents operate in parallel, dynamically coordinating and adjusting in real time to solve complex problems or pursue shared objectives.

Navigating, Not Avoiding, Complexity 

The AI regulatory landscape in the UK and EU is evolving, and these two markets are taking distinctly different approaches to balancing innovation and oversight. While the EU imposes strict, top-down compliance obligations, the UK’s sector-led model offers flexibility but introduces ambiguity. For insurers operating across both jurisdictions, this divergence creates added complexity, not only in legal terms but in day-to-day operational decisions.  

The difference between AI ambition and impact often lies in working with partners who understand the full picture, from regulatory nuance to behavioural economics and legacy integration. The best suppliers bring this clarity to every deployment. As the technology moves toward greater autonomy and impact, partnering with trusted technology providers who can ensure compliance, transparency, and performance at scale is essential. 

We firmly believe that regulatory divergence isn’t a roadblock to innovation. It’s a catalyst for more mature, enterprise-wide AI strategies that are as robust as they are agile – and it’s this shift that will define the next phase of insurance transformation.

About the Author

Andrew How is Director of Insurance – UKI at Earnix, leveraging nearly two decades of expertise in P&C insurance across EMEA. He drives growth and innovation through enterprise software solutions, blending deep industry insight with cutting-edge Insurtech and Fintech advancements to transform insurance pricing, underwriting, and customer engagement.
