The UK takes a pro-innovation, sector-specific approach to AI regulation rather than a single comprehensive law.
Established in 2023, the UK AI Safety Institute conducts research and evaluations of frontier AI models. It focuses on evaluating models for dangerous capabilities, working with AI developers on pre-release safety testing, and publishing safety research.
Rather than creating new AI-specific legislation, the UK relies on existing regulators (FCA, Ofcom, CMA, ICO) to apply five cross-cutting principles to AI in their sectors, as set out in the 2023 white paper: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The UK GDPR (retained from EU law post-Brexit) governs AI systems that process personal data. The ICO provides specific guidance on AI and data protection, including rules on solely automated decision-making (Article 22) and data protection impact assessments (DPIAs).
Our methodology evaluates vendors along three dimensions: alignment with UK AI Safety Institute evaluation practice, adherence to sector-regulator guidance, and UK GDPR compliance.
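To make the evaluation concrete, a minimal scoring sketch is shown below. The principle labels follow the 2023 white paper wording, but the class name, score scale, and weighting are illustrative assumptions, not part of the published methodology:

```python
from dataclasses import dataclass, field

# The five cross-cutting principles from the 2023 UK white paper.
PRINCIPLES = [
    "safety, security and robustness",
    "appropriate transparency and explainability",
    "fairness",
    "accountability and governance",
    "contestability and redress",
]

@dataclass
class VendorAssessment:
    # Hypothetical rubric: field names and weights are illustrative.
    vendor: str
    # 0-2 per principle: 0 = no evidence, 1 = partial, 2 = documented
    principle_scores: dict = field(default_factory=dict)
    gdpr_dpia_on_file: bool = False   # UK GDPR dimension
    aisi_style_evals: bool = False    # AI Safety Institute dimension

    def score(self) -> int:
        base = sum(self.principle_scores.get(p, 0) for p in PRINCIPLES)
        # The two non-principle dimensions are weighted as binary gates here.
        return base + 2 * self.gdpr_dpia_on_file + 2 * self.aisi_style_evals

assessment = VendorAssessment(
    "ExampleCo",
    principle_scores={p: 2 for p in PRINCIPLES},
    gdpr_dpia_on_file=True,
    aisi_style_evals=True,
)
print(assessment.score())  # maximum attainable: 14
```

In practice a real rubric would attach evidence requirements to each principle rather than a bare numeric score; the sketch only shows how the three dimensions could combine.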