As artificial intelligence (AI) transforms industries and societies, the race to establish effective governance frameworks is intensifying. The UK and European Union (EU) are pursuing distinct paths to regulate AI, reflecting different priorities and challenges.
The future of AI remains uncertain; this article explores the UK’s evolving AI strategy, including the new Labour government’s proposals to enhance infrastructure, governance, and talent development, alongside the EU’s regulatory journey and the challenges it presents for businesses. By examining these approaches, we aim to provide insight into the balance required to promote responsible AI growth while addressing potential risks and regulatory hurdles.
The Labour Party’s AI Vision
As the UK moves to position itself as a leader in AI, the Labour Party’s general election manifesto set a clear agenda for fostering the AI sector. The party made several key pledges that aim to remove barriers to innovation while ensuring responsible governance of rapidly developing technologies.
One significant focus within the Labour Party’s manifesto was removing planning barriers for new datacentres. This measure seeks to address infrastructure bottlenecks, ensuring that the UK has the capacity to support next-generation AI innovation. A proposed national data library would consolidate existing research programmes, driving data-driven public services through improved data accessibility and integration.
Labour also plans to enhance regulatory oversight with a new regulatory innovation office that would unify regulatory functions across government. The King’s Speech also indicated that certain aspects of the voluntary codes for AI developers would be transitioned into statutory provisions.
New Regulatory Directions Post-Election
Since coming to power in July 2024, the Labour government has introduced efforts to regulate AI technologies while promoting economic growth and innovation:
- A New Legislative Approach
In a departure from previous non-binding measures, the King’s Speech on July 17, 2024, proposed a more binding framework for regulating powerful AI models. While an AI Bill was notably absent, the government highlighted its intent to legislate “appropriate requirements” for AI developers, signalling a shift toward stronger oversight and accountability.
- Reforming Data Law
Now under consideration in the House of Lords, the Digital Information and Smart Data Bill promises reforms to data-related legislation which, through its provisions on automated decision-making, may extend to AI. This legislative update seeks to ensure that data laws keep pace with emerging technologies; however, it is not yet clear exactly how this will be implemented.
- Shaping the Future
On July 26, 2024, the Department for Science, Innovation and Technology launched an ‘AI Action Plan’ aimed at leveraging AI to drive economic growth and improve public services. The initiative will focus on assessing infrastructure needs, attracting top talent, and promoting AI adoption across sectors. The plan will be implemented through an ‘AI Opportunities Unit’ designed to translate recommendations into actionable policies.
Challenges of AI Legislation for Businesses
Across the channel, AI regulation in the EU is facing scrutiny. Kai Zenner, a digital policy advisor to MEP Axel Voss, recently provided a critical assessment of the EU AI Act and its implications for businesses:
- Legal Uncertainty
The EU’s regulatory approach is causing hesitation among European companies. Ambiguities and compliance uncertainties are stifling investment, leaving many businesses reliant on US AI technologies instead. This lack of clarity is viewed as a major barrier to building a competitive European AI ecosystem.
- Complexity and Compliance Costs
The Act’s framework presents challenges due to the dynamic nature of AI systems. Businesses are required to conduct repeated compliance testing as their systems evolve, resulting in additional costs and administrative burdens. Owing to the Act’s unclear requirements, many businesses are also paying for third-party conformity assessments they do not need.
- The Burden of an Expanded Compliance Structure
The need for additional roles within companies, such as AI Officers and Chief Information Security Officers (CISOs), combined with existing Data Protection Officers (DPOs), could lead to significant cost increases over time. While Zenner views the Act as a “principles-based, future-proofed, and cooperative law,” he emphasised the complexities and vagueness that make full compliance challenging for many businesses.
The UK’s and the EU’s approaches to AI regulation both highlight the difficult balancing act required to foster innovation while mitigating potential harms. The UK’s evolving strategy emphasises infrastructure growth and talent development, while Europe grapples with legal complexities and business concerns under its ambitious AI Act. As the regulatory landscape continues to evolve, businesses and policymakers alike must navigate these challenges to ensure that AI serves society responsibly and ethically.
For advice on how to navigate AI regulation in your business, please contact our Regulatory Team.