UK Justice System Adopts OpenAI Despite Major Risks
Megan Kirkwood is a fellow at Tech Policy Press.
The UK Ministry of Justice & Crown Prosecution Service government office building in Westminster. Shutterstock
UK Government Deepens Partnership with OpenAI
Last week, the UK’s Ministry of Justice announced a significant expansion of its partnership with OpenAI. Following a successful pilot of ChatGPT Enterprise, the government has secured an agreement to formally adopt the technology, which includes a new data residency plan. The government’s press release celebrated the move, stating that allowing OpenAI's business customers to store data on British soil will enhance privacy, accountability, and national resilience against cyber threats.
This agreement builds on a Memorandum of Understanding (MOU) signed earlier this year, giving OpenAI broad influence over UK AI policy. This latest development shows OpenAI effectively placing itself at the front of the line for public service contracts. While details remain sparse, some reports suggest the contract could be worth around £6.75m over two years.
OpenAI confirmed its technology will continue to assist with routine tasks like writing, compliance, research, and document analysis. Deputy Prime Minister David Lammy praised the deal, claiming it enables staff to be "more human" by cutting down on administrative burdens and freeing them up to focus on frontline duties.
Data Sovereignty or a Digital Mirage?
The centerpiece of the announcement is the data residency agreement, allowing OpenAI customers to “store their data on British soil for the first time.” The government suggests this will ensure compliance with UK data protection rules and bolster national sovereignty. However, this claim is questionable.
Critically, the US CLOUD Act allows US law enforcement to compel American companies like OpenAI to hand over data stored anywhere in the world, including on UK servers. This means all data hosted in these new UK-based centers could still be seized by US authorities, undermining the very idea of sovereign control.
Furthermore, the economic benefits for the UK are dubious. The vast majority of the deal's value will flow back to OpenAI in the US. It's widely understood that data centers create very few local jobs. Meanwhile, UK-based AI companies are being sidelined. Tim Flagg, chief executive of the trade association UKAI, has criticized the government's focus on US Big Tech, highlighting an imbalance that harms the domestic AI industry.
The Push for AI Across UK Public Services
The Ministry of Justice's deal is part of a broader government insistence that all public institutions adopt AI. The Department of Health and Social Care, for example, has also been a major focus for AI implementation.
In healthcare, the government has pushed for AI-powered transcription services during patient appointments to automate note-taking and letter drafting. The goal is to boost productivity without increasing costs. However, trials evaluating these tools often overlook the harms of incorrect outputs from large language models, which are an inherent risk of the technology. In a medical setting, such errors could have life-threatening consequences, with official NHS guidance noting that liability remains a "complex and largely uncharted" area.
AI in Justice: Efficiency vs Ethical Risks
The Ministry of Justice has gone all-in on AI, launching a dedicated "Justice AI Unit" to embed AI across the system. The unit’s website, which resembles a tech product page, outlines plans for AI assistants and even a single digital identity system for managing criminal records.
More alarmingly, the ministry has set its sights on using predictive AI to stop prison violence before it happens by analyzing prisoner data. This approach seems to ignore the extensive documented harms of predictive crime technology, which is known to be largely inaccurate and to reproduce historical biases.
In their book AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor detail how these models cause harm by using flawed metrics, such as arrest data instead of crime data, which amplifies racial disparities in policing. Despite the Ministry of Justice's claim that AI will only “support, not substitute, human judgment,” the push for efficiency risks automating discrimination and stripping away accountability.
The drive to inject AI into the justice system is a clear response to the government's mandate to push economic growth by supporting the AI and LawTech sectors. While boosting the industry may be a goal, it points to a troubling trend of outsourcing critical public infrastructure to US Big Tech, in this case to a company that has shown no hesitation in aligning itself with antidemocratic forces.