UK AI Copyright Battle Heats Up
The UK's AI Ambition and Initial Copyright Plans
Following its ascent to power last year, the UK Labour government publicly committed to fostering a pro-growth and pro-innovation environment for Artificial Intelligence, with the ambitious goal of establishing the nation as an AI “superpower.” A key component of this strategy involved updating the country’s copyright laws.
Creative Backlash and Policy Reversal
However, this move was met with significant resistance from rightsholders, who perceived the proposed changes as an existential threat to their work and income. They accused the government of favoring large US tech corporations. The situation escalated into a public relations disaster when high-profile figures, including musicians Elton John, Paul McCartney, and Dua Lipa, voiced their criticisms, thereby casting a shadow over the future direction of UK AI policy.
The UK's Unique and Restrictive Copyright Stance
After Brexit, the UK chose not to adopt the 2019 European Union Copyright Directive. This directive permits commercial text and data mining (TDM) of copyrighted material, provided that rightsholders can opt out. Consequently, the UK's current copyright framework is more restrictive than those in both the US and the EU, permitting TDM solely for non-commercial research purposes.
Proposed Reforms and Widespread Dissatisfaction
This restrictive stance places the UK at a competitive disadvantage in the rapidly evolving field of AI. Studies indicate a clear correlation between more permissive copyright laws and accelerated AI innovation. In an attempt to address this, the government announced plans in December to ease copyright protections by introducing a rights reservation model, similar to the EU's opt-out system.
This proposal, however, failed to satisfy key stakeholders. Major AI companies viewed it as an ongoing hindrance to AI development and investment in the UK. OpenAI, for instance, advocated for the US model of fair use, which allows for broader copyright exemptions, rather than the EU's opt-out approach.
Parliamentary Resistance and Calls for Transparency
OpenAI's recommendation found little support among politicians, most of whom sided with the creative industries. The House of Lords repeatedly rejected the Data Use and Access Bill — a bill that became a focal point for protesting the proposed copyright reforms — before it eventually passed last week. A key demand from the Lords was the inclusion of transparency amendments, compelling AI firms to disclose the data used to train their models.
Government Scrambles for a Solution
The government now appears to be in a difficult position. UK Technology Secretary Peter Kyle recently expressed regret over initially favoring the opt-out option. Nevertheless, Kyle also reaffirmed his belief that the current UK copyright regime is “not fit for purpose.”
In an effort to mitigate the backlash, ministers are now authorizing cross-industry groups to produce “technical reports” on copyright and AI within nine months. The goal is to find practical ways for creators to opt out of AI training effectively. Kyle has emphasized that transparency will be the “foundation” of the government's approach.
The Challenge of Enforcing Opt-Out Mechanisms
A significant hurdle is how to enforce the opt-out mechanism effectively. A recent report by the EU Intellectual Property Office highlights several difficulties. Marcel Mir, an AI and IP expert at the Center for Democracy and Technology in Brussels, noted, “We already see problems with the enforcement of the opt-out right in the EU, including an unclear definition of ‘machine-readable’ means of opting out and insufficient transparency requirements. It is now almost impossible for rightsholders to know if their work was used to train a model.”
Fundamental Questions Remain Unanswered
Beyond enforcement, broad and fundamental questions persist. Transparency measures alone do not resolve the underlying tension between fostering innovation and protecting the rights of creators. Key unresolved issues include: What level of compensation, if any, should be given to creators whose work is used to train AI models? How can the public interest be best served, moving beyond the demands of specific interest groups? And what defines a copyright system that is “fit for purpose” in the age of AI?
Legal Battles Take Center Stage
With political deadlock, these complex questions are increasingly being addressed in the courts. A notable case is Getty Images’ lawsuit against Stability AI, which commenced this month at London’s High Court. Getty Images accuses Stability AI of unlawfully scraping millions of its images for training its image-generation model. In defense, Stability AI’s lawyers argue that the lawsuit represents an “overt threat” to the AI industry.
The Urgent Need for Clarity
Such legal disputes have the potential to continue for years. In the interim, the intertwined issues of copyright and AI demand swift resolution if the UK hopes to realize its ambition of becoming a global AI superpower.
Oona Lagercrantz was a Project Assistant with the Tech Policy Program at the Center for European Policy Analysis (CEPA) in Brussels. She is currently an AI Governance Fellow at the Talos Network. She received a first-class bachelor’s degree and a master’s degree with distinction from Cambridge.
Bandwidth is CEPA's online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions with which they are affiliated or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.