AI and Copyright: A Closer Look at the UK Government’s Report and Impact Assessment

Following our previous article, on 18 March 2026 the government published its long-awaited report on copyright and artificial intelligence (the “Report”). Published jointly by the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport, the Report sets out the government’s current thinking on the relationship between copyright law and the development of AI systems. It draws upon over 11,500 responses to the December 2024 consultation and subsequent stakeholder engagement. Our initial summary of the contents of the Report can be found here. This article explores in further detail some of the topics covered by the Report.

The Report does not propose any immediate changes to UK copyright law to address the use of copyright works to train AI models. Instead, the government has committed to gathering further evidence, monitoring international developments, and continuing to engage with industry stakeholders before reaching any firm conclusions. The headline message is one of deliberate caution; as the Report states, the government “must take the time needed to get this right” and “will not introduce reforms to copyright law until we are confident that they will meet our objectives for the economy and UK citizens”.

Alongside the Report, the government has published an Impact Assessment examining the potential economic effects of the policy options set out in the December consultation. However, the Report itself recognises that there is “limited and uncertain evidence on the impact of copyright on the development and deployment of AI in the UK”.

No Copyright Exception for AI Training… For Now

By way of reminder, the December 2024 consultation proposed the following four options for the future of copyright and AI policy in the UK:

Option 0 – Do nothing / maintain the status quo

Option 1 – Strengthen copyright by requiring licensing in all cases

Option 2 – Introduce a broad text and data mining exception

Option 3 – Introduce a text and data mining exception with an opt-out

At the time the consultation was published, the UK government’s preferred option was Option 3: a data mining exception with an opt-out, broadly modelled on Article 4 of the EU’s Digital Single Market Directive. However, this option was supported by only 3% of consultation respondents: creatives expressed concern that a broad exception of this type would be “significantly burdensome”, while AI developers predicted that it would lead to a high number of opt-outs, undermining its purpose. The government has therefore moved away from this approach. Instead, it proposes to gather further evidence on how copyright law is affecting the development and deployment of AI across the UK economy, and to consider a range of potential approaches, including more targeted or focused exceptions. Respondents to the consultation had proposed a number of alternatives, including an exception which would permit data mining only for the purposes of science and research. The government acknowledged the merits of such ideas and committed to giving them further consideration. For now, no changes are proposed, and Option 0 – do nothing / maintain the status quo – continues.

Models Trained Overseas

One of the most practically significant issues addressed in the Report is the copyright status of AI models trained outside the UK. The Report refers to the High Court’s judgment in Getty Images v Stability AI [2025] EWHC 2863 (Ch) (our summary of which can be found here), which found that an AI model could, in principle, constitute an “article” for the purposes of secondary copyright infringement under section 22 of the CDPA, but that Stability AI’s Stable Diffusion model (as an ‘article’) did not constitute an ‘infringing copy’ as the model did not store copies of any of the copyright works it was trained on. The Report notes that this is contrary to the German court decision in GEMA v OpenAI, in which the court found that the relevant large language model did retain copies of the song lyrics it had been trained on, suggesting a lack of international harmonisation and illustrating the fact-specific nature of outcomes in this area. Given that the Getty judgment is currently subject to appeal, the government has concluded that it is “right that the application of the law in this area should be considered properly by the courts, to give certainty to right holders, developers and users of AI models”. Accordingly, the government does not propose to amend copyright law in respect of AI systems developed outside the UK at this time.

Transparency

Input transparency was the single issue which commanded the greatest consensus in the consultation, with over 90% of respondents agreeing that AI developers should disclose the content used to train or develop AI models. The creative industries strongly supported mandatory transparency standards, while technology companies supported higher-level, industry-led voluntary disclosure. The Report notes that there is currently no requirement in UK law for an AI developer or provider to disclose information about the copyright works used to train or develop a model. The Report discusses the approaches adopted in the EU AI Act, which requires the disclosure of summaries of training material, and California’s Generative Artificial Intelligence Training Data Transparency Act, which contains similar obligations. In response, the government proposes to work with industry and experts to develop best practice on input transparency, which would then inform any potential future legislation.

On output transparency (the labelling of AI-generated content), the government similarly proposes to work with industry to explore best practice.

Licensing

Option 1 in the consultation was to strengthen copyright by introducing a licensing requirement. Whilst this option was supported by 81% of respondents, the consensus from the consultation responses (from both the creative sector and AI developers) was that the government should not intervene in licensing arrangements, which should remain a matter of commercial negotiation between the relevant parties. The government proposes to monitor the market as it develops, by working with industry experts to identify best practices. The Report also draws attention to the government-supported Creative Content Exchange pilot, announced on 26 January 2026, which is intended to function as a trusted marketplace for the licensing of digitised cultural and creative assets, with an operational pilot platform expected by summer 2026.

Enforcement

The Report acknowledges that AI may pose new practical enforcement challenges, but states that the UK’s framework is capable of adapting to developments in AI. The key challenges to the successful enforcement of copyright identified by the consultation are the lack of transparency over training data, and the issue of AI models being developed outside the UK. Both of these issues have been addressed above. The government proposes to continue working with law enforcement, industry and the judiciary to ensure the enforcement framework remains fit for purpose, and to carry out further work to identify and address barriers to enforcement.

The government also confirms that it does not propose to create a new regulator specifically to oversee matters of AI and copyright, and will not impose new regulatory duties on existing regulators at this time.

Computer-Generated Works

The Report addresses the UK’s existing copyright protection for computer-generated works where there is no human author, which are protected under section 9(3) CDPA. Since the consultation responses showed minimal evidence that this form of protection is being actively used, and those respondents who answered the questions relating to computer-generated works favoured the removal of protection, the government proposes that this protection should be removed, and that copyright should continue to “incentivise and protect human creativity”.

Digital Replicas

Finally, the Report considers the growing use of AI to create digital replicas (the replication or mimicry of an individual’s voice or likeness, often called deepfakes). The government acknowledges that AI has made “convincing, low-cost digital replicas easy to create and disseminate at scale”, and that existing UK law provides only a fragmented patchwork of protections which do not give most individuals meaningful control over their image or voice. The government proposes to explore a range of options to address these risks, including considering whether it would be beneficial to introduce a new digital replica or personality right. Any such legislation would require careful consideration, as the introduction of a personality right in particular could have far-reaching impacts beyond addressing the risk posed by using AI to create digital replicas or deepfakes.

Conclusion

The Report represents a significant step in the government’s ongoing engagement with the challenge of copyright and AI. Notably, the government has stepped back from its previously preferred approaches and committed to a further evidence-gathering process before any legislative reform is introduced. The government continues to try to balance the competing interests of the UK’s creative industries whilst ensuring the UK remains a leading jurisdiction for AI and technology investment. This remains an uncertain and fast-moving area of law. We will be following developments closely, including the appeal to the Court of Appeal in Getty v Stability AI.