A.I. and the Law: Businesses and Investors Can No Longer Plead Ignorance

I am a lawyer. Give me a set of rules. Give me a problem. Understand the problem. Apply the rules to the facts. Understand the client, the business, the industry, and the technology. Anticipate challenges, be commercially minded, adapt communication style, and deliver. I am good at it, and I enjoy it.

But A.I.? Advising clients on A.I. is uniquely perplexing.

It’s not the technology. I have a technical background: an engineer turned lawyer, with a history in patent litigation and expertise spanning software, copyright, open source and cybersecurity. I need to fully understand something before I feel qualified to advise on it. Clients pay for expertise. Call me old-fashioned.

To understand A.I. at a technical level, particularly generative A.I., I have Stephen Wolfram to thank: his exposition on large language models and concepts such as variational autoencoders (VAEs) was an invaluable primer. And this level of detail is necessary. For example, one core question is whether a trained model “contains” a copy of the works on which it was trained. Unravelling that issue requires grappling with what the technology is actually doing. But this only gets you so far. Technology evolves rapidly: six months later, we’re discussing inference, adapters, grounding and multi-hopping. Awareness is good, but clients need more from their advisors than soundbites on LinkedIn. They need rigorous research, deep technical understanding and then thoughtful legal analysis.

The Legal Landscape: A Moving Target

In the UK, is the scope of what is and is not copyright infringement clear when it comes to A.I. systems? Few agree on the answer. There is a text and data mining exception to copyright infringement which might, in theory, apply to A.I. training. However, it is narrow: it does not apply to commercial activities. Other exceptions exist, but their application is similarly limited and depends on the context in which the work is used (e.g. in education).

In the EU there has been recent change in the form of the EU A.I. Act, and there is also the Digital Copyright Directive, which introduced text and data mining exceptions under which copyright holders must proactively ‘opt out’ if they want to protect their rights. However, the technical implementation of such an opt-out mechanism is far from clear. Similarly, there are transparency requirements in the EU, the scope of which is also unclear. The UK government initially considered adopting a broad text and data mining exception, then back-tracked, and is now reconsidering an EU-style opt-out in its recently closed consultation; many of the same concerns that are live in the EU would apply equally in the UK.

Separately, there is the UK’s draft Data (Use and Access) Bill, which is still being debated and which proposes rather more stringent copyright protections from an extra-jurisdictional and transparency perspective. Some argue that collective licensing models are a more balanced solution, though they come with their own challenges, for example agreeing the circumstances in which rights holders must use them.

The Political and Commercial Overlay

A.I. regulation is not just a legal issue; it is a political one too. The UK government presently aims to attract A.I. developers, but what impact might that have on the country’s creative industries? On the same day the UK closed its consultation, it announced a delay in A.I. regulation to align with the Trump administration’s stance. Meanwhile, many in the creative sector voiced their opposition on the front pages of UK newspapers. Tech companies, of course, have their own priorities.

The US has been the origin of most of the world’s LLMs and foundation models. It is also, therefore, home to most of the litigation, with Meta, Microsoft, OpenAI, Google and Nvidia all subject to claims of copyright infringement from various copyright holders, from authors to media companies, across any number of creative industries, from music and publishing to software. There is a school of thought that A.I. developers should be able to rely on the US defence to copyright infringement known as fair use, though recent case law might call that into question. Whether fair use is available as an effective defence, and how far such a defence would extend, is far from settled.

High-profile A.I. and copyright litigation is also playing out in the UK courts. Getty Images is suing Stability A.I. in both the US and the UK for copyright infringement over how the Stable Diffusion product has used its images. It is an important case: it addresses whether specific acts are problematic (such as training the A.I. system and generating the outputs), and whether the A.I. system itself can be considered an infringement. It also raises commercialisation issues, such as who has downloaded what, where, and what has technically happened in each jurisdiction. Getty feels aggrieved, and Stability A.I. justifies its actions with various arguments, including in respect of jurisdiction and some novel points involving exceptions under UK legislation, such as “pastiche”. Whatever the outcome of the trial in June, this case is likely to be appealed, perhaps over the course of a number of years. Ultimately, though, it could materially shape which aspects of A.I. systems are legal in the UK.

Conclusion: A High-Stakes Landscape

There is significant global uncertainty surrounding the legality of A.I. systems, at least from a copyright perspective, with implications for law, politics and technology. And that is before one addresses A.I. issues relating to data privacy compliance, itself another complex topic and one firmly on the radar of global regulators. None of this uncertainty is likely to be resolved anytime soon.

From my own experience, many start-ups and entrepreneurs are forging ahead without fully appreciating what could be existential legal risks to their businesses. Astute investors will be increasingly alert to these risks and wary of business models that fail to account for legal uncertainties or to mitigate them. By contrast, businesses seeking investment would do well to be informed, address these risks head on, and seize the opportunity to use legal foresight as a competitive advantage over those who are less prepared.

In A.I., as in law, ignorance is no defence.