August 18, 2025
The Law Commission has published a discussion paper on AI and the law, examining the challenges posed by the growth of artificial intelligence and the areas of the law that might require reform in response.
The paper is intentionally pitched at a high level, exploring not only how AI works, but also the various ways in which its increasing sophistication could present challenges for our existing legal frameworks.
In particular, the paper considers the consequences of AI becoming increasingly autonomous and adaptive (especially with the rise of AI agents). Against this background, it points to the possibility of so-called ‘liability gaps’ emerging where “no natural or legal person is liable for the harms caused by, or the other conduct of, an AI system”. Complex supply chains only serve to complicate matters further, and even where a person can be identified as potentially liable, the Law Commission explores the challenges of, for example, establishing causation, or proving the knowledge required to found liability in many causes of action.
Further still, there is the question of what the paper calls ‘opacity’, also known as the ‘black box problem’: the phenomenon of not knowing how or why an AI system produces a particular output. This could be particularly problematic in the context of public law, where decision-makers are required to follow proper procedures and to show that only relevant factors have been taken into account when reaching a decision. As the Commission explains, if there is no way of knowing what factors an AI system took into account in reaching its decision, there is no way of knowing whether some of them were irrelevant.
The paper continues by exploring challenges associated with the oversight of, and reliance on, AI, as well as with training data, before posing the question of whether, in light of all of these challenges, it is worth pursuing what it calls the ‘radical’ option of granting AI systems some form of legal personality. As the Law Commission explains, this would have the advantage of filling the liability gaps, whilst “potentially encouraging AI innovation and research by granting AI developers separation in terms of liability”.

The counter-argument, which the paper acknowledges, is that AI systems may become ‘liability shields’, protecting developers from reasonable accountability. There are further challenges too, such as the type of legal personality that should be granted and, as the Commission puts it, “the complexity of granting AI the ability to hold funds and assets such that they can be held meaningfully accountable”. Whilst the paper is clear that we may not be at this stage yet, the Commission states that “the option of granting some AI systems legal personality is likely increasingly to be considered”, so watch this space.
To read the paper in full, click here.