AI in Arbitration: Chartered Institute of Arbitrators publishes guidance

The Chartered Institute of Arbitrators (Ciarb) has published a Guideline on the use of AI in arbitration.

The use of AI in legal proceedings has been a subject of considerable debate ever since sophisticated large language models (LLMs) emerged. The potential benefits of the technology are well-rehearsed: AI could assist in everything from data analysis and research to translation and transcription. Even some judges have spoken enthusiastically about its ability to help them with the drafting of judgments.

However, concerns about potential risks are just as numerous, including worries about matters such as confidentiality, data security, and bias. That is not to mention the phenomenon of hallucinations, in which AI models churn out incorrect information, something that occurred in a now notorious case in New York in which ChatGPT was used to help prepare a brief and proceeded to cite a number of made-up cases.

A settled set of guidelines on the use of AI within the legal sector and in proceedings has largely remained elusive, perhaps because the speed of the technological advances means that any such guidelines could be quickly rendered obsolete. That said, guidance has been issued for judicial office holders on the “responsible use of AI in Courts and Tribunals”, which did not prohibit the use of LLMs but offered words of advice if they were to be employed, and cautioned that judges would ultimately be personally responsible for the material produced in their name. Similarly, the Law Society and Bar Standards Board have produced guides on the use of generative AI tools in practice.

In the world of arbitration, it was unsurprisingly the Silicon Valley Arbitration and Mediation Centre that produced the first set of guidelines that sought to establish a uniform set of standards on the use of AI in international arbitration, addressing “both current and future applications of artificial intelligence from a principled framework, while also bearing in mind that the technology will continue to evolve rapidly”.

The Ciarb has now published its own Guideline which aims to “give guidance on the use of AI in a manner that allows dispute resolvers, parties, their representatives, and other participants to take advantage of the benefits of AI, while supporting practical efforts to mitigate some of the risk to the integrity of the process, any party’s procedural rights, and the enforceability of any ensuing award or settlement agreement”.

Like other such guidance, Ciarb points out that this Guideline does not “supersede any applicable laws, regulations or policies, or institutional rules related to the use of AI in an arbitration”. Nevertheless, it provides a series of recommendations about how AI could be integrated into an arbitration, whilst ensuring that the potential risks are understood and mitigated. For example, it states that arbitrators may regulate the use of AI by parties “with a view to preserve the integrity of arbitral proceedings” and should “ascertain whether and how the parties provided for the use of AI in their arbitration agreement”. Guidance is also provided on ruling on the use and admissibility of AI-generated material, on how AI is used in the disclosure process, and on how arbitrators themselves may use AI.

The Guideline is also accompanied by both a template agreement and a procedural order on the use of AI in arbitration.

To read the Guideline in full, click here.