28 countries agree AI declaration at UK AI Safety Summit

The Declaration, agreed at the UK AI Safety Summit held on 1-2 November 2023, begins by acknowledging the enormous opportunities offered by AI, which is already deployed across many domains of daily life, including housing, employment, transport, education, health, accessibility and justice. However, it states that AI also poses “significant risks” and must therefore be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible. In particular, the development of AI must address the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, human oversight, ethics, bias mitigation, and privacy and data protection.

The Declaration applies specifically to ‘frontier’ AI: highly capable general-purpose AI models, including foundation models (generally understood as models trained on very large amounts of data), that can perform a wide variety of tasks and that match or exceed the capabilities of today’s most advanced models, as well as relevant narrow AI that could exhibit capabilities causing harm. The Declaration warns of the potential for “serious, even catastrophic, harm”, whether deliberate or unintentional, stemming from the most significant capabilities of these models and, given the accelerating investment in the technology, states that action is urgently needed.

As many AI risks are international in nature, the countries agree to work through existing international fora and other relevant initiatives, including future AI Safety Summits, to promote cooperation, recognising the importance of a pro-innovation, proportionate approach to governance and regulation that weighs benefits as well as risks. Collaboration could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. The countries also resolve to intensify and sustain their cooperation, to broaden it to further countries, and to engage a wide range of partners, including nations, international fora and other initiatives, companies, civil society and academia, all of which have a role to play in ensuring safe AI.

The key focus will be on identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of those risks, and developing risk-based policies across countries to ensure safety in light of them, collaborating as appropriate while recognising that approaches may differ according to national circumstances and applicable legal frameworks. To further these aims, the countries have agreed to support an internationally inclusive network of scientific research on frontier AI safety, complementing existing collaborations, so that the best available science informs policy making and serves the public good.
