AI Safety: New platform launched

The newly formed AI Safety Institute has launched a testing platform to evaluate the safety of AI models. The platform – called ‘Inspect’ – is a software library that allows those testing AI models to assess specific capabilities across a range of areas, such as a model’s ability to reason or its autonomous capabilities. Inspect then produces a score based on the results.
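Inspect is distributed as an open source Python package, inspect_ai. As a rough sketch of the workflow described above – define an evaluation task, run a model against it, and receive a score – the example below sets up a toy reasoning test. The task content and model name are invented for illustration, and the exact API may vary between versions of the library.

```python
# A minimal evaluation sketch using the open source inspect_ai package.
# Illustrative only: exact APIs may differ between Inspect versions.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def reasoning_check():
    # A toy dataset: each Sample pairs a prompt with its expected answer.
    return Task(
        dataset=[
            Sample(
                input="If Alice is older than Bob, and Bob is older than "
                      "Carol, who is youngest? Answer with one name.",
                target="Carol",
            ),
        ],
        solver=generate(),  # ask the model for a completion
        scorer=match(),     # score the output against the target answer
    )

# Run the evaluation against a model of your choice (model name is
# hypothetical); Inspect aggregates per-sample scores into an overall result.
eval(reasoning_check(), model="openai/gpt-4o")
```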

By making Inspect available globally and under an open source licence, the Government hopes the move will lead to “better safety testing and the development of more secure models…[allowing for] a consistent approach to AI safety evaluations around the world”.

The announcement also states that, in addition to the introduction of Inspect, the AI Safety Institute, Incubator for AI, and Number 10 will “bring together leading AI talent from a range of areas to rapidly test and develop new open-source AI safety tools”, with further details to be announced in due course.

Commenting on the new platform, the AI Safety Institute Chair, Ian Hogarth, said “I am proud that we are open sourcing our Inspect platform. Successful collaboration on AI safety testing means having a shared, accessible approach to evaluations, and we hope Inspect can be a building block for AI Safety Institutes, research organisations, and academia. We have been inspired by some of the leading open source AI developers – most notably projects like GPT-NeoX, OLMo or Pythia which all have publicly available training data and OSI-licensed training and evaluation code, model weights, and partially trained checkpoints. This is our effort to contribute back. We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open source platform so we can produce high-quality evaluations across the board.”

More details about the Inspect platform can be found on the AI Safety Institute’s website.