AI in the Public Sector: Private Members’ Bill introduced

A Private Members’ Bill that aims to regulate the use of AI in public sector decision-making has been introduced in the House of Lords.

The ‘Public Authority Algorithmic and Automated Decision-Making Bill’, introduced by Lord Clement-Jones, has a declared purpose of ensuring that “algorithmic and automated decision-making systems are deployed in a manner that accounts for and mitigates risks to individuals, public authorities, groups and society as a whole, and leads to efficient, fair, accurate, consistent, and interpretable decisions; and to make provision for an independent dispute resolution service”.

Under the Bill, public authorities would be required to complete an ‘Algorithmic Impact Assessment’ prior to the deployment of an algorithmic or automated decision-making system. Such assessments would set out: (a) a detailed description of the system; (b) the relative benefits and risks of the system, “including the risks to the privacy and security of personal information, risks to the safety of a service user or group of service users, and risks and likely impacts on employees of public authorities”; (c) an explanation of the steps being taken to minimise these risks; (d) independent external scrutiny of the efficacy and accuracy of the system; and (e) a mandatory bias assessment to ensure that the system complies with the Equality Act 2010 and the Human Rights Act 1998.

The Bill also contemplates public authorities completing an ‘Algorithmic Transparency Record’ which would include, among other things:

  1. a detailed description of the algorithmic or automated decision-making system;
  2. an explanation of the rationale for using the system;
  3. information on the technical specifications of the system;
  4. an explanation of how the system is used to inform administrative decisions concerning a service user or group of service users; and
  5. information on human oversight of the system.

In an attempt to address the so-called ‘black box’ problem, the Bill also stipulates that all systems must be “designed with logging capabilities enabling the automatic recording of events during operation” and that no public authority should deploy a system which is “incapable of scrutiny”. This includes systems where there are “practical barriers, including contractual or technical measures and intellectual property interests, limiting their effective assessment or monitoring of the algorithmic or automated decision-making system in relation to individual outputs or aggregate performance”.

Finally, the Bill requires public sector employees to receive training in the design, function, and risks of a system so that they can explain and oversee its operations. Those operating a system and applying the final decision must also have the authority and competence to challenge the system’s output, and the Bill anticipates the creation of a dispute resolution service which would enable decisions – or classes of decisions – made by such systems to be challenged.
