Information Commissioner’s Office updates Guidance on “Artificial Intelligence (AI) and Data Protection”

The ICO says that its Guidance on AI and Data Protection has been updated following requests from UK industry to clarify the requirements for fairness in AI. The update also delivers on a key ICO25 commitment to help organisations adopt new technologies while protecting people and vulnerable groups.

The ICO says that the update supports the Government’s vision of a pro-innovation approach to AI regulation and, more specifically, its intention to embed considerations of fairness into AI. The ICO supports the Government’s mission to ensure that the UK’s regulatory regime keeps pace with and responds to the new challenges and opportunities presented by AI.

For ease of use and given the foundational nature of data protection principles, the ICO has restructured the Guidance, moving some of the existing content into new chapters:

  • What are the accountability and governance implications of AI? — the ICO has produced new content on what to consider when undertaking a Data Protection Impact Assessment;
  • How do we ensure transparency in AI? — this is a new chapter with new content on the transparency principle as it applies to AI; the main guidance on transparency is contained in the ICO’s existing Guidance on “Explaining decisions made with AI”;
  • How do we ensure lawfulness in AI? — this is a new chapter with old content moved from the previous chapter entitled “What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?”, and new content on AI and inferences, affinity groups and special category data;
  • What do we need to know about accuracy and statistical accuracy? — this is a new chapter with old content; following the restructuring of the data protection principles, the statistical accuracy content, which used to be in the chapter entitled “What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?”, has moved into a new chapter that focuses on the accuracy principle;
  • Fairness in AI — this is a new chapter with new and old content; the old content comes from the chapter previously entitled “What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?” and the new content includes information on:
    • data protection’s approach to fairness, how it applies to AI and a non-exhaustive list of legal provisions to consider;
    • the difference between fairness, algorithmic fairness, bias and discrimination;
    • high level considerations when thinking about evaluating fairness and inherent trade-offs;
    • processing personal data for bias mitigation;
    • technical approaches to mitigate algorithmic bias; and
    • how solely automated decision-making and relevant safeguards are linked to fairness, including key questions to ask when considering Article 22 of the UK GDPR; and
  • Annex A: Fairness in the AI lifecycle — this is a new chapter with new content; this section concerns data protection fairness considerations across the AI lifecycle, from problem formulation to decommissioning; it sets out why the fundamental aspects of building AI, such as underlying assumptions, the abstractions used to model a problem, the selection of target variables or the tendency to over-rely on quantifiable proxies, may have an impact on fairness; the chapter also explains the different sources of bias that can lead to unfairness and possible mitigation measures; technical terms are explained in the updated glossary; and
  • Glossary — this is an old chapter with old and new content; new additions include definitions of affinity groups, algorithmic fairness, algorithmic fairness constraints, bias mitigation algorithm, causality, confidence interval, correlation, cost function, dataset labellers, decision, construct and observed space, decision boundary, decision tree, downstream effects, ground truth, inductive bias, in-processing, hyperparameters, multi-criteria optimisation, objective function, post-processing bias mitigation, regularisation, redundant encodings, reward function, use case, target variable and variance.

Acknowledging the fast pace of technological development, the ICO believes that further updates to the Guidance will be required in the future. The updated Guidance is available on the ICO’s website.