AI tools in recruitment: ICO publishes recommendations

The Information Commissioner’s Office (“ICO”) has published a series of recommendations for organisations thinking of employing AI tools to help with recruitment.

As the ICO explains, AI tools are becoming increasingly popular as a way for organisations to identify and sift through potential candidates, from summarising and scoring applications and CVs to evaluating a candidate’s language, tone and content in video interviews to predict their personality type.

However, the use of such tools brings risks from a data protection standpoint, risks that were borne out when the ICO conducted an audit of developers and providers of AI-powered sourcing, screening, and selection tools used in recruitment. The report of that audit (published here) identifies examples of some tools collecting more information than was needed, risks of discrimination, and the use of “vague or unclear” contracts that were deliberately broad and sought to “pass all responsibility for compliance to recruiters using their tool”.

Whilst the ICO noted many “encouraging practices”, it has nonetheless identified a series of key recommendations that both AI providers and recruiters should take into account:

  1. Fairness
  • AI providers and recruiters must monitor for potential or actual “fairness, accuracy, or bias issues in the AI and its outputs” and take action to address any issues they identify. The ICO is clear that “accuracy being better than random is not enough to demonstrate that AI is processing personal information fairly”.
  • AI providers and recruiters must make sure that any special category data which is processed to monitor for bias and discriminatory outputs is “adequate and accurate enough to effectively fulfil this purpose”.
  2. Transparency and explainability
  • Recruiters must inform candidates how they will process their information using the relevant AI tool, either by providing detailed privacy information themselves or ensuring that the AI provider does so. The privacy information should explain: (a) what personal information is processed by AI and how; (b) the logic involved in making predictions or producing outputs; and (c) how the personal information is used for training, testing, or otherwise developing the AI.
  • AI providers should provide relevant technical information about the AI logic to the recruiter.
  • AI providers and recruiters should ensure that their contracts clearly stipulate which party is responsible for providing privacy information to candidates.
  3. Data minimisation and purpose limitation
  • AI providers should “comprehensively assess”: (a) the minimum personal information they require to develop, train, test, and operate each element of the AI tool; (b) how long they need the data for; and (c) the purpose for the processing and its compatibility with the original purpose for processing.
  • Recruiters should ensure that they collect only the minimum personal information needed to achieve the AI tool’s purpose, confirm that they process the data only for that purpose, and ensure that the data is not stored, shared, or reprocessed beyond that purpose.
  4. Data Protection Impact Assessments (“DPIAs”)
  • AI providers and recruiters must complete a DPIA early in the development of the AI tool and prior to processing, updating it as the tool develops and when processing changes.
  • Any such DPIA should include, among other things: (a) a comprehensive assessment of privacy risks to people as a result of personal information processing; (b) appropriate mitigating controls to reduce these risks; and (c) an analysis of trade-offs between people’s privacy and other competing interests.
  5. Data controller and processor roles
  • AI providers and recruiters must define whether the AI provider is the controller, joint controller, or a processor for each specific processing of personal information, and record this clearly in contracts and privacy information.
  6. Explicit processing instructions
  • Recruiters must set “explicit and comprehensive written processing instructions for the AI provider to follow when processing personal information on its behalf as a processor”, including – for example – the specific data fields and output required, and minimum safeguards to protect personal information.
  • Recruiters should also check whether AI providers are complying with their instructions and are not sharing or processing personal information for additional purposes.
  • AI providers must follow the recruiters’ explicit instructions and not retain, share, or process personal information beyond their instructions.
  7. Lawful basis and additional condition
  • AI providers and recruiters must both identify the lawful basis on which they rely for each instance of processing where they are controller (and identify an additional condition where special category data is processed). The lawful basis and condition should be documented, described in privacy information, and recorded in contracts.
  • If ‘legitimate interests’ is relied upon, a ‘legitimate interests assessment’ should be completed.
  • If ‘consent’ is relied upon, they must “ensure that consent is specific, granular, has a clear opt-in, appropriately logged and refreshed at periodic intervals, and as easy to withdraw as it was to give”.

Finally, echoing the recommendations above, the ICO also provides a checklist of “key questions to ask before procuring an AI tool for recruitment”, as follows:

  1. Have you completed a DPIA?
  2. What is your lawful basis for processing personal information?
  3. Have you documented responsibilities and set clear processing instructions?
  4. Have you checked the provider has mitigated bias?
  5. Is the AI tool being used transparently?
  6. How will you limit unnecessary processing?

To read more, click here.