Organisations are increasingly turning to AI-enabled tools throughout the recruitment lifecycle, from CV filtering and suitability scoring to online assessments and behavioural analysis. These tools can offer real advantages, including faster hiring processes and the potential to reduce human bias that inevitably exists in traditional recruitment. However, their use often creates a tension with data protection principles that restrict decision-making based solely on automated processing.

On 31 March 2026, the Information Commissioner’s Office (ICO) published a report and draft guidance on the use of automated decision-making (ADM) in recruitment. The ICO’s report draws on evidence gathered from over 30 employers, as well as “public perceptions research” capturing views from graduates, civil society, government, trade unions, and industry bodies.

One of the key headlines identified by the ICO is that many employers do not recognise that they are carrying out ADM. As a result, they fail to ensure that sufficient safeguards are in place, such as transparency, bias monitoring, accountability, and data subject rights. The message from the ICO is clear: it expects employers to follow the guidance set out in the report, and this should be treated as a strong signal that enforcement action may follow where organisations fall short.

Summary of the ICO’s key findings:

  • The ICO concludes that although most employers believed their use of automated recruitment tools amounted to decision support rather than decision-making, in practice the tools were being used to make solely automated decisions without meaningful human involvement. The ICO is clear that human involvement must be meaningful and active; it cannot be a token gesture or a rubber stamp of an automated outcome. The test is whether a human can exercise real influence over a decision before it is applied and has the authority, discretion and competence to alter it. Where that standard is not met, the process is treated as solely automated, regardless of whether a person nominally sits in the chain.
  • The Data (Use and Access) Act 2025 (DUAA) represents a welcome development for employers navigating automated decision-making in recruitment. Under the previous framework established by Article 22 of the UK GDPR, automated decision-making was framed as a general prohibition subject to narrow exceptions. The DUAA reframes this as a right of challenge with safeguards, giving employers a greater opportunity to use automated tools in recruitment decisions, provided that appropriate protections are in place. It is important to note, however, that where special category data is in scope, the previous, more restrictive rules continue to apply.
  • The ICO sets out two paths for employers:

    • accept that there is no meaningful human involvement in the process, acknowledge that the organisation is carrying out automated decision-making, and adopt the required safeguards (see below); or
    • implement processes that ensure meaningful human involvement in each decision about each candidate. Given that this is a high bar, particularly for organisations processing large volumes of applications, many employers will in practice need to follow the first path.
  • For those employers carrying out solely automated decision-making, the ICO’s report makes clear that a number of steps are required:

    • employers must establish a lawful basis for the processing. The DUAA helpfully removes the previous restriction that limited available lawful bases to consent or contractual necessity in a recruitment context (provided no special category data is involved), opening the door to reliance on legitimate interests;
    • employers must provide meaningful transparency at the right time, clearly explaining the logic involved in the automated processing and its likely consequences. A brief reference buried within a general privacy notice is unlikely to meet this standard;
    • employers must adopt appropriate safeguards. Candidates must be informed of the automated processing, given an opportunity to make representations, and able both to obtain human review of a decision and to contest the outcome;
    • employers should conduct fairness testing and bias reviews: interrogating developers about their own bias testing as part of procurement, running trials to check that the tools do not produce biased results, and monitoring their tools and outputs on an ongoing basis (a simple illustration of such a check follows this list). The ICO also expects employers to provide transparent information on the accuracy and performance of the tools they use; and
    • employers must carry out data protection impact assessments (DPIAs). The ICO found that many existing DPIAs lacked the detail and specificity needed to comply with the law.
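
By way of illustration only, the sketch below shows one simple form a bias-monitoring check of the kind described above might take: computing selection rates by demographic group from screening outcomes and flagging disparities using the “four-fifths rule”, a common adverse-impact heuristic. The data, group labels, and 0.8 threshold are hypothetical, and the ICO’s report does not prescribe any particular metric.

```python
# Minimal sketch of an ongoing bias-monitoring check for an automated
# screening tool. Assumes outcome data is available as (group, passed)
# records; the four-fifths rule used here is a common heuristic for
# adverse impact, not a standard mandated by the ICO.

from collections import defaultdict

def selection_rates(outcomes):
    """Compute the pass rate for each demographic group.

    outcomes: iterable of (group, passed) tuples, where passed is a bool.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [passes, total]
    for group, passed in outcomes:
        counts[group][0] += int(passed)
        counts[group][1] += 1
    return {g: p / t for g, (p, t) in counts.items()}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the "four-fifths rule" heuristic)."""
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

# Hypothetical screening outcomes from a trial run of an automated tool.
outcomes = [("Group A", True), ("Group A", True), ("Group A", False),
            ("Group B", True), ("Group B", False), ("Group B", False)]

rates = selection_rates(outcomes)
print(rates)                        # approx. {'Group A': 0.67, 'Group B': 0.33}
print(adverse_impact_flags(rates))  # {'Group A': False, 'Group B': True}
                                    # -> Group B warrants further review
```

In practice, a check of this kind would form part of the trials and ongoing monitoring the ICO describes, alongside the procurement-stage due diligence on the developer’s own bias testing.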

What this means for organisations using ADM in recruitment

Organisations should review their adoption of AI tools in recruitment and assess whether any process involves solely automated decision-making, for example where suitability scoring, CV filtering, or behavioural assessments are in use.

DLA Piper supports clients on compliant implementation of ADM tools in recruitment, including assisting with: the development of due diligence protocols to properly assess developer tools for fairness and bias; drafting transparency information that meets the ICO’s expectations at each stage of the recruitment lifecycle; and conducting robust DPIAs and legitimate interests assessments that provide the specificity the ICO demands.

DLA Piper also assists clients with the development of processes for handling candidate objections and requests for human review. In an environment where awareness of data subject rights is growing and individuals are increasingly willing to exercise those rights, often using AI-generated requests, this is an area that should not be overlooked. The ICO’s guidance is notably light on the operational detail of how to manage these requests in practice, and this is an area where practical legal support is particularly valuable.

Finally, for clients operating across multiple jurisdictions, DLA Piper leverages its global network of legal experts to help align approaches to automated decision-making, for example across the UK and EU, where the requirements differ. In the EU, not only the GDPR but also the AI Act will need to be considered, and the interaction between these two frameworks adds a further layer of complexity.