On 15th March 2023, the UK Information Commissioner’s Office (“ICO”) issued updated Guidance on Artificial Intelligence and Data Protection. The updated Guidance follows “requests from UK industry to clarify requirements for fairness in AI” and aims to support the UK government’s vision of a “pro-innovation approach to AI regulation” and, more specifically, its intention to “embed considerations of fairness into AI”.
The Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law as it applies to AI systems that process personal data. The updated Guidance has been restructured in line with the data protection principles and includes new content on fairness, transparency, lawfulness and accountability when using AI systems. It also introduces new definitions, including ‘affinity groups’, ‘algorithmic fairness constraints’ and ‘bias mitigation algorithm’, among others, to assist with clarifying data protection requirements when using AI systems.
We have summarised the key updates below:
- Accountability and Governance – new content has been added to address the accountability and governance implications of AI and, in particular, what organisations should consider when conducting a Data Protection Impact Assessment (“DPIA”) for the use of AI systems. The updated Guidance states that when conducting a DPIA, organisations should include evidence demonstrating that “less risky alternatives” were considered, together with the reasoning for why those alternatives were not chosen. When considering the impact of the processing on individuals, the Guidance also states that organisations must consider both allocative harms – i.e. harms resulting from a decision to allocate goods and opportunities among a group; and representational harms – i.e. harms occurring when systems reinforce the subordination of groups along identity lines.
- Transparency in AI – A new, standalone chapter has been added and is to be read in conjunction with the ICO’s existing Explaining Decisions Made with AI product. The new chapter contains high-level content on the transparency principle as it applies to AI, including, for example, confirmation that where data is collected directly from individuals, privacy information must be provided to those individuals before the data is used to train a model or to apply that model to those individuals.
- Lawfulness in AI – A new chapter (which includes some old content – moved from the previous chapter titled ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’) has been added regarding lawfulness in AI. New sections have been included in this chapter, relating to AI and inferences, affinity groups and special category data. In relation to using AI systems to make inferences, the updated Guidance states that it may be possible to infer or guess details about someone which fall within the special categories of data. Whether or not this counts as special category data and triggers Article 9 of the UK GDPR depends on how certain that inference is, and whether that inference is drawn deliberately. The inference is likely to be special category data if the use of AI results in the ability to infer relevant information about an individual, or there is an intention to treat someone differently on the basis of the inference. In relation to affinity groups, the Guidance is clear that where an AI system involves making inferences about a group – creating ‘affinity groups’ – and linking these to a specific individual, then data protection law applies at multiple stages of the processing. This includes both the development stage, involving processing of individuals’ personal data to train the model; and the deployment stage, where the results of the model are applied, on the basis of its predictive features, to other individuals who were not part of the training dataset.
- Fairness in AI – A new chapter (which includes some old content – moved from the previous chapter titled ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’) has been added regarding fairness in AI. The new content includes information on:
- Data protection’s approach to fairness, how it applies to AI and a non-exhaustive list of legal provisions to consider.
- The difference between fairness, algorithmic fairness, bias and discrimination.
- High level considerations when thinking about evaluating fairness and inherent trade-offs.
- Processing personal data for bias mitigation.
- Technical approaches to mitigate algorithmic bias.
- How solely automated decision-making and relevant safeguards are linked to fairness, and key questions to ask when considering Article 22 of the UK GDPR.
- Annex A: Fairness in the AI lifecycle – A new annex has been included, relating to data protection fairness considerations across the AI lifecycle, from problem formulation to decommissioning. It sets out why aspects of building AI – such as underlying assumptions, abstractions used to model a problem, the selection of target variables or the tendency to over-rely on quantifiable proxies – may have an impact on fairness. This annex also explains the different sources of bias that can lead to unfairness, as well as possible mitigation measures.
The ICO has acknowledged the fast pace of technological development in AI and has stated that further updates to the Guidance will be required in the future. The ICO hopes that the new structure of the Guidance, with the data protection principles at its core, will assist with any future updates.
For further information, please contact your usual DLA Piper lawyer.