The UK ICO has published its AI and data protection risk toolkit (the “Toolkit”). The Toolkit is designed to provide practical support to organisations using AI systems which may involve the processing of personal data. It builds on the ICO’s earlier guidance on AI and data protection, published in July 2020.
The ICO recognises that there can be significant risks to the rights and freedoms of individuals where AI systems make use of personal data. The Toolkit is designed to help organisations identify and mitigate potential issues, so that AI systems can be designed and operated in a way that is consistent with the principles in the UK GDPR. The ICO fully recognises there is no ‘one size fits all’ approach, as risks in relation to AI are “heavily context-dependent, and vary significantly across the diverse range of sectors, technologies and organisation types covered by data protection legislation.”
Summary of key points:
- The Toolkit is divided into the high-level stages of the AI lifecycle, providing a helpful guide to the risks and suggested controls that should be considered at each stage.
- Each risk area is aligned to the key principles contained in the UK GDPR (i.e., accountability, data minimisation, fairness, transparency, purpose limitation etc.).
- Practical steps are provided, covering both the mandatory measures and the best practice measures that reduce risks to fundamental rights and freedoms and increase the likelihood of compliance with data protection law. For each risk area, the Toolkit provides additional cells allowing organisations to summarise their assessment of the risk and describe any practical steps that will be taken to reduce it.
- Once the risks have been identified, the Toolkit allows organisations to select an inherent risk rating of ‘high’, ‘medium’ or ‘low’. After the suggested practical steps have been taken, organisations can then re-assess the risk and include a residual risk rating. As the assessment of risks will vary depending on the context, the Toolkit requires organisations to undertake their own assessments of the risks identified.
- The identified risks and mitigation steps should be documented to help demonstrate compliance with the legislation. In its webinar accompanying the Toolkit, the ICO confirmed that organisations are under no obligation to make the results of the Toolkit available to data subjects. However, the ICO suggested that publishing “a version of the risk Toolkit” will assist in demonstrating a commitment to transparency.
- The ICO is clear that the Toolkit does not replace the requirement to carry out a Data Protection Impact Assessment (DPIA). The aim of the Toolkit – which focusses on AI-specific risks – is to complement DPIAs, by helping organisations to identify those risks and the steps to mitigate them, and to incorporate these into the DPIA.
- Using the Toolkit is optional and the ICO has confirmed that organisations will not be penalised for not using it.
AI has previously been identified by the ICO as one of its “top three strategic priorities”, and given the continuing legislative developments in this area, it is clear that the ICO’s focus on AI will continue.
Although the Toolkit is primarily focussed on AI based on machine learning, the ICO has stated that it intends to expand its scope in later versions. Given the attention this area of compliance continues to attract from regulators, the Toolkit provides some helpful practical steps that businesses can take to mitigate data protection risks when using AI systems.
For further information and advice, please get in touch with your usual DLA Piper contact.
#PracticalGlobalPrivacy