Authors: Carolyn Bigg, Lauren Hurcombe and Yue Lin Lee.

On 18 July 2023, Singapore’s Personal Data Protection Commission (“PDPC”) issued for public consultation a set of proposed guidelines for the use of personal data in AI recommendation and decision systems (“Proposed Guidelines”). The public consultation is open until 31 August 2023.

The Proposed Guidelines aim to clarify the application of the Singapore Personal Data Protection Act (“PDPA”) in the context of developing and deploying AI systems involving the use of personal data for making recommendations or predictions for human decision-makers or autonomous decision-making.

Key takeaways for businesses:

  1. Exceptions to consent may apply: Under the PDPA, businesses are required to obtain consent for the collection and use of personal data unless deemed consent or an exception applies. The Proposed Guidelines clarify that an organisation may rely on the Business Improvement Exception when developing a new product, enhancing an existing one, or using an AI system to boost operational efficiency and offer personalised services. This also extends to data sharing within company groups for these purposes. Relevant applications include social media recommendation engines and AI systems enhancing product competitiveness.

In addition, an organisation may consider relying on the Research Exception when it conducts commercial research to advance science and engineering without a product development plan. This includes collaborative research with other companies. For the Research Exception to apply, several conditions must be met, including that data in individually identifiable form is essential for the research and that there is a clear public benefit. However, it may be difficult for organisations to rely on this exception, given the traditionally high threshold for establishing a public benefit.

  2. Consent and Notification Obligations continue to apply: If relying on consent instead of an exception under the PDPA, organisations should craft consent language that enables individuals to give meaningful consent. The Proposed Guidelines highlight that the consent need not be overly technical or detailed, but should be proportionate having regard to the potential harm to the individual and the level of autonomy of the AI system. For example, a social media platform providing personalised content recommendations should explain why specific content is shown and the factors affecting the ranking of posts (e.g., past user interactions or group memberships).
  3. Navigating B2B AI deployments: Where businesses engage professional service providers to provide bespoke or fully customisable AI systems, such service providers may be acting as data intermediaries / data processors and are subject to obligations under the PDPA in relation to the protection and retention of personal data. To support businesses in meeting their consent, notification and accountability obligations, service providers should adopt practices such as data mapping and labelling at the pre-processing stage and maintaining training data records. Service providers should familiarise themselves with the information needed to meet their customers' PDPA obligations and design systems to facilitate the extraction of information relevant to these obligations. In addition, organisations should undertake a data protection impact assessment when deploying, using or designing AI systems.

Our observations

The Proposed Guidelines build on the PDPC’s existing Model AI Governance Framework (first released in 2019 and updated in 2020), and are in line with Singapore’s pro-innovation, business-friendly approach in developing AI in a lawful but pragmatic way.

In recent months, the APAC region has seen a trend of businesses harnessing data to develop and deploy AI systems, fuelled by pro-innovation and pro-collaboration regulations across the region, such as the new generative AI measures in China. While countries in the region are each developing their own approach to AI regulation, a common thread is the recognition of the pivotal role that data plays in powering AI solutions.

The Draft Advisory Guidelines on use of Personal Data in AI Recommendation and Decision Systems may be accessed here.

To find out more on AI and AI laws and regulations, visit DLA Piper’s Focus on Artificial Intelligence page and Technology’s Legal Edge blog.

Please contact Carolyn Bigg (Partner), Lauren Hurcombe (Partner) or Yue Lin Lee (Senior Associate) if you have any questions or to see what this means for your organisation.