
CNIL Publishes New Recommendations for AI System Development and GDPR Compliance

Client Updates / July 29, 2024

Written by: Haim Ravia and Dotan Hammer

The French Data Protection Authority (CNIL) has released its second series of recommendations for AI system developers, emphasizing GDPR compliance. Building on the initial guidance issued last month, this new series aims to balance innovation with respect for individual rights.

The new recommendations cover the following areas:

  • Legitimate interest is the primary legal basis for AI development, but relying on it requires risk assessments and protective measures for individuals’ data. Qualifying interests include scientific research, system enhancements, and product improvements. CNIL also provides guidelines to ensure these activities do not infringe on individual rights, for example in the context of web scraping or open-source AI models.
  • Open-source software poses risks like malicious use and security issues. CNIL recommends incorporating these risks into the legitimate interest assessment, ensuring transparency, using restrictive licenses, implementing technical security measures, and protecting data subjects’ rights and information.
  • When using web scraping for AI, companies must follow principles of data minimization, predefined collection criteria, and data filtering. Safeguards should include avoiding intrusive sites, maintaining a “push-back list,” allowing objections, and employing anonymization. CNIL suggests creating a central register of companies using scraping tools to inform data subjects and enable the exercise of their GDPR rights.
  • Companies must inform data subjects about the use of their personal data for AI, including model-specific details, within a “reasonable period” before training. The information should be easily accessible, note applicable exceptions, and follow transparency practices such as publishing Data Protection Impact Assessments (DPIAs) and documentation.
  • Handling data subject rights in relation to training datasets and AI models includes providing access, copies of data, rectification, objection, and erasure. Companies should address challenges such as identifying data subjects and retraining models, while noting applicable exceptions.
  • Data annotation practices should minimize the impact on the rights and freedoms of individuals, ensure accuracy, maintain quality through protocols and ethical oversight, inform individuals, enable their rights, and consider sensitive data.
  • CNIL outlines a methodology for managing AI development security, including key objectives, AI-specific risk factors, and measures to achieve acceptable residual risk.

Additionally, CNIL has released a questionnaire to gather stakeholder input on the application of GDPR to AI models. The questionnaire seeks to clarify when AI models are considered anonymous, when they must comply with GDPR, and the implications of these classifications.

Click here to read CNIL’s second series of how-to sheets and complete the questionnaire on the development of artificial intelligence systems.

Click here to read CNIL’s initial recommendations on the development of artificial intelligence systems.
