Do AI platforms comply with the ‘Right to be forgotten’?

The right to be forgotten is a key provision of the EU's GDPR. How will that rule apply to new AI systems?

The right to be forgotten, as outlined in Article 17 of the EU General Data Protection Regulation (GDPR), poses significant challenges for AI systems.

This right allows individuals to request the deletion of their personal data and to stop its further dissemination. When it comes to AI, however, the question arises of what happens to models that have already been trained on data now subject to erasure requests.

AI systems, particularly those using machine learning, rely on large datasets to train the models that drive their decisions. Deleting data on request can therefore affect both the performance and the behaviour of these systems: if many individuals exercise their right to erasure, the retrained model's outputs may shift measurably.
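
To make that retraining cost concrete, here is a minimal sketch, assuming a scikit-learn-style workflow, of what honouring erasure requests can look like: rows belonging to the requesting data subjects are dropped, the model is retrained, and accuracy before and after is compared. The file name, the subject_id and label columns, and the erasure list are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal sketch: honouring erasure requests by dropping the requesting
# subjects' rows and retraining. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("training_data.csv")          # assumed dataset with a 'subject_id' column
erasure_requests = {"user-102", "user-917"}    # IDs of people who invoked Article 17

# Remove every record belonging to the requesting data subjects.
remaining = df[~df["subject_id"].isin(erasure_requests)]

def fit_and_score(data: pd.DataFrame) -> float:
    X = data.drop(columns=["subject_id", "label"])
    y = data["label"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))

# Compare performance before and after erasure to quantify the shift.
print("accuracy before erasure:", fit_and_score(df))
print("accuracy after erasure: ", fit_and_score(remaining))
```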

Moreover, AI's "black box" nature complicates compliance with the right to be forgotten. It is often not clear how AI systems arrive at certain outputs based on the input data, making it difficult to ensure that all traces of an individual's data are removed.

Additionally, AI can create new personal data by combining existing datasets, which raises further questions about how to handle such inferred data under the right to be forgotten.

To address these challenges, common approaches include data minimisation, where only the data necessary for the purpose is collected, and pseudonymisation, where data is processed so that it can no longer be attributed to a specific data subject without additional information.
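
As an illustration of pseudonymisation, the sketch below replaces a direct identifier with a keyed hash, with the secret key held separately from the dataset so the pseudonym cannot be attributed to a person without that additional information. The field names and key handling are assumptions for the example, not a recommended production design.

```python
# Minimal pseudonymisation sketch: replace a direct identifier with a keyed hash.
# The secret key lives outside the dataset; destroying it later makes
# re-identification infeasible. Field names are illustrative only.
import hmac
import hashlib

SECRET_KEY = b"stored-in-a-separate-key-vault"   # assumption: managed outside this dataset

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym that cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_band": "30-39", "purchase_total": 182.50}
record["email"] = pseudonymise(record["email"])   # direct identifier replaced
print(record)
```

Because the same identifier always maps to the same pseudonym, records can still be linked for training purposes, while destroying the key later severs the link back to the person.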

Another method involves adding "noise" to data to preserve anonymity, although this can affect the accuracy of AI models.
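
One common way to formalise this noise-adding idea is differential privacy, where calibrated random noise is added to statistics or training updates so that no single person's data is identifiable from the output. The snippet below is a minimal, assumed example of a Laplace-noised mean; the epsilon value and clipping range are illustrative, and a real deployment would use a vetted privacy library rather than hand-rolled noise.

```python
# Minimal sketch of noise addition in the spirit of differential privacy:
# a Laplace-noised mean. Epsilon and the value range are illustrative choices.
import numpy as np

rng = np.random.default_rng(seed=42)

def noisy_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Mean with Laplace noise scaled to how much one record can move it."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)   # one person's maximum influence
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

ages = np.array([23, 35, 41, 29, 52, 38], dtype=float)
print("true mean: ", ages.mean())
print("noisy mean:", noisy_mean(ages, lower=18, upper=90, epsilon=0.5))
```

The trade-off mentioned above is visible here: a smaller epsilon means more noise and stronger anonymity but a less accurate result, and the same tension applies when noise is injected into model training.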

The GDPR's impact on AI is not limited to the right to be forgotten. It also includes principles such as data minimisation and purpose limitation, which can be at odds with the data-hungry nature of AI systems.

Companies must ensure that they have a lawful basis for processing personal data, provide transparency about their data practices, and implement safeguards to protect data subjects' rights.

In conclusion, the right to be forgotten under the GDPR presents a complex challenge for AI systems. It requires careful consideration of how personal data is collected, processed, and deleted, as well as the development of AI models that can adapt to the potential loss of data due to erasure requests.

What are the challenges of implementing the right to be forgotten in AI systems?

Implementing the right to be forgotten in AI systems presents several challenges:

  1. Data Erasure: AI systems, especially those using machine learning, are trained on large datasets that may include personal data. When an individual exercises their right to be forgotten, it is not always straightforward to remove their data from the model without retraining it, which can be costly and time-consuming.

  2. Model Retraining: If significant amounts of data are erased through multiple right-to-be-forgotten requests, the AI model's learned behaviour can change. Retraining models to adapt to the loss of data can be complex and resource-intensive (a sharded-training sketch that limits this retraining cost appears after this list).

  3. Black Box Nature: AI systems are often seen as "black boxes" with non-transparent decision-making processes. This makes it difficult to ensure that all traces of an individual's data are removed, as required by the right to be forgotten.

  4. Data Minimisation and Pseudonymisation: While these techniques can help with compliance, they may not be foolproof. Pseudonymisation can protect identity but may not be sufficient if the underlying data can still be linked to an individual. Data minimisation may also affect the performance of AI models.

  5. Legal and Technical Misalignment: Current laws may not fully account for the complexities of AI technology. The GDPR, for example, was not designed with AI's data-intensive nature in mind, leading to potential misalignments between legal requirements and technical capabilities.

  6. Interdisciplinary Challenges: Addressing the right to be forgotten in AI may require interdisciplinary research across fields such as law, cognitive science, and computer science to develop a more nuanced understanding of "forgetting" in the context of AI.

  7. Global Data Governance: The international nature of data processing and the varying privacy laws across jurisdictions add complexity to implementing the right to be forgotten in AI systems.

  8. Ethical and Bias Considerations: AI systems must be designed to avoid biases and ensure ethical use. This includes the ethical implications of data erasure and the potential for AI to perpetuate discrimination if not properly managed.
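
On the retraining point (items 1 and 2), one research direction often called machine unlearning is to train an ensemble on disjoint shards of the data, so that an erasure request only forces retraining of the shards that contained that person's records. The sketch below assumes a scikit-learn-style model, synthetic data, and hypothetical subject IDs; it illustrates the idea rather than any specific production system.

```python
# Sketch of sharded training in the spirit of "machine unlearning" research:
# each shard gets its own model, so erasing one subject retrains only the
# affected shards. Data shapes, shard count, and subject IDs are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_shards = 4

# Hypothetical dataset: features, labels, and the data subject behind each row.
X = rng.normal(size=(800, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
subject_ids = np.array([f"user-{i % 200}" for i in range(800)])
shard_of_row = np.arange(800) % n_shards

def train_shard(shard: int) -> LogisticRegression:
    mask = shard_of_row == shard
    return LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

models = [train_shard(s) for s in range(n_shards)]

def predict(x_row: np.ndarray) -> int:
    votes = [m.predict(x_row.reshape(1, -1))[0] for m in models]
    return int(round(float(np.mean(votes))))           # simple majority vote

def forget(subject: str) -> None:
    """Erase one subject's rows and retrain only the shards that held them."""
    global X, y, subject_ids, shard_of_row
    keep = subject_ids != subject
    affected = set(shard_of_row[~keep])
    X, y, subject_ids, shard_of_row = X[keep], y[keep], subject_ids[keep], shard_of_row[keep]
    for s in affected:
        models[s] = train_shard(s)

forget("user-42")
print("prediction after forgetting:", predict(rng.normal(size=5)))
```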

To address these challenges, a combination of technical solutions, such as stronger anonymisation and emerging machine-unlearning techniques that remove a data subject's influence from a trained model, alongside legal reform and clear guidance, may be necessary.

Additionally, transparency in AI processes and the development of AI systems that can adapt to data erasure requests without significant performance loss are crucial.

Transparency: this article was written by Justin Flitter, enhanced in Perplexity, then edited and reviewed before publishing.

Justin Flitter

Founder of NewZealand.AI.

http://unrivaled.co.nz