
GDPR Complaints Escalate: ChatGPT Hallucinations and Twitter's AI Training Under Fire

AIGovHub Editorial · March 28, 2026

What Happened: Key GDPR Complaints Against AI Systems

Two high-profile cases have intensified scrutiny of AI systems under the General Data Protection Regulation (GDPR), Regulation (EU) 2016/679, which has been in effect since 25 May 2018. These incidents demonstrate how AI development can clash with fundamental data protection principles.

ChatGPT's Hallucinations and Data Accuracy Violations

OpenAI's ChatGPT generated false and defamatory personal information about individuals, including a Norwegian user falsely accused of murder. The privacy organization noyb filed a complaint with Norwegian authorities, arguing that this violates GDPR's data accuracy principle under Article 5(1)(d). OpenAI responded by adding disclaimers about potential inaccuracies and implementing internet searches for verification, but stated that it cannot correct false data internally; it can only block the data from appearing in responses to certain prompts. This raises serious compliance concerns, as GDPR requires personal data to be accurate and, where necessary, kept up to date.

Twitter's Unauthorized AI Training Data Processing

Twitter International (rebranded as X) used the personal data of over 60 million EU/EEA users to train its AI technology 'Grok' without obtaining consent or providing prior notification. The Irish Data Protection Commission (DPC) initiated court proceedings, though critics note that the focus has been on procedural issues rather than core GDPR violations. In response, noyb filed nine GDPR complaints across Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Spain, and Poland to press for comprehensive enforcement. Twitter claimed 'legitimate interest' as its legal basis, an approach previously rejected by courts in Meta's case, highlighting the tension between AI innovation and data protection rights.

Why It Matters: GDPR and AI Governance Implications

These cases underscore critical compliance risks as AI systems become more integrated into business operations. Under GDPR, organizations must adhere to key principles: lawfulness, fairness, transparency, data minimization, accuracy, and accountability. The incidents reveal specific violations:

  • Data Accuracy (Article 5(1)(d)): ChatGPT's hallucinations demonstrate how AI-generated false personal data can breach accuracy requirements, leading to reputational harm and potential penalties of up to EUR 20 million or 4% of global annual turnover.
  • Lawful Basis for Processing (Article 6): Twitter's use of personal data for AI training without consent challenges lawful processing grounds. Consent must be freely given, specific, informed, and unambiguous, while 'legitimate interest' requires balancing tests that may not justify large-scale AI training.
  • Automated Decision-Making (Article 22): GDPR provides rights related to automated processing, including profiling, which AI systems often involve. Organizations must ensure meaningful human oversight and transparency.
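To make the three risk areas above concrete, here is a minimal sketch of how an internal tooling team might flag processing activities against them. This is illustrative only, not legal tooling: the record fields and flag rules are assumptions layered on the articles named above, not a standard schema.

```python
from dataclasses import dataclass

# Hypothetical record of one AI-related processing activity.
# Field names are illustrative, not an official GDPR schema.
@dataclass
class ProcessingActivity:
    purpose: str
    lawful_basis: str            # e.g. "consent" or "legitimate_interest" (Art. 6)
    involves_personal_data: bool
    accuracy_review_done: bool   # Art. 5(1)(d) accuracy checks performed
    automated_decisions: bool    # would trigger Art. 22 safeguards
    human_oversight: bool

def compliance_flags(a: ProcessingActivity) -> list[str]:
    """Return rough red flags for the three GDPR risk areas discussed above."""
    flags = []
    if a.involves_personal_data and a.lawful_basis == "legitimate_interest":
        flags.append("Art. 6: legitimate interest requires a documented balancing test")
    if a.involves_personal_data and not a.accuracy_review_done:
        flags.append("Art. 5(1)(d): no accuracy review for AI-generated personal data")
    if a.automated_decisions and not a.human_oversight:
        flags.append("Art. 22: automated decision-making without human oversight")
    return flags

activity = ProcessingActivity(
    purpose="LLM training on user posts",
    lawful_basis="legitimate_interest",
    involves_personal_data=True,
    accuracy_review_done=False,
    automated_decisions=False,
    human_oversight=False,
)
for flag in compliance_flags(activity):
    print(flag)
```

A real register of processing activities would of course need far richer fields (retention periods, data categories, recipients), but even a crude check like this surfaces the two issues at the heart of the Twitter and ChatGPT complaints: an unvalidated lawful basis and unreviewed accuracy.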

These issues are compounded by the EU AI Act, Regulation (EU) 2024/1689, which entered into force on 1 August 2024. AI systems used in recruitment and HR are classified as high-risk under Annex III, with obligations applying from 2 August 2026. The AI Act emphasizes risk management, transparency, and human oversight, aligning with GDPR's data protection principles. For example, the EU AI Office oversees general-purpose AI models, coordinating enforcement that may intersect with data privacy authorities.
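The timeline logic above is easy to encode. The following sketch captures the two dates stated in this article and a deliberately partial, illustrative sample of Annex III categories; it is an assumption-laden toy, not a complete classification of the AI Act.

```python
from datetime import date

# Milestones as stated in the article.
AI_ACT_IN_FORCE = date(2024, 8, 1)
HIGH_RISK_OBLIGATIONS_APPLY = date(2026, 8, 2)

# Partial, illustrative sample of Annex III use-case areas (not exhaustive).
ANNEX_III_EXAMPLES = {"recruitment", "hr", "education", "credit_scoring"}

def is_high_risk(use_case: str) -> bool:
    """Very rough high-risk check against the sample categories above."""
    return use_case.lower() in ANNEX_III_EXAMPLES

def obligations_active(use_case: str, today: date) -> bool:
    """True once high-risk obligations apply to this use case."""
    return is_high_risk(use_case) and today >= HIGH_RISK_OBLIGATIONS_APPLY

print(is_high_risk("recruitment"))                        # True
print(obligations_active("recruitment", date(2026, 9, 1)))  # True
print(obligations_active("recruitment", date(2025, 1, 1)))  # False: before 2 Aug 2026
```

A genuine classification would require mapping each system against the full Annex III text and the Act's exemptions; the point of the sketch is only that the compliance deadline is date-driven and can be tracked programmatically.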

Furthermore, AI hallucinations pose operational risks beyond compliance. False outputs can lead to erroneous decisions, legal liabilities, and loss of trust. As seen in AI truth crises, verifying AI-generated content is a growing challenge for enterprises.

What Organizations Should Do: Actionable Compliance Steps

To mitigate GDPR and AI Act risks, businesses should adopt proactive governance measures. Here are key action items:

  1. Implement Robust AI Governance Frameworks: Adopt frameworks like the NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, which includes core functions: Govern, Map, Measure, and Manage. Consider certifiable standards like ISO/IEC 42001, published in December 2023, for AI Management Systems. Tools like Holistic AI or Credo AI can help automate governance, but organizations should evaluate options based on their needs. For comparisons, see AIGovHub's vendor analysis.
  2. Ensure Data Protection by Design: Integrate GDPR principles into AI development. Conduct Data Protection Impact Assessments (DPIAs) for high-risk processing, as required by Article 35 GDPR. For AI training, obtain explicit consent or validate another lawful basis, avoiding reliance on 'legitimate interest' without a rigorous balancing assessment. Incorporate privacy safeguards whenever AI systems are modified or retrained.
  3. Enhance Transparency and Accuracy Controls: Disclose AI usage to users, as mandated by GDPR's transparency principle. Implement mechanisms to monitor and correct inaccurate personal data generated by AI, moving beyond simple blocking. For instance, use verification tools or human review loops, as discussed in AI assessment strategies.
  4. Prepare for Cross-Regulatory Compliance: Align AI governance with both GDPR and the EU AI Act. For high-risk AI systems, plan for obligations effective from 2 August 2026, including risk management and documentation. Stay updated on enforcement actions, such as those by the EU AI Office, and monitor similar incidents like TikTok's DSA breaches for lessons.
  5. Leverage Compliance Intelligence Platforms: Use platforms like AIGovHub to track regulatory changes, assess vendor solutions, and implement best practices. For example, reference comprehensive AI governance guides to navigate evolving requirements.
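The five steps above can be tracked as a simple checklist organized around the NIST AI RMF core functions named in step 1. This is a minimal sketch; the task names are illustrative assumptions, not prescribed RMF activities.

```python
# Governance checklist grouped by the NIST AI RMF core functions
# (Govern, Map, Measure, Manage). Task wording is illustrative only.
CHECKLIST = {
    "Govern": ["Assign AI accountability owner", "Adopt AI governance policy"],
    "Map": ["Inventory AI systems", "Classify risk under EU AI Act Annex III"],
    "Measure": ["Run DPIA for high-risk processing", "Monitor output accuracy"],
    "Manage": ["Set up human review loop", "Track regulatory deadlines"],
}

def open_items(done: set[str]) -> dict[str, list[str]]:
    """Return the tasks per RMF function that are not yet completed."""
    return {
        function: [task for task in tasks if task not in done]
        for function, tasks in CHECKLIST.items()
    }

completed = {"Adopt AI governance policy", "Inventory AI systems"}
remaining = open_items(completed)
print(sum(len(tasks) for tasks in remaining.values()))  # 6 tasks still open
```

Even this crude structure makes the cross-regulatory point from step 4 visible: GDPR-driven tasks (the DPIA, the human review loop) and AI Act-driven tasks (Annex III classification, deadline tracking) end up in the same register and can be reviewed together.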

This content is for informational purposes only and does not constitute legal advice. Organizations should verify current regulatory timelines and consult legal experts for specific compliance needs.