Privacy rights group noyb has filed a complaint against OpenAI with Norway’s data protection authority, Datatilsynet, after ChatGPT falsely accused a Norwegian man of murdering his children — the latest and most disturbing example of “AI hallucinations” produced by the popular chatbot.
The case centers on Arve Hjalmar Holmen, who asked ChatGPT if it had any information about him. To his shock, the chatbot fabricated a detailed narrative alleging he had murdered two of his children and attempted to kill a third. The AI-generated story was not only entirely false, but included real personal details, such as the number and gender of his children and the name of his hometown — a combination that gave the hallucination an alarming sense of credibility.
“The fact that someone could read this output and believe it is true is what scares me the most,” Holmen said in a statement released by noyb.
This isn’t an isolated incident. Other cases have emerged in which ChatGPT falsely accused people of sexual harassment, bribery, or abuse. OpenAI has responded with a disclaimer that the tool “may produce false results” and should not be relied on for factual information. However, noyb argues that such disclaimers are not enough to comply with the European Union’s General Data Protection Regulation (GDPR).
“The GDPR is clear,” said noyb lawyer Joakim Söderberg. “Personal data must be accurate. A disclaimer doesn’t make it okay to spread false information.”
Under Article 5(1)(d) of the GDPR, organizations are required to ensure the accuracy of the personal data they process. According to noyb, OpenAI has stated it cannot correct such hallucinated data, only block certain prompts. This means the false information can persist within the model itself, potentially resurfacing in future outputs.
Holmen’s case underscores broader concerns about the reliability and accountability of generative AI systems. While OpenAI has updated ChatGPT to incorporate real-time web searches, making it less likely to generate false biographical data, noyb notes the original misinformation may still remain embedded in the system.
The complaint urges Datatilsynet to order OpenAI to delete the defamatory output, retrain the model to prevent future inaccuracies, and impose administrative fines to deter future violations. “AI companies must stop acting like the GDPR doesn’t apply to them,” added noyb data protection lawyer Kleanthi Sardeli. “It absolutely does.”
Need Help?
If you’re wondering how the EU’s GDPR, or any other government’s bill or regulation, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance while answering your questions and concerns.