Privacy Rights Group Files Complaint Against OpenAI

Noyb, a privacy rights group based in Austria, has filed a complaint against OpenAI after a serious error by ChatGPT, the company's AI chatbot: the system falsely accused a Norwegian man of murder. Noyb says that by sharing incorrect personal information, OpenAI violated European data protection law.

The False Murder Accusation

Arve Hjalmar Holmen, a man from Norway, asked ChatGPT what it knew about him. The AI responded with a completely fabricated story: it claimed Holmen had been convicted of murdering two of his sons and attempting to kill a third, and that he had been sentenced to 21 years in prison.

While the story was entirely invented, some of the details were accurate: ChatGPT correctly identified Holmen's hometown and the number and gender of his children. The murder, however, never happened. The response is an example of an AI "hallucination," a case in which a model presents false or misleading information as fact.

The Legal Complaint

After the incident, Noyb filed a formal complaint with Norway's Data Protection Authority (Datatilsynet). The group argues that OpenAI breached Article 5(1)(d) of the GDPR, which requires companies to ensure that personal data is accurate and kept up to date. By sharing incorrect data about Holmen, Noyb contends, OpenAI violated that obligation.

In its complaint, Noyb included a screenshot of ChatGPT's response, though the group chose not to disclose the exact date of the interaction. OpenAI has since updated its model, and ChatGPT no longer repeats the false accusation when asked about Holmen. Noyb argues, however, that the incorrect data may still exist within OpenAI's systems and could be stored and used to train future versions of the model.

Holmen’s Concerns

Holmen has expressed deep worry about the situation. Even though OpenAI has corrected the mistake, he fears that some people may still believe the false story. "Some people think that where there's smoke, there's fire," Holmen said. "That scares me the most."

His concern is well founded. AI-generated falsehoods can do lasting damage to people's reputations, and Holmen's case highlights how important it is for AI companies to ensure their models do not spread false information.

The Privacy Group’s Demands

Noyb wants OpenAI to do more than update the model. The group has demanded that OpenAI delete the false information about Holmen entirely and modify its system to prevent similar errors in the future. Noyb is also asking the regulator to impose an administrative fine on OpenAI to ensure the company follows data protection law more carefully.

Kleanthi Sardeli, a lawyer at Noyb, criticized AI companies for ignoring privacy laws. "Adding a disclaimer does not make the law disappear," she said. "AI firms cannot ignore GDPR." Sardeli also warned that uncorrected AI errors like the one in this case can cause serious harm.

The Importance of Data Privacy

The case against OpenAI is an important one, raising serious questions about how AI systems handle personal data. The GDPR, the European Union's data protection law, requires companies to ensure that the personal data they process is accurate, and to correct or delete data that is not.

The complaint shows that AI companies need to take greater care to ensure their models do not spread false information. By holding OpenAI accountable, Noyb hopes the case will set an important precedent that shapes how AI companies handle personal data in the future.

The Future of AI and Privacy

This case could affect how AI companies handle data going forward. If OpenAI loses, other AI companies may need to improve their models to avoid similar mistakes. As AI systems become more widespread, the problem of AI-generated misinformation will demand serious attention to ensure these systems do not harm people.

As AI becomes more integrated into daily life, privacy and accuracy will only grow in importance. AI companies will need to comply with data protection laws and work to prevent further mistakes. Holmen's case shows how much damage can result when AI spreads incorrect information.

OpenAI’s Response

So far, OpenAI has not responded publicly to the complaint. The company has changed the model so that it no longer gives false information about Holmen, but the concerns Noyb raised remain. More steps are needed to ensure AI models do not spread false data, and this case may be just the beginning of a larger conversation about AI, privacy, and responsibility.

Author

  • Richard Parks

    Richard Parks is a dedicated news reporter at New York Mirror, known for his in-depth analysis and clear reporting on general news. With years of experience, Richard covers a broad spectrum of topics, ensuring readers stay updated on the latest developments.