OpenAI has announced that it blocked more than 250,000 requests to generate deepfake images of US election candidates, as part of its efforts to ensure the integrity of its AI tools. These requests, which sought to create AI-generated images of figures such as President-elect Donald Trump, Joe Biden, Kamala Harris, and their respective vice-presidential picks, were rejected under OpenAI’s enhanced safety protocols.
In a blog update published on Friday, OpenAI explained that these precautions were specifically put in place ahead of the US elections to prevent the misuse of its platforms. The company highlighted that the decision to block these requests was a key part of its commitment to avoid the spread of harmful or deceptive content, especially during such a sensitive political period.
“These safety measures are crucial to prevent our technology from being exploited for malicious purposes, particularly in the context of elections,” the blog stated. OpenAI added that it had found no evidence of any US election-related influence campaign successfully going viral on its platforms.
Earlier this year, OpenAI took action to shut down an Iranian influence operation known as Storm-2035, which had been using the platform to create politically charged content. Accounts associated with this operation were subsequently banned. In October, the company revealed it had disrupted over 20 additional influence operations globally that attempted to misuse its tools.
Despite these global efforts, OpenAI’s report noted that none of the election-related operations managed to achieve significant viral engagement through its systems.
Author
Richard Parks is a dedicated news reporter at New York Mirror, known for his in-depth analysis and clear reporting on general news. With years of experience, Richard covers a broad spectrum of topics, ensuring readers stay updated on the latest developments.