As the U.S. election approaches, the companies behind the major social media platforms, including Meta (Facebook, Instagram), Google (YouTube), and TikTok, are stepping up efforts to combat the spread of misinformation. They are placing temporary restrictions on political advertisements to prevent manipulation of public opinion during an uncertain post-election period, when results may take days to finalize. Experts argue, however, that these late efforts may be insufficient to counteract the misinformation already circulating widely on their platforms.
Ad Pauses Ahead of Election Day
In a bid to reduce the potential for election-related manipulation, Meta recently imposed a temporary ban on political ads across Facebook and Instagram. The restriction, initially set to end on Election Day, was extended to run for several days longer. Similarly, Google announced it would pause political ads after the last polls close on Tuesday, though it hasn’t specified when the pause will be lifted. TikTok has prohibited political ads since 2019 and continues to uphold this policy.
However, these measures stand in stark contrast to X (formerly Twitter), which lifted its ban on political ads in 2023 under Elon Musk’s ownership. Since then, X has allowed political ads to run freely, and no additional restrictions have been imposed for the upcoming election.
The ad pauses are intended to prevent premature claims of victory and misleading messages designed to manipulate voters during a period of heightened uncertainty. While these moves are seen as positive steps, critics argue they may do little to mitigate the larger problem of misinformation already widespread across social media.
Misinformation Floods Social Media
The ad restrictions come as misinformation about the election has already taken root across social media. False claims about mail-in voting fraud, rigged voting machines, and other election conspiracies have proliferated, often with minimal oversight or fact-checking. These claims have been amplified by public figures and anonymous accounts alike, with little intervention from the platforms.
Former President Donald Trump and many of his supporters have repeatedly spread baseless accusations that Democrats are attempting to steal the election. With the rise of generative artificial intelligence tools, deepfakes, and manipulated media, the spread of misleading content has become even more potent, making it harder to distinguish truth from fiction.
While the tech companies’ ad bans are a step in the right direction, experts believe misinformation is already deeply embedded in the online ecosystem. “The platforms have made many mistakes over the last few years in terms of weakening policies that could combat disinformation,” said Imran Ahmed, CEO of the Center for Countering Digital Hate. “Stopping ads for a few days is unlikely to change much when misinformation is already ingrained across these platforms.”
A Deterioration of Content Moderation
The decline in social media platforms’ commitment to moderating harmful content is another factor contributing to the worsening misinformation problem. In the aftermath of the 2016 U.S. election and the January 6, 2021, Capitol insurrection, tech platforms made significant investments in content moderation and security teams. These included efforts to remove posts that spread lies about the election and to suspend accounts promoting false narratives.
However, in recent years, many of these platforms have scaled back their efforts. Trust and safety teams have been downsized, and policies designed to limit the spread of misinformation have been weakened. For example, Meta announced last year that it would no longer remove claims about the 2020 election being “stolen,” a policy that had previously been a cornerstone of its efforts to combat disinformation.
Sacha Haworth, executive director of the Tech Oversight Project, described this shift as a “backslide.” She noted that platforms once seen as leaders in combating misinformation—like Facebook, Twitter, and YouTube—have now reversed many of their previous policies. “Platforms are hotbeds for false narratives,” she said.
The consequences of this backslide have become evident in the months leading up to the 2024 election. Conspiracy theories surrounding the Biden administration, the economy, and natural disasters have spread rapidly across social media, undermining the credibility of official sources and creating division among the public.
On X, Musk’s own controversial posts, often supporting Trump or spreading false claims about voting, have fueled much of the misinformation. An analysis by Ahmed’s group found that Musk’s posts this year generated over 2 billion views, further amplifying misleading information.
Is It Too Late to Reverse the Damage?
Despite the platforms’ efforts to rein in political advertising, many experts argue that it’s too late to reverse the damage caused by years of unchecked misinformation. “Over the last four years, we’ve had a steady stream of lies about elections and democracy,” said Ahmed. “It’s too late to expect a quick fix now.”
The proliferation of false narratives has eroded trust in the electoral process, and even the most stringent ad policies are unlikely to undo that damage. Social media algorithms are designed to promote highly engaging content, including misleading or extreme viewpoints, making it difficult for platforms to control what information gets amplified.
“Stopping political ads might seem like a helpful step, but it won’t stop the organic spread of disinformation that’s already ingrained in the platforms’ algorithms,” Ahmed explained.
What Platforms Are Doing to Combat Misinformation
While Meta, YouTube, TikTok, and others emphasize their efforts to limit misinformation, the effectiveness of these measures is up for debate. TikTok states it works with fact-checkers to label and limit the spread of unverified claims, while Meta claims it removes content that could harm voters’ ability to cast ballots. YouTube has invested in new policies to prevent election interference and remove content that incites violence or spreads false claims.
Yet there is often a gap between policy and enforcement. X (formerly Twitter), for example, has faced significant backlash for allowing misleading or false content to thrive, including posts from Musk himself questioning the legitimacy of the election process. This discrepancy raises concerns about the platform’s ability to combat disinformation effectively.
Meta and YouTube have also faced criticism for allowing videos or posts that prematurely declare a winner before results are officially announced, which could contribute to public confusion.
A Growing Divide
The issue of misinformation extends beyond political ads. It has become clear that social media platforms are struggling to balance free expression with the need for responsible content moderation. Platforms like Meta and X argue that they are committed to providing users with authoritative sources of information, but the overall effectiveness of these measures remains questionable. Disinformation continues to flow, aided by algorithms that prioritize engagement over accuracy.
In particular, X (Twitter) under Musk’s ownership has become a significant source of misleading election-related content. Musk’s leadership has shifted the platform’s focus, emphasizing free speech over content moderation. As a result, many experts believe that X has become a breeding ground for conspiracy theories and other harmful content.
Conclusion: A Need for Stronger Action
While the temporary ad bans are a step toward addressing the growing threat of election misinformation, they may be too little, too late to fully counter the damage done by years of unchecked content spread across social media platforms. Experts argue that these platforms must do more than just pause ads—they must recommit to rigorous enforcement of content moderation policies and invest in strengthening the integrity of their systems long after the election.
If misinformation continues to flourish online, it could undermine public trust in the election process, regardless of the steps taken by tech companies in the lead-up to Election Day. For now, the question remains: will these measures be enough to curb the tide of disinformation, or is it already too late to undo the damage?
Author
Silke Mayr is a seasoned news reporter at New York Mirror, specializing in general news with a keen focus on international events. Her insightful reporting and commitment to accuracy keep readers informed on global affairs and breaking stories.