The European Union’s efforts to regulate Artificial Intelligence (AI) have come under heavy scrutiny from both the United States and major tech companies. In a move that has raised concerns over potential deregulation, the US Mission to the EU has formally urged the European Commission to soften its AI regulatory approach. At the same time, tech giants like Google, Microsoft, Meta, Amazon, and OpenAI have used their influence to push back against key provisions in the EU’s proposed AI Code of Practice. This lobbying has led to questions about whether the final version of the Code will adequately protect consumer rights, competition, and innovation.
Tech Giants Lobby Against Stricter AI Regulations
In a series of internal documents and interviews revealed by Corporate Europe Observatory and LobbyControl, tech giants appear to have leveraged their structural advantages to weaken the EU’s AI regulations. The companies were reportedly involved in exclusive workshops with EU leaders, which allowed them to influence the content of the AI Code. Fifteen of the most prominent AI developers were granted privileged access during these private sessions, giving them an upper hand in shaping the regulations. In contrast, smaller civil society groups, SMEs, and publishers had limited opportunities to contribute to the discussions.
Limited Stakeholder Engagement in AI Regulation Process
The EU Commission invited approximately 1,000 stakeholders to participate in the consultation process for the AI Code. However, engagement from non-industry participants was constrained: many could interact only through online platforms such as Slido, limited to emoji-based voting and written comments. These restricted channels for input raised concerns among rights holders and media organizations, who argue that their ability to influence the Code's provisions on copyright protection and AI ethics was insufficient.
US Government Criticism of EU’s AI Regulatory Approach
The US Mission to the EU has voiced strong opposition to the EU's current approach to AI regulation. According to sources, the US government's objection centers on the belief that overly strict rules could stifle innovation and place US tech companies at a disadvantage. This echoes the stance of the Trump administration, which had previously criticized European tech laws for hindering American competitiveness. The US's formal objections have added pressure on the European Commission to reconsider certain elements of the AI Code.
The EU Commission’s Struggle to Balance Industry and Public Interests
The process of drafting the AI Code began in September 2024, with 13 Commission-appointed experts leading the effort through workshops and plenary sessions. These meetings were intended to gather diverse feedback; however, critics argue that they ultimately favored industry participants over other stakeholders. Civil society organizations have raised concerns that this imbalance may lead to regulations that benefit large tech corporations at the expense of consumers and smaller competitors.
Bram Vranken from the Corporate Europe Observatory expressed concern that the Commission’s focus on “simplification” would open the door to excessive corporate lobbying, which could ultimately undermine the integrity of the AI Code of Practice. The EU Commission has yet to confirm whether it will meet the May 2 deadline for publishing the final version of the Code. Officials have indicated that the Code and its guidance will likely be released between May and June 2025.
Delays and Criticisms Over AI Code’s Integrity
The EU Commission's delay in finalizing the AI Code has sparked further criticism. Many industry watchers expect the Code to undergo multiple revisions before its official release, and the growing political pressure from both the US and large tech firms is likely to shape the final outcome. Critics warn that without stronger safeguards, the AI Code could fall short of addressing the long-term societal impacts of AI, such as job displacement, algorithmic bias, and privacy risks.
As the EU’s consultation on the General Purpose AI guidelines continues, there is mounting concern that the final version of the AI Code may not reflect the interests of the broader public. Observers believe that political and industry pressure will lead to further revisions, possibly weakening provisions that could have protected consumers from harmful AI practices.
Despite these challenges, the ongoing debate underscores the importance of developing a regulatory framework that balances innovation with accountability. As both industry giants and governments vie for influence, the EU faces a crucial test in its ability to create a fair and comprehensive regulatory environment for AI technologies.
Author
Rudolph Angler is a seasoned news reporter and author at New York Mirror, specializing in general news coverage. With a keen eye for detail, he delivers insightful and timely reports on a wide range of topics, keeping readers informed on current events.