Big tech companies play an outsized role in drafting EU-recognized standards for artificial intelligence (AI) tools, according to a report by Corporate Europe Observatory (CEO).
The report found that over half (55%) of the 143 members of the joint technical committee on AI (JTC21), established by European standardisation bodies CEN and CENELEC, represent corporate interests: 54 come directly from companies and 24 from consultancies.
Notably, almost 25% of these members represent US companies, with four members each from Microsoft and IBM, two from Amazon, and at least three from Google. In contrast, civil society groups account for only 9% of JTC21 members, raising concerns about the inclusivity of the standard-setting process.
The AI Act, the world’s first attempt to regulate AI using a risk-based approach, was approved in August and will gradually come into force. The European Commission tasked CEN-CENELEC and ETSI in May 2023 to prepare standards for AI products ranging from medical devices to toys. These harmonised standards help companies ensure their products comply with EU safety regulations.
Bram Vranken, a researcher at Corporate Europe Observatory, criticised the Commission’s reliance on private bodies for policymaking. “Standard setting is being used to implement requirements on fundamental rights, fairness, trustworthiness, and bias for the first time,” Vranken said.
Challenges in Ensuring Inclusivity and Accountability
The report argues that the AI Act adopts a business-friendly approach, leaving fundamental rights and complex issues to standard-setting organisations. These organisations often prioritise process over specific outcomes, according to JTC21 Chair Sebastian Hallensleben.
“An AI system might carry a CE mark, indicating compliance with harmonised standards, but that doesn’t guarantee it won’t be biased or discriminatory,” Hallensleben said.
CEO also examined national standard-setting bodies in France, the UK, and the Netherlands. Corporate representatives make up 56%, 50%, and 58% of members in these countries, respectively, highlighting a similar trend of industry dominance.
The European Commission responded to CEO’s concerns by stating that standards delivered by CEN-CENELEC would undergo assessment and only be cited in the Official Journal if they meet the objectives of the AI Act and address high-risk AI systems adequately.
The Commission also noted additional safeguards, such as the ability of Member States and the European Parliament to object to harmonised standards.
Calls for Faster Standard-Setting Processes
Critics warn that the slow pace of standardisation could hinder effective AI regulation. A senior official at the Dutch privacy watchdog, Autoriteit Persoonsgegevens, noted in an interview with Euronews that “time is running out” for the establishment of AI standards.
“Standardisation processes normally take many years. We believe this needs to be accelerated,” the official said.
Jan Ellsberger, chair of ETSI, stated that adopting standards could take months to several years, depending on industry commitment. “The more involvement we have from the industry, the faster the process moves,” Ellsberger explained.
This timeline raises questions about whether the EU can balance inclusivity, accountability, and urgency in developing AI standards to address pressing ethical and safety concerns.
Author
Rudolph Angler is a seasoned news reporter and author at New York Mirror, specializing in general news coverage. With a keen eye for detail, he delivers insightful and timely reports on a wide range of topics, keeping readers informed on current events.