WhatsApp has introduced a new AI feature that it describes as optional, yet one that cannot be removed from the app. The Meta AI icon, a vibrant blue circle with splashes of pink and green, stays fixed on users’ chat screens and opens a chatbot when tapped. Many users, however, are frustrated by the feature’s immovability, which adds to their discomfort when using the messaging app.
Meta, WhatsApp’s parent company, compares the feature to other unremovable parts of the app, such as the ‘status’ and ‘channels’ options. While WhatsApp officials defend the AI addition, asserting that it offers valuable benefits to users, the growing dissatisfaction over its presence echoes previous controversies, such as the backlash against Microsoft’s Recall tool, which was likewise criticized for being non-removable.
WhatsApp’s AI Tool: Permanent Presence and Progressive Rollout
In a move that reflects Meta’s broader strategy to integrate AI across its platforms, the WhatsApp AI feature taps into the company’s Llama 4 language model, a powerful tool designed for a variety of uses, including answering questions and offering creative suggestions. This feature, currently being rolled out progressively, includes a search bar at the top of the screen where users can either “Ask Meta AI” or search the web.
As of now, the new blue circle may not be available to all users, with Meta clarifying that the rollout is gradual. Meanwhile, the feature has also been integrated into other Meta platforms like Facebook Messenger and Instagram, reinforcing Meta’s ambition to weave AI deeper into its ecosystem. A WhatsApp spokesperson emphasized that the AI tool is optional, with users required to read a disclaimer before initiating conversations.
Despite the chatbot’s functional utility, such as providing weather updates and generating creative ideas, it has also raised concerns about accuracy. One user, for example, asked for the weather in Glasgow and was instead given a link referring to Charing Cross Station in London.
Growing Ethical and Privacy Concerns
While some users find value in the new AI assistant, others have raised concerns about its ethical implications and privacy issues. Across Europe, platforms like X and Reddit have seen a surge in posts expressing frustration about the unremovable nature of the AI tool. In particular, AI privacy advisor Dr. Kris Shrishak has voiced criticism, arguing that Meta is exploiting users by forcing them into interacting with the AI without explicit consent.
Dr. Shrishak further claims that the AI training process relies on web-scraped data and pirated content, including books, which raises serious copyright concerns. Reports from The Atlantic suggest that Meta used resources such as Library Genesis to source data for training its Llama model, a practice that has led to lawsuits filed by authors and publishers accusing the company of copyright violations. Meta has not publicly addressed these allegations.
Data Privacy: The Key Challenge
Meta has stated that its chatbot only reads messages that users send to it directly, not their private conversations. The company assures users that end-to-end encryption continues to protect personal chats that do not involve the AI assistant. Nevertheless, the UK Information Commissioner’s Office (ICO) has announced plans to monitor Meta’s AI’s handling of personal data within WhatsApp. The ICO’s concerns are centered around ensuring that companies like Meta comply with strict privacy laws, particularly concerning sensitive data and children’s information.
Dr. Shrishak also warns that even though regular WhatsApp conversations remain encrypted, interactions with the AI feature still involve Meta as an endpoint, potentially exposing users to data risks. Meta has recommended that users avoid sharing private or sensitive information during their chats with the AI, though this guidance may not fully alleviate concerns over data privacy.
Meta’s Push for AI Across Platforms
Meta’s push to incorporate AI into its platforms extends beyond WhatsApp. In the same week, updates were made to Instagram’s teen accounts, and Meta is exploring the use of AI to detect whether minors are lying about their age on the platform. Additionally, the company continues testing AI tools across other services, including Facebook Messenger and Instagram, positioning AI as a central part of its strategy.
Despite these integrations, the increasing unease surrounding the use of AI has prompted some to call for stronger regulation and transparency. The debate over Meta’s AI presence on WhatsApp is far from over, as concerns about data privacy, user autonomy, and ethical practices continue to grow.
The Unavoidable AI Feature
Meta’s AI chatbot on WhatsApp has sparked significant debate over its optional yet permanent nature. While the tool offers potential benefits, including instant information and creative ideas, the ethical and privacy concerns surrounding its use are undeniable. As Meta expands AI integration across its platforms, the question remains: is the convenience worth the potential risks to user privacy and data security?
Author
Silke Mayr is a seasoned news reporter at New York Mirror, specializing in general news with a keen focus on international events. Her insightful reporting and commitment to accuracy keep readers informed on global affairs and breaking stories.