Category: News | Published: 2026-03-17
Meta Hit With Class-Action Lawsuit Over AI Smart Glasses
Meta is facing a major class-action lawsuit over its Ray-Ban AI smart glasses after it emerged that human contractors were reviewing private footage captured by the devices - including intimate and sensitive content. The case raises urgent questions about AI privacy for any organisation adopting wearable AI technology.
What Triggered the AI Smart Glasses Lawsuit
The legal action was filed in early March 2026 at the U.S. District Court for the Northern District of California. The Clarkson Law Firm, which specialises in public interest litigation, brought the case on behalf of two plaintiffs from New Jersey and California.
The lawsuit was sparked by investigations from Swedish journalists at *Svenska Dagbladet* and *Göteborgs-Posten*. Their reporting revealed that workers at Sama, a Nairobi-based data annotation company contracted by Meta, had been manually reviewing footage from users' AI smart glasses. Those workers described viewing nudity, sexual encounters, bathroom visits, bank card details, private messages and footage of children.
One worker was quoted as saying: "We see everything - from living rooms to naked bodies."
The Core AI Privacy Allegations
The lawsuit alleges that Meta marketed its AI smart glasses with slogans like "designed for privacy, controlled by you" and "built for your privacy." The plaintiffs argue these claims were materially misleading because Meta never adequately disclosed that human contractors would review user footage as part of its AI training pipeline.
The complaint describes the glasses as having been transformed "from a personal device into a surveillance conduit," exposing consumers to risks of emotional distress, identity theft, stalking and reputational harm.
With over seven million pairs of AI smart glasses sold during fiscal 2025 alone - more than double the combined sales of 2023 and 2024 - millions of users could potentially be affected.
How AI Data Reaches Human Reviewers
Meta's own statements confirm that photos and videos captured by the AI smart glasses remain on the device unless a user actively shares them with Meta AI or another person. However, voice interactions triggered by the "Hey Meta" wake word are automatically transmitted to Meta's cloud servers, with no option to prevent this.
When users engage multimodal AI features - for example, asking the glasses to analyse what they see - that footage is sent to Meta's servers for processing. It is this content that can then be reviewed by human contractors employed to label and categorise data for AI training.
Meta's own AI terms do state that "Meta will review your interactions with AIs… and this review may be automated or manual (human)," but critics argue this buried wording does not make the extent of human review clear to everyday users. Meta has also pointed to filtering techniques such as face blurring, which are intended to reduce the risk of identifying individuals in reviewed footage. However, workers at Sama reported that these filtering systems did not always function correctly, meaning private and identifying content still reached human reviewers.
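The data-flow rules described above can be summarised as a small decision model. This is purely a conceptual illustration of the article's claims, not Meta's actual implementation; the function and type names are invented for the sketch.

```python
from enum import Enum, auto

class Destination(Enum):
    ON_DEVICE = auto()   # content stays on the glasses
    META_CLOUD = auto()  # content reaches Meta's servers and may be human-reviewed

def data_destination(capture_type: str, user_shared: bool = False) -> Destination:
    """Conceptual model of the data flow as reported, not Meta's code.

    Per the reporting:
    - photos/videos stay on-device unless the user actively shares them
    - "Hey Meta" voice interactions always go to the cloud, with no opt-out
    - multimodal AI queries send the captured footage to the cloud
    """
    if capture_type in ("voice_command", "multimodal_query"):
        return Destination.META_CLOUD  # no opt-out, per the April 2025 policy
    if capture_type in ("photo", "video"):
        return Destination.META_CLOUD if user_shared else Destination.ON_DEVICE
    raise ValueError(f"unknown capture type: {capture_type!r}")
```

The key point the model makes visible: only still photos and videos have an "on-device" path, and it depends entirely on the user not sharing them.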
A Quiet Policy Change That Raised Alarms
In April 2025, Meta quietly updated its AI smart glasses privacy policy. Meta AI features became the default setting, with no opt-out available. Voice recordings are now stored in Meta's cloud for up to one year to improve AI products. Users can delete recordings manually, but they cannot prevent the initial collection.
This policy shift essentially means that anyone using the AI smart glasses' voice or visual AI features is contributing personal data to Meta's AI training programme - whether they realise it or not.
Regulatory Response to AI Smart Glasses Concerns
The UK's Information Commissioner's Office (ICO) has launched its own investigation into the matter. For UK businesses, this is significant. Any organisation considering deploying wearable AI devices - whether for fieldwork, inspections, logistics or customer interactions - must now consider the data protection implications very carefully.
Under UK GDPR, organisations are required to understand how personal data flows through any AI-enabled technology they adopt, including where that data is processed, who has access to it and whether third-party contractors are involved. The Meta AI smart glasses case is a textbook example of how opaque AI data pipelines can create compliance risks.
What This Means for UK Businesses
This lawsuit is not just about one product. It highlights a pattern that is becoming increasingly common as AI is embedded into everyday hardware. The key lessons for UK businesses are:
- Audit your AI tools. If your organisation uses any AI-powered devices or services, understand exactly where user data goes and who can access it. Do not assume that "on-device processing" means data never leaves the device.
- Review vendor privacy policies. Policies can change quietly, as Meta's did. Schedule regular reviews of the privacy terms for any AI tools your business relies on.
- Consider employee and customer privacy. If staff use AI smart glasses or similar wearables in the workplace, you may be capturing personal data belonging to colleagues, clients or members of the public - all of which falls under GDPR.
- Implement a formal AI governance policy. As AI adoption accelerates, businesses need clear internal policies covering which AI tools are approved, how data is handled and what oversight exists.
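One of the lessons above, catching quiet policy changes, can be partially automated with a simple fingerprint check. A minimal sketch, assuming you store a fingerprint of each vendor policy at review time and re-check it on a schedule; the function names are illustrative, and in practice you would pair this with a job that fetches the live policy text:

```python
import hashlib

def policy_fingerprint(text: str) -> str:
    """Return a stable SHA-256 fingerprint of a policy document's text."""
    # Normalise whitespace so trivial reflows don't trigger false alarms.
    normalised = " ".join(text.split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def policy_changed(stored_fingerprint: str, current_text: str) -> bool:
    """True if the policy text no longer matches the stored fingerprint."""
    return policy_fingerprint(current_text) != stored_fingerprint

# Record a fingerprint when the policy is reviewed, then re-check later.
baseline = policy_fingerprint("Voice recordings are processed on demand.")
print(policy_changed(baseline, "Voice recordings are processed on demand."))  # False
print(policy_changed(baseline, "Voice recordings are stored for one year."))  # True
```

A fingerprint check only tells you *that* something changed, not *what*; a flagged change should always trigger a human review of the new terms.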
If your business needs help assessing the cyber security and privacy implications of AI tools, or if you want to ensure your data protection policies are fit for purpose, Cloud Smart Solutions can help. Our team works with businesses across Buckinghamshire to build practical security frameworks that keep pace with emerging technology risks.
The Bigger Picture
The Meta AI smart glasses lawsuit is part of a broader reckoning with how AI companies handle user data. It also shines a light on the hidden workforce behind AI systems - the thousands of human data annotators and reviewers whose labour is essential to training AI models, yet whose role is rarely disclosed to end users.

As wearable AI technology becomes more capable and more popular, the boundary between personal convenience and corporate surveillance grows thinner. For consumers and businesses alike, the message is clear: understand what your AI devices are actually doing with your data before it is too late.