New Delhi, March 5: A new investigation has raised serious privacy concerns around Meta’s AI training practices, revealing that human contractors reviewing footage from the company’s Ray-Ban smart glasses may have access to highly sensitive and personal recordings, including private and intimate moments involving sexual activity.

According to reporting by the Swedish newspaper Svenska Dagbladet, workers in Kenya tasked with reviewing footage for Meta’s artificial intelligence systems have described encountering extremely private content captured unintentionally by the wearable devices.

Human Reviewers Behind AI Training

Meta’s AI-powered Ray-Ban smart glasses allow users to record videos, take photos, and interact with the company’s AI assistant. However, the technology relies on a “human-in-the-loop” system, where real people review images and videos to help train AI models to better recognize objects and understand real-world contexts.

Contractors working through third-party companies in Kenya reportedly label and categorize thousands of clips daily. Because the glasses can be activated by voice or touch and may sometimes record unintentionally, workers say the footage often includes private scenes, such as users changing clothes, using bathrooms, or engaging in sexual activity.

These reviewers are required to classify the material as part of the AI training process.

Privacy and Consent Concerns

The revelations have intensified debates about privacy in wearable technology. While Meta’s terms of service mention that user data may be used to improve its services, critics argue that many users may not realize their recordings could be viewed by human moderators.

Privacy advocates also point out that wearable cameras capture not just the user but also people nearby who may have no idea they are being recorded.

Once uploaded to Meta’s servers for AI development, this data can potentially expose the private moments of both users and bystanders.

Psychological Toll on Moderators

The reports also highlight the emotional strain on contractors reviewing the footage. Workers described the experience as uncomfortable and sometimes disturbing, saying they regularly see intimate or explicit material while performing routine annotation tasks.

Technology companies have faced similar criticism in the past for outsourcing content moderation and AI training work to lower-wage regions, where workers often deal with distressing material with limited psychological support.

Meta’s Response and Safeguards

Meta has previously stated that it uses automated tools to filter sensitive footage and blur faces before human reviewers see the data. However, workers involved in the process claim that these filters do not always prevent private or explicit content from reaching them.

Growing Scrutiny as AI Race Intensifies

The controversy comes at a time when major tech companies, including Meta, Google, and OpenAI, are racing to build multimodal AI systems capable of understanding images, video, and audio in real time.

Devices such as smart glasses provide companies with vast amounts of real-world visual data needed to train these advanced systems.

However, the latest reports have renewed calls for stronger regulation and transparency around how wearable devices collect data and how that data is used for AI training.

As wearable AI becomes more common, experts warn that balancing technological innovation, user privacy, and ethical labor practices will remain a major challenge for the tech industry.

TruLY Score 3 – Believable; Needs Further Research | On a trust scale of 0-5, this article has scored 3 on LatestLY, meaning it appears believable but may need additional verification. It is based on reporting from news websites or verified journalists (Svenska Dagbladet) but lacks official confirmation. Readers are advised to treat the information as credible while continuing to follow up for updates or confirmation.

(The above story first appeared on LatestLY on Mar 05, 2026 05:24 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).