Meta AI Controversy: Row Erupts As Kenyan Workers Who Say They Saw Smart Glasses Users Having S*x Lose Jobs

Meta Platforms is facing mounting scrutiny after cancelling a major contract with outsourcing firm Sama, shortly after Kenya-based workers alleged they were exposed to graphic and sensitive content, including s*xual activity, while training AI systems linked to Meta’s smart glasses.

Meta Ray-Ban Display HUD Glasses (Photo Credits: Meta)

The controversy began in February when workers at Sama told Swedish publications Svenska Dagbladet and Göteborgs-Posten that they had reviewed disturbing footage captured by users of Meta’s AI-powered glasses. According to their accounts, the material included private moments such as people using the toilet and engaging in s*x, raising serious concerns about user privacy and worker well-being.

“We see everything - from living rooms to naked bodies,” one worker reportedly said.

Less than two months after these revelations surfaced, Meta terminated its contract with Sama, a move the company confirmed would result in 1,108 job losses. Meta has stated that the decision was based on Sama failing to meet its standards, though it has not provided detailed clarification. Sama has strongly rejected the claim, asserting that it consistently met all operational, security, and quality benchmarks.

“Sama has consistently met the operational, security and quality standards required across our client engagements, including with Meta,” the company said. “At no point were we notified of any failure to meet those standards, and we stand firmly behind the quality and integrity of our work.”

However, worker advocacy groups in Kenya have questioned Meta’s explanation. The Africa Tech Workers Movement has alleged that the contract termination may be linked to employees speaking out about their working conditions and the nature of the content they were asked to review.

Naftali Wambalo of the Africa Tech Workers Movement suggested that the issue goes beyond performance standards. “What I think are the standards they are talking about here are standards of secrecy,” he told BBC News.

Meta has not directly addressed this allegation but reiterated that it had “decided to end our work with Sama because they don't meet our standards.”

The controversy has also drawn the attention of regulators. The UK’s Information Commissioner's Office described the reports as “concerning” and reached out to Meta for clarification. Meanwhile, Kenya’s Office of the Data Protection Commissioner has launched an investigation into potential privacy violations linked to the use of smart glasses.

Meta acknowledged that human reviewers may sometimes assess content captured by its devices, but only when users have shared it with Meta AI and provided consent. The company maintains that such reviews are standard practice in improving AI systems and enhancing user experience.

“Photos and videos are private to users. Humans review AI content to improve product performance, for which we get clear user consent,” a Meta spokesperson said.

The smart glasses, developed in collaboration with brands like Ray-Ban and Oakley, include features such as real-time translation and AI-powered assistance. While these innovations are designed to improve accessibility, particularly for visually impaired users, they have also sparked growing concerns about misuse and surveillance.

In one reported case, a device continued recording in a private setting, capturing footage of a woman undressing without her knowledge. Although the glasses feature a recording indicator light, critics argue that this safeguard is insufficient to prevent non-consensual recording.

The situation has reignited debate over the ethical implications of AI training and the human cost behind technological advancements. Sama, once known for its mission to create ethical tech employment, has previously faced criticism over content moderation work linked to Meta, with some workers describing exposure to traumatic material.

Legal experts and activists warn that the incident highlights deeper structural issues in the AI outsourcing ecosystem. Mercy Mutemi, a lawyer and executive director of the Oversight Lab, said the situation should serve as a wake-up call.

“We've been told that this is our entry route into the AI ecosystem,” she said. “This is a very flimsy foundation to build your entire industry on.”

As investigations continue, the Meta AI controversy underscores the urgent need for stronger safeguards around data privacy, worker protection, and transparency in AI development.

TruLY Score 3 – Believable; Needs Further Research | On a trust scale of 0-5, this article has scored 3 on LatestLY. It appears believable but may need additional verification. It is based on reporting from news websites or verified journalists (BBC), but lacks supporting official confirmation. Readers are advised to treat the information as credible but continue to follow up for updates or confirmations.

(The above story first appeared on LatestLY on Apr 30, 2026 06:36 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).
