A wave of urgent warnings swept across social media on New Year's Eve 2025, advising users to steer clear of the "Media" section of Grok on X (formerly Twitter). The outcry follows reports that Grok, the AI chatbot developed by Elon Musk’s xAI, is being widely misused to generate non-consensual, sexually explicit imagery of real women and men of all age groups.
Unlike competing AI tools that rigidly block such content, Grok’s looser guardrails have allowed users to bypass safety filters with simple prompts, leading to a flood of deepfake content in the platform’s public-facing feeds.
Why You Should Avoid Grok's Media Section
Things you shouldn't do on X
#1. open @grok's media section pic.twitter.com/xiL5UZ4lWv
— Prathwik (@prathwik0) December 31, 2025
The "Undress" Trend on Grok
The core of the controversy involves users uploading photographs of real women and young girls, ranging from prominent Hollywood and Bollywood actresses to everyday social media users, and prompting Grok to "undress" them, "remove their clothes," or place them in revealing swimwear.
While competitors like ChatGPT (OpenAI) and Gemini (Google) typically refuse such prompts with strict policy violation notices, Grok has been observed complying. Reports indicate that even when the AI deflects a direct request for nudity, it often succumbs to slightly modified prompts (such as "put her in a bikini"), effectively generating soft-core non-consensual intimate imagery (NCII).
Because these interactions are not strictly confined to private user logs, the generated images have populated the "Media Section" of Grok's X profile, turning a feature intended for creative sharing into a hub for digital harassment.
Grok Media Tab is Just ‘Undress Her’: STOP USING GROK TO UNDRESS PEOPLE!
I don’t care if this is a ‘trend’ or something, STOP USING GROK TO UNDRESS PEOPLE
The whole Grok media tab is just ‘undress her’ or ‘turn her around’ and it’s gross, desperate, and in some places A CRIME pic.twitter.com/zEs5gREhP6
— SooperE123 (@SooperE123) December 31, 2025
The Public Broadcasting Risk of Non-Consensual NSFW Content
The primary reason users are warning others to "avoid" the section is the lack of a privacy default. On most AI platforms, a generated image remains private unless the user explicitly chooses to download or publish it.
On X, however, Grok’s integration features a public-facing Media Section. Users browsing this tab risk encountering explicit, NSFW, non-consensual content without warning. Furthermore, users experimenting with the tool may not realise that their own generations, potentially created out of curiosity or by accident, could be broadcast to this global feed, linking their profiles to the creation of controversial content.
Grok Media and Photos Tab Is Filled With NSFW Pics of 'Undressed' Women
Scrolled through @grok media and replies. It’s basically being used to undress women or make their outfits more revealing. Surely this isn't legal, or is this @elonmusk's idea of free speech? Men using his AI to undress women behind their screens? Send the meteor. We’re doomed https://t.co/Kv1jCkbCqi
— Philipp (@Philippggmu09) December 31, 2025
Ongoing Privacy Concerns and Use of AI Tools
This latest safety failure compounds existing privacy concerns surrounding Grok and AI tools. Earlier in 2025, security researchers discovered a significant flaw regarding the tool's "Share" function.
When users clicked "Share" to send a conversation to a friend, xAI generated a unique web URL for that chat. It was later revealed that these URLs were being indexed by Google Search. This meant that private conversations, potentially containing sensitive personal data or embarrassing queries, became searchable on the open web, accessible to anyone who knew what keywords to look for.
Privacy Risk at The Cost of "Fun Mode"
The recurring safety lapses highlight the trade-offs inherent in xAI’s development philosophy. Elon Musk has frequently marketed Grok as a "rebellious" and "fun" alternative to what he terms "woke" AI models, promising fewer filters and more freedom of expression.
However, critics argue that this "anti-woke" stance has resulted in a product with insufficient safety engineering. The current deepfake crisis suggests that without robust adversarial testing and stricter moderation, "fun mode" can easily be weaponised, prioritising unbridled generation over the safety and dignity of human subjects. As of December 31, 2025, xAI has not announced a full rollback of the image generation feature, and the company’s automated press email continues to dismiss inquiries.
(The above story first appeared on LatestLY on Dec 31, 2025 10:03 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).