California Woman Sues OpenAI, Claims ChatGPT Enabled Stalker Ex-Boyfriend To Harass Her

A California woman has sued OpenAI, alleging ChatGPT enabled and amplified a stalking campaign by generating content that reinforced her abuser’s delusions. The case raises critical questions about AI accountability, safety safeguards, and legal liability in the age of generative AI.

ChatGPT (Photo Credit: Google Play Store)

A California woman has filed a lawsuit against OpenAI, alleging that its AI chatbot, ChatGPT, played a role in intensifying a stalking and harassment campaign against her. The complaint, filed on Friday, claims the platform generated content that reinforced the alleged abuser’s delusions, contributing to escalating real-world harm.

According to court filings, the plaintiff, identified as Jane Doe, accuses her stalker of using ChatGPT to create fabricated narratives involving her personal life. The lawsuit alleges that by inputting personal details, the abuser was able to generate stories portraying Doe in false and damaging scenarios, including claims of infidelity and criminal behavior.

The filing describes the AI as a “force multiplier,” arguing that its conversational responses appeared to validate the stalker’s beliefs. Unlike traditional online platforms, the lawsuit claims AI-generated responses can feel authoritative and personalized, potentially deepening harmful obsessions.

A key aspect of the case centers on alleged warnings sent to OpenAI. Doe claims she repeatedly contacted the company, providing evidence such as restraining orders and examples of harmful outputs. Despite this, the lawsuit argues, OpenAI failed to implement sufficient safeguards against misuse, such as blocking prompts referencing her identity.

The case raises complex legal questions about the responsibility of AI developers. Traditionally, platforms have relied on protections under Section 230 of the Communications Decency Act, which shields companies from liability for user-generated content. However, legal experts suggest this lawsuit could test those boundaries, as it argues AI systems actively generate, rather than merely host, content.

OpenAI has not yet publicly commented on the case but has previously stated that it employs safety measures, including automated filters and human oversight, to prevent misuse. The company maintains that its AI systems are continuously updated to improve safety and reduce harmful outputs.

The lawsuit also highlights broader concerns about “AI-enabled stalking,” with experts warning that generative tools could be misused for harassment, misinformation, and targeted abuse. As artificial intelligence becomes more integrated into daily life, this case could set a precedent for how courts define accountability in the age of generative AI.

If the case proceeds, it may shape future regulations and establish clearer expectations for how AI companies balance innovation with user safety.


TruLY Score 3 – Believable; Needs Further Research | On a trust scale of 0-5, this article scored 3 on LatestLY. It appears believable but may need additional verification: it is based on reporting from news websites or verified journalists (TechCrunch) but lacks official confirmation. Readers are advised to treat the information as credible while continuing to follow up for updates or confirmation.

(The above story first appeared on LatestLY on Apr 11, 2026 10:29 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).
