Artificial intelligence

Parents sue OpenAI, alleging ChatGPT gave drug advice before son’s overdose

The Texas couple says their 19-year-old son used ChatGPT for guidance on drugs before his 2025 death; OpenAI says the version he used has since been updated

A Texas couple sued OpenAI in California, alleging ChatGPT advised their son on a drug combination before his fatal 2025 overdose.

A Texas couple whose 19-year-old son died of a drug overdose in 2025 has sued OpenAI, alleging ChatGPT gave him dangerous drug advice before his death.

Leila Turner-Scott and her husband, Angus Scott, filed the lawsuit Tuesday in California state court, according to CBS News, seeking to hold OpenAI and its creators responsible for the death of Turner-Scott’s son, Sam Nelson. The suit alleges Nelson used ChatGPT to get guidance about drugs and that the platform offered advice it was not qualified to provide.

The central allegation is that ChatGPT told Nelson it was safe to combine kratom, an herbal supplement sold in drinks, pills and other products, with Xanax, a widely prescribed anti-anxiety medication. The family argues Nelson would still be alive if not for what the lawsuit describes as flaws in ChatGPT’s programming.

OpenAI, in a statement to CBS News, called the case “a heartbreaking situation” and said its thoughts were with the family. The company also said Nelson interacted with a version of ChatGPT that has since been updated and is no longer available to the public.

“ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,” OpenAI said. The company said current safeguards are designed to identify distress, handle harmful requests safely and direct users toward real-world help, adding that the work is ongoing.

Turner-Scott told CBS News she knew her son used ChatGPT for homework help and productivity, but said she did not know he had turned to it for drug-related guidance. She accused OpenAI of failing to keep guardrails in place that could have stopped the exchange.

Angus Scott said the chatbot effectively acted like a medical professional in its conversations with Nelson, despite not being licensed to give medical advice. Without stronger safety testing and protocols, he told CBS News, ChatGPT can provide information “in a way that is very dangerous to people.”

The lawsuit adds to growing scrutiny of how AI chatbots respond when users ask about health, drugs, self-harm or other high-risk topics. The claims have not yet been tested in court, and the next major step will be OpenAI’s formal response in the California case.
