World News

Woman says chatbot pushed her son to kill himself; ‘guardrails’ are important

As the mother of a teenage boy who took his own life after using a chatbot, Maria Raine said she faced endless grief.

“Losing is never easy,” she said, “but I have to stand up for him.”

So on Monday, she spoke before a crowd of reporters about reining in the human-like computer programs her son once confided in.

“We have to have monitors on these products,” Raine said at a news conference Monday in Sacramento.

The legislation, Assembly Bill 2023 and state Senate Bill 1119, would require operators of so-called companion chatbots to conduct and document comprehensive annual risk assessments identifying dangers to children posed by the product’s design or configuration. Operators would also submit to an independent audit of their compliance with those regulations, and the auditor would submit a report to the attorney general. The bills would authorize public prosecutors to enforce the measure through civil actions.

A companion chatbot is a computer program that simulates human conversation to provide users with entertainment or emotional support. It can also retrieve and summarize information, and many students use the technology to help with studying or homework.

“This technology is new, but anecdotal and scientific evidence continues to show that the impact of these interactions between chatbots and users, especially young people, can be very dangerous,” said Sen. Steve Padilla (D-Chula Vista), who introduced the bills along with Assemblymember Rebecca Bauer-Kahan (D-Orinda) and a fellow Assembly member from Oakland.

“Companion chatbots don’t have the same capacity for empathy as a human being,” Padilla said, “yet this type of technology can create that impression.”

The legislation would also require operators to provide “clear referrals” to crisis services if a child expresses suicidal ideation or an intent to self-harm. If that child’s account is linked to a parent’s account, the bills would direct the operator to notify the parent within 24 hours.

Raine and her husband, Matthew Raine, spoke to Congress last year and said their son Adam had shared suicidal thoughts with ChatGPT, a popular chatbot designed by OpenAI. Matthew said the chatbot had discouraged Adam from confiding in his parents and had offered to help him write a suicide note. Adam died by suicide shortly thereafter, on April 11, 2025.

On Monday, Bauer-Kahan said that children’s online safety is an issue that crosses state and party lines.

“It doesn’t matter if you’re a Democrat or a Republican, or whether you’re from California or Louisiana,” she said. “If these chatbots are in the hands of your children, you want them to be safe.”

Keeping children and young people safe on social media or while using artificial intelligence is a hot topic across the country. A landmark ruling last month in Los Angeles County Superior Court could reshape how technology companies are held accountable for harming children with their products. The court found that Instagram and YouTube can be held responsible for designing their platforms to attract young users.
