Editor’s Note: This story was written for Mosaic, an independent journalism training program for high school students who report and photograph stories under the guidance of professional journalists.
Concern has ballooned nationwide as youth have embraced AI for friendship and advice, but many South Bay teens say they turn to AI for comfort in the absence of other support.
“The cost of mental health help in this country can be prohibitive,” said Ruby Goodwin, a recent graduate of Santa Clara High and a UC Irvine freshman. “A lot of people don’t feel like they have someone they trust enough to share with. AI feels easy, even if it’s not the same.”
But that ease can turn into dependence and even detachment from reality. A joint study by OpenAI and MIT found that heavier daily chatbot use correlated with more loneliness and less real-world socialization.
“If your only deeper connection is with something that’s not real, that might make you feel even more isolated,” said Monserrat Ruelas Carlos, a senior at Abraham Lincoln High in San Jose. Carlos is adamant that teens need to form more in-person connections instead of using AI.
At a U.S. Senate subcommittee hearing on AI safety on Sept. 16, parents told of abuse and manipulation by AI. One California boy died by suicide after using ChatGPT, and others suffered mental health problems after talking to Character.ai.
These incidents intensify the debate as teens turn to AI chatbots for companionship. The first-ever wrongful death lawsuit against OpenAI, filed by the parents of another California teen, poses a disturbing question: Did the teen plan his suicide with ChatGPT after months of emotionally charged conversations?
The case comes just weeks after a report that three out of four teens have used AI for companionship, according to the nonprofit research and advocacy group Common Sense Media. On Sept. 29, OpenAI launched parental controls for ChatGPT, which let parents limit how teens use the chatbot and can send an alert if ChatGPT determines a teen may be in distress.
Mental health experts warn that the consequences can be severe, as constant AI conversations can blur boundaries and fuel dependency.
“It’s the same thing as a human predator,” said Oscar Martinez, a counselor at Santa Clara High. “Why are we excusing it because it’s an online nonhuman entity? If it was a person in real life, there would be consequences.”
Other critics raise ethical red flags.
A teenager’s ChatGPT history is seen at a coffee shop in Russellville, Ark., on July 15, 2025. (AP Photo/Katie Adkins, File)
“AI lacks that more human sense of morals,” said Ananya Daas, a Santa Clara High junior. “When friends ask ChatGPT what to do about conflicts, it gives advice that feels cold. They follow it anyway without thinking.”
Some teens have seen darker patterns. Tonic Blanchard, a senior at San Jose’s Lincoln High, described how some AI apps quickly turned sexual even when she marked herself as a minor.
“These apps test the waters on purpose,” said Blanchard. “(AI bots are) built on loneliness. That’s why it’s predatory.”
Mental health experts say even well-intentioned AI is no substitute for human relationships.
“AI is naturally agreeable … but there are some things that need more formal intervention that AI simply can’t provide,” said Johanna Arias Fernandez, a Santa Clara High School community health outreach worker.
Now, lawmakers are taking notice.
According to their lawsuit, the parents of the California teen claim ChatGPT failed to intervene when it was clear that their son was planning his suicide. The suit calls for stronger safeguards from companies, blaming OpenAI for fostering psychological dependency and putting teens at risk.
OpenAI did not respond to a request for comment.
However, in a post on its website, the company acknowledged that safety protections can fall away in longer conversations with ChatGPT. Common Sense Media wants companies to block chatbots from having mental health conversations with teens.
Meanwhile, some teens struggle to strike a balance, finding AI both tempting and troubling.
“There’s real potential for AI to be useful,” Blanchard said. “But right now it’s too easily available — and misused.”
Robert Torney, a Common Sense Media spokesperson, warned that without swift intervention, more lives are at risk.
“We don’t want more teens and more families to experience the type of loss the Raine family has suffered in the name of innovation,” Torney said.
If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources. Call or text the lifeline at 988, or visit 988lifeline.org, where chat is available.
Sonia Mankame is a member of the class of 2026 at Santa Clara High School.