Weeks after a Rancho Santa Margarita family sued over ChatGPT’s role in their teenager’s death, OpenAI has announced that parental controls are coming to the company’s generative artificial intelligence model.
Within the month, the company said in a recent blog post, parents will be able to link teens’ accounts to their own, disable features like memory and chat history, and receive notifications if the model detects “a moment of acute distress.” (The company has previously said ChatGPT should not be used by anyone younger than 13.)
The planned changes follow a lawsuit filed late last month by the family of Adam Raine, 16, who died by suicide in April.
After Adam’s death, his parents discovered his months-long dialogue with ChatGPT, which began with simple homework questions and morphed into a deeply intimate conversation in which the teenager discussed at length his mental health struggles and suicide plans.
While some AI researchers and suicide prevention experts commended OpenAI’s willingness to alter the model to prevent further tragedies, they also said that it’s impossible to know whether any given tweak will be enough.
Despite its widespread adoption, generative AI is so new and changing so rapidly that there just isn’t enough wide-scale, long-term data to inform effective policies on how it should be used or to accurately predict which safety protections will work.
“Even the developers of these [generative AI] technologies don’t really have a full understanding of how they work or what they do,” said Dr. Sean Young, a UC Irvine professor of emergency medicine and executive director of the University of California Institute for Prediction Technology.
ChatGPT made its public debut in late 2022 and proved explosively popular, reaching 100 million active users within its first two months and 700 million today.
It’s since been joined on the market by other powerful AI tools, placing a maturing technology in the hands of many users…