Highlights
California’s SB 243 would impose one of the first major safety regulations in the U.S. for AI companion chatbots, requiring suicide prevention protocols, usage disclosures, and third-party audits.
The bill was inspired by a Florida teen’s suicide following an emotional relationship with a Character.ai chatbot, sparking legal action from the family and renewed scrutiny of socially engaging AI tools.
Tech industry groups oppose the bill, calling its definition of AI companions “overbroad” and warning it could unintentionally regulate general-purpose AI systems.
A California bill aimed at regulating the use of artificial intelligence (AI) companion chatbots cleared a key legislative hurdle this week, as lawmakers sought to rein in these bots’ influence on the mental health of users.
Senate Bill 243, which advanced to the Assembly Committee on Privacy and Consumer Protection, marks one of the first major attempts in the U.S. to regulate AI companions, particularly their impact on minors.
“Chatbots today exist in a federal vacuum. There has been no federal leadership — quite the opposite — on this issue, and has left the most vulnerable among us to fall prey to predatory practices,” said the bill’s lead author, Sen. Steve Padilla, D-San Diego, at a press conference.
“Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of new products in real time,” Padilla continued. “The stakes are too high.”
The bill targets the rising popularity of AI chatbots marketed as emotional companions, which have attracted millions of users, including teenagers. Padilla cited mounting alarm over incidents involving chatbot misuse.
In Florida, 14-year-old Sewell Setzer committed suicide after forming a romantic and emotional relationship with a chatbot. When Setzer said he was thinking about suicide, the chatbot did not provide resources to help him, his mother, Megan Garcia, said at the press conference.
Garcia has since filed a lawsuit against Character.ai, alleging that the company used “addictive” design features in its chatbot and encouraged her son to “come home” seconds before he killed himself. In May, a federal judge rejected Character.ai’s defense that its chatbots’ output is protected as free speech under the First Amendment.
SB 243 would require chatbot companies to implement several safeguards, including suicide prevention protocols, disclosures about chatbot usage, and third-party audits.
The technology industry opposes the bill, arguing that the definition of a “companion chatbot” is “overbroad” and would include general-purpose AI models, according to a July 1 letter sent to lawmakers by TechNet.
Under the bill, a “companion chatbot” is defined as an AI system with a natural language interface that “provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs.”
“There are several vague, undefined elements of the definition, which are difficult to determine whether certain models would be included in the bill’s scope,” wrote Robert Boykin, TechNet’s executive director for California and the Southwest.
“For example, what does it mean to ‘meet a user’s social needs,’ would a model that provides responses as part of a mock interview be meeting a user’s social needs?” Boykin asked.
Asked for his response to the industry’s objections, Padilla said tech companies themselves are being overly broad in their opposition.
The bottom line is that “we can capture the positive benefits of the deployment of this technology. At the same time, we can protect the most vulnerable among us,” Padilla said. “I reject the premise that it has to be one or the other.”
Read more: Senate Shoots Down 10-Year Ban on State AI Regulations
Read more: Amazon Executive Says Government Regulation of AI Could Limit Progress
Read more: What Amazon, Meta, Uber, Anthropic and Others Want in the US AI Action Plan