California Gov. Gavin Newsom (D) vetoed a bill Monday that would have restricted children’s access to artificial intelligence (AI) chatbots while signing another that places guardrails on how chatbots interact with kids and handle issues of suicide and self-harm.
In his message to lawmakers, Newsom said he supports protecting minors who use AI but warned that A.B. 1064's sweeping restrictions could amount to an outright ban on minors' access to the technology. He instead signed S.B. 243, which sets targeted rules for how "companion chatbots" interact with children.
“While I strongly support the author’s goal of establishing necessary safeguards for the safe use of AI by minors, AB 1064 imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors,” the governor wrote in a message to state lawmakers.
S.B. 243 requires chatbots to issue "clear and conspicuous" notifications that they are artificially generated whenever a reasonable user could be misled into believing they are interacting with another human.
When interacting with children, chatbots must issue reminders every three hours that they are not human. Developers are also required to create systems preventing their chatbots from producing sexually explicit content in conversations with minors.
“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement.
“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability,” he added. “We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”
In his veto message on A.B. 1064, Newsom noted that he plans to build on S.B. 243 to develop legislation next year “that ensures young people can use AI in a manner that is safe, age-appropriate, and in the best interests of children and their future.”
The family of a California teenager sued OpenAI in late August, alleging that ChatGPT encouraged their 16-year-old son to commit suicide. The father, Matthew Raine, testified before a Senate panel last month, alongside two other parents who accused chatbots of driving their children to suicide or self-harm.
OpenAI on Monday praised S.B. 243’s signing as a “meaningful move forward when it comes to AI safety standards.”
“By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” a spokesperson said in a statement.
Growing concerns about how AI chatbots interact with children prompted the Federal Trade Commission (FTC) to launch an inquiry into the issue, requesting information from several leading tech companies.
Sens. Josh Hawley (R-Mo.) and Dick Durbin (D-Ill.) also introduced legislation late last month that would classify AI chatbots as products in order to allow harmed users to file liability claims.
S.B. 243 is the latest of several AI and tech-related bills signed into law by Newsom this session. On Monday, he also approved measures requiring warning labels on social media platforms and age verification by operating systems and app stores.
In late September, he also signed S.B. 53, which requires developers of leading-edge AI models to publish frameworks detailing how they assess and mitigate catastrophic risks.
Updated at 8:59 a.m. on Oct. 14