New California bill seeks protections for kids interacting with AI systems
A new bill (AB 1064) introduced yesterday in the California legislature would set standards for developers offering AI systems for use by children. Photo by Annie Spratt on Unsplash.
Feb. 21, 2025 — A new approach to protecting children from harmful interactions with artificial intelligence is at the center of a bill proposed this week in California’s state legislature.
Yesterday, Assemblymember Rebecca Bauer-Kahan (D-Orinda) introduced AB 1064, the Leading Ethical AI Development (LEAD) for Kids Act. The proposal would create a new AI standards board within the state’s Government Operations Agency, and charge its members with evaluating and regulating AI technologies for children. It would also impose a series of checks and balances—with an emphasis on transparency and privacy protections—to ensure only the safest AI tools make it into the hands of children.
The full text of AB 1064 can be found here.
Children are increasingly interacting with new AI-powered educational tools, social media algorithms and chatbots. Bauer-Kahan and child advocates said they are concerned that a lack of oversight is leading to harmful conversations. For example, a chatbot offered by the National Eating Disorders Association was suspended in 2023 after it encouraged users to lose weight and gave dieting tips instead of promoting a healthy body image.
"AI has incredible potential to enhance education and support children’s development, but we cannot allow it to operate unchecked," Assemblymember Bauer-Kahan said in a release. "Tech companies have prioritized rapid development over safety, leaving children exposed to untested and potentially dangerous AI applications.”
A proposed ‘LEAD for Kids Standards Board’
The proposed 10-member LEAD for Kids Standards Board would include experts from academia, education, the social sciences, technology and artificial intelligence, among other fields, and would be required to adopt new regulations governing AI platforms no later than 2027.
The new legislation would also require developers to assess and label the level of risk that their AI technologies pose to children based on how likely a given tool would be to harm a child, and how severe that harm could be. Risk levels would be ranked as “prohibited risk,” “high risk,” “moderate risk,” and “low risk.”
For example, chatbots capable of manipulating a child, creating an emotional attachment, and simulating companionship would be considered prohibited risk, as would chatbots that collect children’s biometric data or detect their emotions. In those cases, companies would be required to develop methods to ensure children can’t access their platforms.
AI tools rated as higher risk would be subject to stricter requirements, including an evaluation process before and after deployment.
A registry for AI used by children
The legislation would also establish an AI registry with detailed information about each product and any potential harms it may cause. The registry would help the state determine where to target independent audits, whose findings would be reported back to the board.
AB 1064 would also require companies that create these AI tools to let third parties report incidents of harm caused by their systems to both the developer and the board, with those reports added to the AI registry’s database.
Parental consent required to use a child’s info for training
In an effort to strengthen privacy protections, the proposal includes language that would require written parental consent before a child’s personal information can be used to train an AI platform.
The bill includes some teeth: the board could pursue a civil penalty of $25,000 when a company has incorrectly classified an AI tool’s risk and fails to remedy the error within 30 days. It also codifies the right of families to pursue actual and punitive damages if a child is harmed by an AI tool.
Exemption for smaller start-up AI systems
The law would only apply to AI systems with more than 1 million monthly users.
The bill was drafted in partnership with Common Sense Media, a nonprofit focused on children’s digital wellbeing, with support from the Transparency Coalition and other child advocacy and AI accountability groups.
“We fully reject the notion that has become popular in some circles today that the race to lead on AI is a choice between being first or being safe,” said James P. Steyer, Founder and CEO of Common Sense Media. “The two must go hand in hand; our kids and teens deserve nothing less than the best technology our country has to offer, with safety and with the knowledge of how to use these powerful tools for education and life.”