Three California Bills Worth Watching Right Now

California’s legislature could establish the early rules for AI training data regulation.

In the scramble to establish legal safeguards around the creation and use of generative AI, policymakers are keeping a close eye on California. And for good reason. OpenAI, Google, Meta, Nvidia, and more than 1,500 other AI-related companies are headquartered in the Golden State, which means regulations enacted in Sacramento could establish the early rules of the game for the entire industry.

That’s why we’re tracking a number of developing bills right now. The language and concepts coalescing over the coming weeks and months will have a profound influence on later legislation in other states and in Congress.

As of early March, California legislators had introduced more than 30 AI-related bills covering everything from deepfakes to medical devices to job security.

Of those, three show the most promise:

AB 2013: AI Training Data Transparency

Primary sponsor: Asm. Jacqui Irwin, D-Thousand Oaks

Irwin’s bill would require the developer of an AI model to post documentation on the company’s website regarding the data used to train the artificial intelligence system.

AB 2930: Impact Assessment for Automated Decision Tools

Primary sponsor: Asm. Rebecca Bauer-Kahan, D-Orinda

Bauer-Kahan’s bill would require AI models to be proven unbiased prior to launch, in part by mandating an annual impact assessment of each model.

SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Systems

Primary sponsor: Sen. Scott Wiener, D-San Francisco

Wiener’s bill, the most ambitious in scope, would establish a framework for state regulation of large-scale AI models, including a new division (called the Frontier Model Division) within the California Department of Technology (CDT) to certify compliance with safety requirements. 1047 would also create a public cloud computing cluster, called CalCompute, focused on research into the safe and secure deployment of large-scale AI models.

Why do these three matter? Because at this stage of the legislative calendar, they carry more weight than the rest: a combination of potential benefit, specificity of language, and political capital behind them.

Let’s dive into each one.

AB 2013: Daylight the training data

The proposed Training Data Transparency Act (AB 2013) represents one of the first serious efforts to enact safeguards around the data used to train AI models like OpenAI’s ChatGPT, Google’s Gemini chatbot, and Microsoft’s Copilot. Currently the use of that source material is unregulated, which means an AI model could learn equally from accurate information and harmful falsehoods. That can lead to the generation of biased or untrue outputs, hallucinations, and the perpetuation of damaging fabrications.

The bill as introduced is brief—two pages—and will need to be filled out with more specifics as it moves through committee. But its essential idea aligns with our mission here at Transparency Coalition: Move training data into daylight.

AB 2013 would require an AI model developer to post training data documentation on the company’s website. What would that look like? Nobody’s quite sure yet. We are so early in the AI regulation era that this is very much a live issue.
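For illustration only, here is a minimal sketch, in Python, of the kind of fields a posted training-data disclosure might cover. Every field name and value below is our own hypothetical example; the bill does not yet prescribe a format.

# Hypothetical sketch only: one way a developer might structure a public
# training-data disclosure under an AB 2013-style rule. Field names and
# values are illustrative; the bill does not yet specify a format.

import json

training_data_disclosure = {
    "model_name": "ExampleGPT",           # hypothetical model
    "developer": "Example AI, Inc.",      # hypothetical company
    "datasets": [
        {
            "name": "Example Web Crawl 2023",      # hypothetical dataset
            "source": "publicly crawled web pages",
            "collection_period": "2021-01 through 2023-06",
            "license_or_legal_basis": "mixed; documented per domain",
            "contains_personal_information": True,
            "contains_copyrighted_material": True,
        },
    ],
    "synthetic_data_used": False,
    "cleaning_and_filtering_summary": "deduplication, toxicity filtering",
    "last_updated": "2024-03-01",
}

# The disclosure could then be published as a JSON document on the company's website.
print(json.dumps(training_data_disclosure, indent=2))

Whether lawmakers require something this granular, or something far looser, is exactly the question the committee process will have to answer.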

The question is how to strike the right balance between society’s need for “good data” verification and a company’s need to protect its proprietary information. Mandating public access to an AI model’s exact training data, some companies argue, would be like forcing Kentucky Fried Chicken to publish its eleven-herbs-and-spices recipe on KFC.com. Call this the Secret Sauce argument. In some cases, this can be a legitimate concern. Too often, though, it’s used as a broad excuse to paper over the presence of data that is inaccurate, harmful, or illegally obtained. There are plenty of industries that thrive under—and because of—appropriate regulations that require companies to allow audit access to their underlying systems. Think of the financial sector, for example.

The bill’s primary sponsor, Assemblymember Jacqui Irwin, chairs the Assembly’s Select Committee on Cybersecurity, which gives her standing, and she’s a member of the Privacy and Consumer Protection Committee, whose chair (Asm. Rebecca Bauer-Kahan, see AB 2930 below) has made AI safety a primary focus of her committee’s work this session. We expect AI bills to get airplay and move out of that committee in the coming months.

AB 2013 has not yet been scheduled for a committee hearing.

AB 2930: A civil rights-focused AI impact assessment

The Impact Assessment for Automated Decision Tools Act (AB 2930) is the flagship AI bill introduced by Assemblymember Rebecca Bauer-Kahan, who has emerged as a driving force for AI legislation in the California Assembly. The chair of the Privacy and Consumer Protection Committee released a package of AI-related bills earlier this year, with AB 2930 as her top priority. Other Bauer-Kahan AI proposals include AB 1836 (preventing the exploitation of a deceased person’s intellectual property), AB 2885 (creating a standard definition of AI), and AB 3204 (requiring data-trained models sold in California to register with the state). “Together, the bills create a nation-leading regulatory framework where AI tools are tracked and understood,” Bauer-Kahan said.

AB 2930 would require AI developers and deployers (companies using an AI system) to file an annual impact assessment with California’s Civil Rights Department, with an eye toward preventing discrimination and protecting civil rights. That assessment would include a statement of the AI model’s purpose, benefits, intended use, and deployment. The bill would prohibit companies from using an AI system that contributes to differential treatment or the disfavoring of people based on race, ethnicity, sex, religion, age, or other classifications protected by state law.

Bauer-Kahan’s bill builds on the emergence of impact assessments as the go-to accountability measure in AI regulation. The European Union’s groundbreaking Artificial Intelligence Act, adopted in late 2023 and expected to become law later this year, includes a requirement that AI developers produce rigorous Fundamental Rights Impact Assessments prior to deployment. In Congress, the Algorithmic Accountability Act proposed by Sen. Ron Wyden (D-OR) would similarly require AI model developers to submit impact assessments to the Federal Trade Commission.

With impact assessments, long used as a regulatory mechanism in environmental policy and urban planning, the devil lies in the details. Which specific impacts are assessed, who oversees the approval and enforcement process, and what a noncompliance penalty looks like: these make the difference between a rigorous accounting and a weak smokescreen.

AB 2930 may be considered by the Privacy and Consumer Protection Committee on March 17.

 

SB 1047: A new state agency for AI regulation

Finally, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (SB 1047) contains the most expansive AI regulatory framework proposed so far in Sacramento. State Sen. Scott Wiener has clearly done his homework: SB 1047 runs to 19 pages of detail and would create a new state AI regulatory agency (the Frontier Model Division) housed within the California Department of Technology (CDT). Like Bauer-Kahan, Wiener proposes a kind of impact assessment, but his would be an “annual certification of compliance” signed by a company’s chief technology officer and submitted to the Frontier Model Division. And there’s more:

1047 would create CalCompute, a public state-owned and state-hosted cloud computing cluster designed to foster greater access and innovation. (Because the computing power needed to develop new AI models is really, really expensive.)

Wiener’s bill arrived with the support of a number of AI policy groups (Center for AI Safety Action Fund, Encode Justice, Economic Security California), Turing Award-winning researchers, AI startup founders, and national security experts. But SB 1047 has an Achilles heel: startup costs. The bill authorizes the new Frontier Model Division to assess certification fees, but as with any new agency, those fees won’t start rolling in until the division has the staff, office space, and IT systems needed to collect them.

SB 1047 breaks new ground in laying out some of the basic term definitions that will be critically important in the evolution of AI regulation. While most AI bills work with vague and sometimes confused terms, Wiener sets down concrete language that defines “artificial intelligence model,” “critical harm,” “derivative model,” and a new threshold for extremely large AI models: systems trained on more than 10²⁶ floating-point operations (FLOPs) of computing power at a cost of more than $100 million. In a policy space with a steep learning curve, look for SB 1047’s language to turn up in other proposals in other states over the coming months.
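To see how that threshold would work in practice, here is a back-of-the-envelope sketch in Python. The figures and variable names are our own hypothetical examples, and the bill’s final text, not this sketch, would control what counts as a covered model.

# Rough illustration of the coverage threshold as the bill describes it:
# more than 10^26 floating-point operations of training compute AND a
# training cost above $100 million. Hypothetical numbers throughout.

COMPUTE_THRESHOLD_FLOPS = 1e26       # total training operations
COST_THRESHOLD_USD = 100_000_000     # total training cost in dollars

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a training run would cross the bill's proposed threshold."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd > COST_THRESHOLD_USD)

# A hypothetical frontier-scale run crosses the line...
print(is_covered_model(training_flops=3e26, training_cost_usd=250_000_000))   # True

# ...while a smaller research model does not.
print(is_covered_model(training_flops=5e23, training_cost_usd=2_000_000))     # False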

SB 1047 will be considered by the California State Senate Judiciary Committee on April 2. 
