Gov. Newsom signs California AI Transparency Act into law, a historic first for AI disclosure

California Gov. Gavin Newsom, shown here after signing AI bills into law earlier this week, today added the California AI Transparency Act to his state's list of accomplishments. 

Sept. 19, 2024—In a significant early step for AI safety and transparency, Gov. Gavin Newsom signed the California AI Transparency Act into law earlier this morning.

The Act, authored as SB 942 by state Sen. Josh Becker (D-Menlo Park) with support from noted privacy and AI safety advocate Tom Kemp, creates the nation’s first AI notification standard.

The Act requires AI developers to embed latent disclosures in the media their AI creates, and to post an AI decoder tool on their website. The decoder tool allows consumers to upload digital media and discover whether the developer’s AI was used to create or alter that content.
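The statute leaves implementation to providers, but the core idea is straightforward: the tool extracts any disclosure embedded in an uploaded file and checks that it is genuine. Below is a minimal, hypothetical sketch in Python; the metadata field names, record format, and shared signing key are assumptions for illustration only, not anything SB 942 or any provider specifies.

```python
import hashlib
import hmac
import json

from PIL import Image  # pip install Pillow

# Hypothetical shared signing key. A real detection tool would verify an
# asymmetric signature (as in C2PA manifests), not a shared secret.
PROVIDER_KEY = b"example-provider-key"


def decode_ai_disclosure(path: str) -> dict | None:
    """Return the embedded AI disclosure if the file carries a valid one."""
    img = Image.open(path)
    chunks = getattr(img, "text", {})        # PNG tEXt/iTXt metadata chunks
    payload = chunks.get("ai_disclosure")    # illustrative field name
    signature = chunks.get("ai_disclosure_sig")
    if payload is None or signature is None:
        return None                          # no latent disclosure present
    expected = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None                          # metadata present but not authentic
    return json.loads(payload)


if __name__ == "__main__":
    disclosure = decode_ai_disclosure("uploaded.png")
    print("AI-generated:", disclosure is not None)
```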

California State Sen. Josh Becker said:

“It’s crucial that individuals know if content was created by AI or not. SB 942 is a significant advancement over anything that’s come before because it requires large Gen AI companies to both label AI generated content and provide an AI detection capability.

“By signing this bill, Governor Newsom is providing Californians with essential tools to navigate the evolving digital landscape and solidifying our position as a leader in enacting sensible AI regulations that protect consumers without stifling innovation.”

high-priority bill for Transparency Coalition

SB 942 was one of the Transparency Coalition’s (TCAI) top legislative priorities in California this year. TCAI, which initially focused on training data transparency, has expanded its focus to include transparency in both AI inputs and outputs as key ingredients in safer and more ethical AI.

“We are excited about the passage of SB 942,” said Transparency Coalition founder Jai Jaisimha. “We supported the AI Transparency Act because it will help bring much needed transparency requirements to generative AI outputs. SB 942 will provide the public with critical transparency into content that was created or modified by a generative AI system. AI has already been implicated in potential harms to civic society and our electoral process and this legislation will go a long way to beginning to address these harms.”

TCAI's Jai Jaisimha testified in favor of the California AI Transparency Act earlier this year. 

Jaisimha testified on behalf of the bill earlier this year. He highlighted some of the proposal’s key elements, including the requirement that companies embed latent disclosures into the data associated with AI-generated content, and meaningful penalties for violations of the Act.

“These are not heavy-handed requirements, nor do we think they’ll inhibit small business innovation,” Jaisimha testified at the time. “The technology behind it is proven. Even OpenAI is supportive of C2PA and has actually started using it for DALL-E, one of their models.”

critical to establishing authenticity

One of the most pressing issues with generative AI text, images, sound, and video is the growing inability to distinguish between human-created content and AI-created content.

That inability to distinguish has become a critical problem for society, as it hobbles the capacity of individuals to evaluate the evidence of their own eyes and ears.

embedding digital serial numbers 

Provenance is a term of rising importance in the AI policy world, and it lies at the heart of the new California law.

Provenance refers to information that describes the origin and history of a piece of digital content: its source, ownership, and chain of custody. It’s somewhat akin to a digital serial number.

Provenance can be embedded in a piece of digital content in ways that may be revealed (as a visual watermark, for example) or hidden from clear view. Embedding that information helps establish the authenticity, integrity, and credibility of a piece of digital content.
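To make the mechanics concrete, here is a minimal sketch, under the same illustrative assumptions as the decoder sketch above, of embedding a hidden provenance record in a PNG’s metadata with Pillow. The record schema and shared-secret signature are assumptions for illustration; production systems rely on standards such as C2PA, which bind a cryptographically signed manifest to the content.

```python
import hashlib
import hmac
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVIDER_KEY = b"example-provider-key"  # illustrative; real systems sign with PKI


def embed_provenance(src: str, dst: str, generator: str) -> None:
    """Attach a signed provenance record to a PNG as metadata text chunks."""
    record = json.dumps(
        {
            "generator": generator,             # which model produced the content
            "created": "2024-09-19T09:00:00Z",  # illustrative timestamp
            "chain": ["generated", "exported"], # simplified chain of custody
        },
        sort_keys=True,
    )
    sig = hmac.new(PROVIDER_KEY, record.encode(), hashlib.sha256).hexdigest()
    info = PngInfo()
    info.add_text("ai_disclosure", record)      # the latent disclosure itself
    info.add_text("ai_disclosure_sig", sig)     # integrity check for the record
    Image.open(src).save(dst, pnginfo=info)


embed_provenance("generated.png", "disclosed.png", generator="example-model-v1")
```

Metadata chunks like these are easy to strip when a file is re-encoded, which is why robust provenance schemes pair them with watermarks woven into the pixels or audio samples themselves.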
