California bill would create a consumer warning to deter deepfakes
California’s SB 11, the AI Abuse Act, would require a consumer warning on AI systems capable of creating deepfake images, audio, or video. (Image by Storme Kovacs from Pixabay)
April 4, 2025 — A new bill making its way through the California Legislature would require AI developers and deployers to include a consumer warning in public-facing systems. The warning would remind users about state laws prohibiting the creation and distribution of unauthorized or harmful deepfakes.
California Senate Bill 11, the Artificial Intelligence Abuse Act, would codify the inclusion of computer-manipulated or AI-generated images or videos in the state’s right of publicity law and criminal false impersonation statutes.
Sponsored by Sen. Angelique Ashby (D-Sacramento), the proposal would also require those selling or providing access to technology that manipulates images, video, and audio to create a warning for consumers about their personal liability if they violate state law.
The proposal received its first hearing earlier this week before the California Senate Judiciary Committee. Transparency Coalition co-founder Jai Jaisimha joined Sen. Ashby in testifying on behalf of the bill in Sacramento.
Sen. Ashby: ‘Many have been harmed’ by deepfakes
During Tuesday’s hearing Sen. Ashby noted the real harm done by deepfakes – images, videos, and voice likenesses that are indistinguishable from authentic content.
“The lack of a comprehensive legal framework for addressing deep fakes and non-consensual images and videos is troubling,” she said. “This leaves individuals vulnerable to various forms of exploitation, identity theft, scams, misinformation and misrepresentation of character. Unfortunately, this technology has disproportionately impacted women and young girls … through the creation of sexually explicit photos and videos. Many have been harmed. In one well known case, pornographic AI-generated images created of Taylor Swift gained over 45 million views on social media in a 17-hour period.”
TCAI’s Jaisimha: Warnings can help deter deepfakes
Jai Jaisimha, co-founder of the Transparency Coalition.AI (TCAI), testified in support of the bill.
“AI models are able to produce pornographic deepfakes because model developers have adopted a ‘more is better’ attitude towards acquiring and ingesting training data—including pornographic imagery—without exercising proper care to curate and remove these abhorrent elements,” Jaisimha said in his prepared remarks.
The warnings that would be required under the legislation could actually discourage illegal use of AI deepfakes, he added.
“Published research on anti-piracy warnings and their efficacy has shown that clearer and more explicit warnings can modify user behavior and deter would-be pirates,” Jaisimha said. “While these warnings may not deter a professional pornographer or scammer, they may well deter the average user from creating and spreading this type of content.”
Building on California’s strong ‘right of publicity’ law
SB 11 would create civil and criminal liability for those who violate the law, and would establish a civil penalty of up to $25,000 for each day that false AI-generated content is made public without a consumer warning.
California’s current right of publicity law makes liable for damages anyone who knowingly uses another person’s name, voice, image, or likeness to advertise or sell products or services without that person’s consent.
A person or entity who violates the current law is liable for actual damages or statutory damages of $750, whichever is greater, and must surrender any profits made from the unauthorized use of a person’s likeness. The current law also provides for punitive damages.
Current law also provides that anyone who impersonates another person for the purpose of harming, intimidating, threatening, or defrauding that person is guilty of a crime punishable by a fine, imprisonment, or both.
SB 11 would eliminate a provision in the current right of publicity law that shields an employer from liability for using an employee’s photograph when the use was incidental rather than essential and was neither known nor intentional.
The bill passed the California Senate Judiciary Committee 12-0 and is next headed to the Senate Committee on Public Safety.