Big Tuesday in Sacramento: TCAI testifies on California AI bills in committee hearings
Transparency Coalition Co-founder Rob Eleveld testifies on behalf of Sen. Angelique Ashby’s SB 11, protecting against harmful deepfakes, before a California State Senate committee earlier today.
April 22, 2025 — The Transparency Coalition’s team of AI experts showed up in the thick of the AI policy conversation in Sacramento today, testifying and offering technical expertise to three Senate committees considering three separate AI-related bills.
As the home of Silicon Valley and countless AI start-ups, California continues to lead the nation on tech policy and issues of AI transparency, security, and disclosure.
The three bills on today’s docket sit squarely within TCAI’s wheelhouse of issue engagement and technical expertise:
SB 11, Sen. Angelique Ashby’s bill that offers protections against harmful AI-generated deepfakes.
SB 833, Sen. Jerry McNerney’s bill that ensures human oversight of AI systems in areas of critical infrastructure.
SB 468, Sen. Josh Becker’s bill to protect personal information within AI systems.
We have full coverage of the bills and the day’s hearings below.
SB 11: the Artificial Intelligence Abuse Protection Act
Sen. Angelique Ashby, above, has crafted a bill to reduce the creation of harmful AI-generated deepfakes and offer legal recourse to those personally impacted by the misuse of GenAI technology.
Senate Bill 11 would codify the inclusion of computer-manipulated or AI-generated images or videos in the state’s right of publicity law and criminal false impersonation statutes.
Sponsored by Sen. Angelique Ashby, the proposal would also require those selling or providing access to technology that manipulates images, video, and audio to warn consumers about their personal liability should they violate state law.
“The rise of artificial intelligence presents great opportunities,” Ashby told the Senate Committee on Public Safety earlier today. “However, there is a lack of legal framework for addressing deepfakes and nonconsensual images and videos. This leaves individuals vulnerable to various forms of exploitation, identity theft, scams, misinformation, and misrepresentation of their character.”
Ashby noted that harmful AI deepfakes disproportionately affect women and girls. “Of all the deepfake videos, 95 percent are sexually explicit and feature women who did not consent to their creation,” she said. “While these deepfakes often target public figures, easily accessible AI software allows users to create non-consensual content of anyone.”
SB 11 would address the misuse of AI technology by:
Clarifying the existing definition of ‘likeness’ under state law to include AI-generated content
Requiring consumer warnings on AI software
Establishing violations for the misuse of AI technology
Preventing AI-assisted evidence tampering in the courts
Transparency Coalition Co-founder Rob Eleveld testified at the hearing, noting that “deepfakes have quickly become one of the most tangible and widespread harms of AI.”
“These deepfakes can be generated by literally anyone with no real knowledge needed,” he added. “Girls in high schools across the country are being scarred in their youth by fake nudes and fake pornographic videos with their likenesses. Today, victims have no legal recourse, and there are no penalties or accountability to discourage these abuses.”
Harveer Saini, a high school junior and co-founder of the National AI Youth Council, also offered compelling testimony on behalf of the bill.
“As a student, I’ve seen how this technology can affect people’s lives in devastating ways.”
Harveer Saini, co-founder, National AI Youth Council, testifying earlier today.
“A lot of people don’t realize that deepfakes are incredibly easy to make, even for students my age,” Saini said. “53% of young people age 13 to 20 reported that they found tools through an online search engine. And that’s why in 2025, one in ten teenagers age 13 through 17 said they personally know someone who has been the target of nude deepfake imagery.”
“One of my closest friends fell victim to AI-generated impersonations online. It was spread for weeks. The emotional toll this had on her was overwhelming. I watched her fall into severe depression. And I was powerless to do anything to help.”
SB 11 was approved by the committee 6-0 and next moves to the Senate Appropriations Committee.
SB 468: cybersecurity upgrade for AI
SB 468 would require deployers of AI systems that process personal information to take steps to secure that information. The bill would bring AI systems and their deployers in line with existing state and federal laws regarding the secure handling and protection of personal information.
Sen. Josh Becker presented his bill to the Senate Judiciary Committee earlier today: “SB 468 ensures that businesses using high-risk AI systems to process personal data have strong security measures in place.”
“These life-altering systems handle vast amounts of personal data.”
Sen. Josh Becker, testifying on behalf of his bill SB 468 earlier today.
“Artificial intelligence is used to make life-altering decisions in areas like employment, housing, and education,” he added, “and these systems handle vast amounts of personal data.”
“Unlike traditional data systems, they bring unique security vulnerabilities. Attackers can poison training data to manipulate outcomes or use model inversion to extract personal data details,” he said.
TCAI’s Steve Wimmer, above, testified to the need for upgraded security measures to protect personal data processed by AI systems. SB 468, authored by Sen. Josh Becker, was considered at a hearing of the California Senate Judiciary Committee earlier today.
Steve Wimmer, technical and policy advisor to the Transparency Coalition, testified that current legal measures aren’t sufficient to safeguard against the unique vulnerabilities that AI systems present. “These AI systems use vast amounts of data and have an amazing ability to connect the dots in ways that are new and exciting,” he told the committee. “But when coupled with consumer information, they represent big targets for bad actors. Because they learn over time, they can be coaxed into behaving in unexpected ways by new data and new interactions, leading to incorrect or misleading responses.”
“We have information security best practices and technologies that have been proven to secure other types of software systems,” Wimmer added. “Things like HIPAA specifically in health care, and SOC-2 for software systems as a whole. We should use them for AI systems too, and test these systems so they are working as intended. These are common sense best practices that do not represent an undue burden on the deployers of these powerful systems.”
SB 833: human oversight of AI in critical infrastructure
SB 833, authored by Sen. Jerry McNerney, ensures that human oversight is retained when AI systems are used to control critical infrastructure. The bill specifically covers state agencies in charge of critical infrastructure, such as transportation, energy, food and agriculture, communications, financial services, or emergency services.
“SB 833 closes a serious gap in AI safety,” Sen. McNerney told the Senate Committee on Government Organization earlier today. “There’s no standard approach to monitoring AI systems that control our critical infrastructure. SB 833 addresses that by putting a human being in the loop. We want AI to be involved in these systems, but we want a human being in the loop controlling the ultimate decision making. That’s what 833 does, it keeps a human in the loop while ensuring innovation happens responsibly when lives and livelihoods are on the line.”
TCAI’s Rob Eleveld also appeared on behalf of SB 833, noting that the bill would go a long way toward addressing a glaring gap in oversight and improving the safety of the state’s critical infrastructure.
“The bill is limited to state agencies and critical infrastructure; it’s not overreaching,” Eleveld noted, “and it also aligns with the Governor’s Executive Order on AI.”
SB 833 passed out of committee on a vote of 12-0. It now moves to the Senate Appropriations Committee.