Legislative update: See which AI bills are flying and which are dying as the clock ticks down
It’s mid-April and the legislative season is winding to a close in many states. Meanwhile, AI-related bills are just getting started in states with longer timelines such as California, New York, and Texas.
At the Transparency Coalition we’re supporting a number of bills that advance AI transparency and security across a handful of critical states.
Here’s a brief update on where those efforts stand.
Top AI Bills Across the USA: April 2025
California
California’s legislative hearings are now in full swing and TCAI is in Sacramento to offer expert testimony and advocate on behalf of a number of AI transparency bills. These are the top AI bills currently in play:
Sen. Josh Becker’s SB 468 would require deployers of AI systems that process personal information to take steps to secure that information. The bill would bring AI systems and their deployers in line with existing state and federal laws regarding the secure handling and protection of personal information.
SB 468 was referred to the Senate Judiciary Committee and is scheduled for its first hearing on April 22.
SB 11: Artificial Intelligence Abuse Act
Sponsored by Sen. Angelique Ashby (D-Sacramento), SB 11 would codify the inclusion of computer-manipulated or AI-generated images or videos in the state’s right of publicity law and criminal false impersonation statutes. The bill would require AI developers and deployers to include a consumer warning in public-facing systems. The warning would remind users about state laws prohibiting the creation and distribution of unauthorized or harmful deepfakes.
SB 11 received its first hearing on April 1 before the Senate Judiciary Committee. Transparency Coalition co-founder Jai Jaisimha joined Sen. Ashby in testifying on behalf of the bill in Sacramento. We have full coverage of the bill and the hearing here.
AB 1405 and SB 813: Standards for Third-Party AI Auditors
These two bills offer separate takes on the pressing need for trusted third-party auditors to act as assurance agents in AI, just as auditors and accounting firms do in the financial world.
AB 1405, co-authored by Assm. Rebecca Bauer-Kahan (D-Orinda) and Sen. Scott Wiener (D-San Francisco), would establish an enrollment process for AI auditors within the Government Operations Agency. The bill neither mandates audits nor prescribes how audits are to be carried out. Instead, it creates a publicly accessible repository of AI auditors and requires that they adhere to minimum standards of transparency, confidentiality, and ethical conduct. It also provides for whistleblower protections in certain cases.
A full analysis of AB 1405 is available here. The bill was approved by the Privacy and Consumer Protection Committee on April 1 and now sits with the Assembly Appropriations Committee.
SB 813, sponsored by Sen. Jerry McNerney (D-Stockton), designates the California Attorney General to establish a process to certify private third-party MROs (multistakeholder regulatory organizations) in the AI space. These MROs would operate as the AI equivalent of financial auditors or accounting firms, ensuring that an MRO-approved AI model and/or application appropriately mitigates specific high-risk impacts. Those safety risks include cybersecurity, chemical, biological, radiological, and nuclear threats, malign persuasion, and artificial intelligence model autonomy and exfiltration.
Liability coverage: In a civil action asserting claims for personal injury or property damage caused by an AI model or application, the bill would make it an affirmative defense to liability that the model or application in question was certified by an MRO at the time of the plaintiff’s injuries.
SB 813 sits with the Senate Judiciary Committee but has not yet been scheduled for a hearing.
California’s AI Copyright Act is aimed at increasing transparency around the use of copyrighted materials to train generative artificial intelligence (GenAI) systems and models.
The bill, authored by Assemblymember Rebecca Bauer-Kahan (D-Orinda), was approved by the Privacy and Consumer Protection Committee on March 18, following testimony from TCAI’s Jai Jaisimha and SAG-AFTRA’s Joely Fisher. It now awaits an as-yet unscheduled hearing by the Assembly Judiciary Committee.
AB 316: Artificial Intelligence Liability
AB 316, sponsored by Assm. Maggie Krell (D-Sacramento), would prevent AI developers and deployers from escaping liability claims by saying, essentially, “the machine acted on its own.”
The bill establishes that in civil actions, where a plaintiff alleges harm caused by AI, AI developers and users are prohibited from asserting that the AI acted autonomously as a defense. An analysis of the bill is available here.
The bill was approved by the Assembly Privacy and Consumer Protection Committee on March 25, but was re-referred to the same committee for further consideration.
AB 1064: Leading Ethical AI Development (LEAD) for Kids Act
This bill from Assm. Rebecca Bauer-Kahan (D-Orinda) would create a new AI standards board within the state’s Government Operations Agency, and charge its members with evaluating and regulating AI technologies for children. It would also impose a series of checks and balances—with an emphasis on transparency and privacy protections—to ensure only the safest AI tools make it into the hands of children.
TCAI has full coverage of the bill here.
Nebraska
LB 504: Age-Appropriate Online Design Code Act
This bill, sponsored by Sen. Carolyn Bosn (R-Lincoln), is in TCAI’s estimation the strongest of the four “digital kids” bills introduced by Gov. Pillen in January.
LB 504 contains several provisions to protect Nebraska’s children from the harms of social media and other online services. The bill would require social media and other online services to include design features that prevent compulsive usage, severe psychological harm such as anxiety and depression, severe emotional harm, identity theft, and privacy violations. The Age-Appropriate Online Design Code Act would also require these services to give parents the ability to manage their child’s privacy and account settings and to restrict the hours during which their child can use the service. Importantly, the bill restricts push alerts during hours when children are in school or sleeping.
After vigorous debate on the floor on Feb. 26, LB 504 moved forward and reached the “Select File” stage on March 4.
In Nebraska, Select File is the second debating and voting stage. Bills on Select File may be indefinitely postponed or advanced to the next stage. After Select File, bills are readied for a final reading and floor vote.
Nebraska’s unicameral legislature is scheduled to adjourn on June 9.
New Mexico
HB 60: Artificial Intelligence Act
HB 60, Rep. Christine Chandler’s Artificial Intelligence Act, was one of the most promising AI bills of 2025. The measure made significant progress this session but stalled in the House in early March.
Ultimately, the bill did not receive a floor vote prior to New Mexico’s legislature adjourning on March 22.
New York
A6578: The AI Training Data Transparency Act
Assemblymember Alex Bores (D-Manhattan) introduced the Artificial Intelligence Training Data Transparency Act, which would require AI developers to clearly post on their websites information about the data used to train their generative AI models or systems.
The bill would require developers to include information about training data, including: the sources or owners of the data, descriptions of all data points, whether the datasets were purchased or licensed, and whether they contained personally identifiable information on consumers.
A6578 currently sits with the Assembly Science and Technology Committee.
A6540: The Stop DeepFakes Act
Assemblymember Bores’ Stop Deepfakes Act is among the top priorities for the Transparency Coalition in New York. The proposal would require AI-generated or AI-altered images, videos, and other media to be embedded with a label listing their provenance data, which is data that records the origin or history of the digital content.
The label would act as a credential disclosing to users the origin and authenticity of the content they’re consuming. The idea is that it’s easier to prove what is real material than it is to identify every AI-generated deepfake on the internet. The bill would also require the labeling of fake content created by AI, so users could identify it for what it is and see which AI system generated it.
Bores’ bill would use the provenance labeling standard created by the Coalition for Content Provenance and Authenticity (C2PA), which is emerging as an industry standard and has been adopted by Adobe, Amazon, Google, Meta, Microsoft, and OpenAI.
The bill would also require social media platforms, including Facebook, Instagram, and others, to preserve the original provenance of any AI material uploaded to their sites.
A6540 currently sits with the Assembly Science and Technology Committee.
A6453: The Responsible AI Safety and Education (RAISE) Act
Another bill Bores introduced last month is the RAISE Act, which would implement a number of safety and security requirements before large AI developers could launch their platforms in New York State.
For example, AI developers would be required to create a written safety and security protocol, publish it, and provide a copy to the state attorney general.
AI platforms would also have to conduct an annual review of any safety and security protocol needs that may have emerged and continually update their protocol policy.
It would also require large developers to retain a third-party auditor to review their compliance with the law annually; the audit results would have to be posted conspicuously and submitted to the attorney general.
A6453 currently sits with the Assembly Science and Technology Committee.
The New York State legislature is scheduled to adjourn on June 12.
Texas
The Texas Responsible AI Governance Act (TRAIGA) underwent a dramatic change last month, as we documented in our Revised Guide to TRAIGA, below. The bill’s author, Rep. Giovanni Capriglione, filed a substantially revised version of the measure on Friday, March 14.
The bill received its first hearing on March 26 and was approved by the Delivery of Government Efficiency Committee on April 2.
Washington
SB 5708: Protecting Washington Children Online Act
This was a strong bipartisan bill that would have started to protect kids from the addictive nature of algorithm-driven apps. The bill, sponsored by Sen. Noel Frame (D-Seattle), would require operators of addictive online sites or applications to adopt broad new protections for children, fundamentally changing the way they consume online content.
The proposal would also create notification-free time periods during school hours and at night. Push notifications to minors would not be allowed during those times.
SB 5708 was well-liked by legislators on both sides of the aisle but ultimately took a back seat to the ongoing budget battle between legislators and Gov. Bob Ferguson. April 2 was the cutoff date for bills from the opposite house to pass out of policy committee. SB 5708, which was approved in a Senate floor vote in March, did not make it out of the House Consumer Protection & Business Committee.