Two AI transparency bills get first hearing in Washington legislature

Rep. Clyde Shavers, author of HB 1168, testifies before a House committee in the Washington State legislature on Friday, Jan. 17.

Jan. 17, 2025 — Transparency Coalition leaders and other stakeholders testified in favor of two AI transparency bills during their first public hearing in the Washington State House of Representatives earlier today.

HB 1168, which would require developers to publish information about AI training datasets, and HB 1170, which would require the inclusion of discovery tools in AI-generated digital content, were given their initial once-over by members of the House Technology, Economic Development, & Veterans Committee on Friday morning.

The Transparency Coalition has worked closely with the author of the two bills, Rep. Clyde Shavers (D-Island County), as well as co-sponsor and Committee Chair Rep. Cindy Ryu (D-Shoreline), in shaping the bills and advocating for their adoption.

furthering the transparency work started in california

The two bills closely align with the transparency bills adopted by California in late 2024, known as AB 2013 and SB 942. Codifying similar requirements into law in Washington, home of Microsoft, Amazon, and a thriving entrepreneurial tech industry, would protect consumers while establishing a consistent industry standard for all AI developers.

Rep. Shavers offered the measures as balanced proposals that “ensure consistency and clarity for developers operating across state lines.” The goal of the legislation, he said, is to forward the core principle of transparency while not stifling innovation.

The bills, he added, “are positioning Washington as a leader in ethical artificial intelligence development.” They “balance innovation with responsibility, and ensure that technological progress serves the interest of all of our citizens.”

TCAI testifies

TCAI Founder Jai Jaisimha testifies before the House committee on behalf of HB 1168 on Jan. 17.

Transparency Coalition Founder Jai Jaisimha appeared in Olympia on behalf of both bills.

“HB 1168 will provide the public with critical insight into which content (text, image, video, audio, or other content) was used to train or refine a generative AI system,” he said during his testimony.

“A bill with the same requirements was heavily debated, negotiated, and passed with significant bipartisan votes and signed into law in California,” he added. “Companies all over the world are preparing to abide by these provisions or taking other curative measures. This bill will provide Washington state residents the same protections as residents of California will begin to experience in 2026.”

On behalf of HB 1170, Jaisimha stressed the need for individuals and policymakers to be able to tell the difference between what is real and what is fake.

“HB 1170 will help bring much needed transparency to generative AI outputs,” he told the committee. “In a world where consumers more often can’t tell if a particular piece of content is real or manipulated by AI, HB 1170 will make things a lot clearer without interfering with the consumer experience.”

“These are not heavy-handed requirements,” he added, “nor will they inhibit business innovation. As a matter of fact, an industry standards body called the Coalition for Content Provenance and Authenticity, led by Adobe, Microsoft, Amazon, Meta, OpenAI, Google, and others, has already developed and agreed on standards that are compliant with this bill.”

following through on big tech’s pledge

Tom Kemp, the author, entrepreneur, and tech policy thought leader, added his voice to the hearing as well.

Kemp, who collaborated on the California version of HB 1170, emphasized the bipartisan nature of the measure. “This bill is largely based on a bipartisan proposal at the federal level, the AI Labeling Act of 2023, co-sponsored by Sen. John Kennedy (R-LA) and Sen. Brian Schatz (D-HI).”

The bill only applies to the largest providers of generative AI, Kemp added, “so it doesn’t stifle start-ups in California or Washington.” It builds on a pledge that major tech companies made in 2023 to “develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance.”

That pledge remains on OpenAI’s own website. Despite it, the ChatGPT developer has failed to develop or release those mechanisms.

“This bill,” said Kemp, “codifies that pledge,” holding tech companies to their promise.

next steps

The bills’ co-sponsors will now consider revisions and amendments to the measures, with the next look at the proposals expected to come during an executive session of the same House committee on Friday, Jan. 24.

Want to know more? Check out TCAI’s Learn page to find resources about AI, training data, transparency laws, and more.
