First 2025 AI transparency bills filed in Washington State


Jan. 8, 2025 — Washington state Rep. Clyde Shavers got a head start on the 2025 legislative session this week by pre-filing two bills aimed at advancing transparency and safety in artificial intelligence systems.

Shavers’ HB 1168, an AI training data transparency proposal, would require generative AI (Gen AI) developers to post a high-level training data ingredient list when releasing a new or substantially modified system.

The second bill, HB 1170, an AI fair disclosure measure, would require any generative AI system with more than one million monthly visitors or users to include an artificial intelligence detection tool.

Each bill builds on the baseline standards established by California’s AB 2013 and SB 942, the AI transparency bills supported by TCAI and signed into law by Gov. Gavin Newsom last year.

Strong support from TCAI

"We're thrilled to support Rep. Shavers' work on artificial intelligence transparency and safety,” Transparency Coalition Founder Jai Jaisimha said earlier today. “His new bills establish the importance of transparency in AI inputs and outputs. That’s a foundational element of ensuring public trust and safety in the use of Generative AI. Model developers serving Washington state residents need to begin to demonstrate that they understand their duty of care in bringing these products to market.”

The Washington state legislature is scheduled to formally convene next Monday, Jan. 13. Shavers is one of the legislature’s leading voices on artificial intelligence issues. Last year he told the Whidbey News-Times that Washington’s position as a leading technological hub makes it imperative that the state drive the terms of ethical AI development and use.

“It’s important for us to collaborate and to coordinate and to listen to all these experts and the public and try to understand how do we foster this innovation?” he said. “How do we ensure that it’s not being stifled? At the same time, how do we ensure that people are protected?”

More on AI disclosure and training data transparency

Today it’s possible to embed provenance in an AI-created image or video. Adobe and other companies are already doing it.

Training datasets are the foundation of artificial intelligence systems. They’re like the ingredients in a food product.
