Washington State AI Task Force raises AI transparency, accountability as priorities for 2025 session

Washington State Gov.-elect Bob Ferguson championed the creation of the AI Task Force last year while serving as Attorney General, and is expected to advocate for its recommendations in Olympia starting later this month.

Jan. 2, 2025 — Washington State’s AI Task Force released its first report earlier this week, with strong recommendations to require transparency and accountability in the development of artificial intelligence systems.

“Transparent AI systems, which are understandable and accessible to users, developers, and regulators, are a cornerstone of trustworthy AI,” the report stated. “When AI systems are transparent, they are understandable, manageable, and accountable.”

The AI Task Force's inaugural report underscores the need for state legislation to establish a framework of AI transparency and accountability rules that protect society while encouraging innovation.

The task force members wrote:

The initial focus of the Task Force will be to establish a set of principles that define the essential elements of trustworthy AI. Trustworthy AI principles are crucial for policy development to provide a framework that helps governments, organizations, and institutions establish rules, regulations, and strategies that guide the responsible and beneficial development and use of AI, minimize harm, and respect fundamental rights.

Trustworthy AI refers to AI systems that are designed, developed, and deployed in a manner to be transparent, reliable, ethical, and aligned with human values. This ensures these systems can be trusted by users, stakeholders, and society to perform tasks in a way that is equitable, accountable, and beneficial. 

The Task Force also recommended closing a loophole in SHB 1999, a law passed last year addressing AI-generated child sexual abuse material (CSAM). That bill, sponsored by Rep. Tina Orwall (D-Des Moines), expanded existing criminal definitions to include AI-generated material. The loophole closure would remove language that requires CSAM to involve an “identifiable” child in the content. That language, the report concluded, “makes the law more difficult to enforce and is counter-productive to efforts to protect children.”

report sets the stage for 2025 session, opening Jan. 13

The report’s release comes less than two weeks before the 2025 legislative session, which convenes Jan. 13 in Olympia.

Transparency Coalition leaders are currently working with state lawmakers to craft bills that answer the Task Force’s call. A number of proposals are expected to build on the foundation established by California’s AI Transparency Act (SB 942) and AI Training Data Transparency Act (AB 2013). Those laws, enacted in late 2024 with TCAI’s collaboration and backing, established the nation’s first legal requirements for AI disclosure and training data transparency.

task force: much has changed in one year

The Washington State Artificial Intelligence Task Force was established by state legislators in March 2024 at the request of state Attorney General Bob Ferguson in a bipartisan bill sponsored by Sen. Joe Nguyen (D-White Center) and Rep. Travis Couture (R-Allyn). The task force regularly convenes technology experts, industry representatives, labor organizations, civil liberties groups and other stakeholders to discuss AI benefits and risks and make recommendations to the Legislature.

In early 2024, Transparency Coalition co-founders Rob Eleveld and Jai Jaisimha worked with legislators in Olympia to include the issues of transparency and training data in the AI Task Force’s establishing charter. At that time, few outside the tech community knew anything about training data. Since then, thanks in part to TCAI’s advocacy, policymakers and the public have become aware of the critical role training data plays in the creation of ethical, accurate, and safe AI systems.

download the full report

The full Task Force report is available for download.
