California’s AI Working Group Report is good. Here’s how it could be better

California’s AI Working Group asked for suggestions. We’re here to offer them. (Photo: Mark Neal / Unsplash.)

In Sept. 2024, Gov. Gavin Newsom announced the formation of a new task force known as the Joint California Policy Working Group on AI Frontier Models. In the wake of Newsom’s veto of SB 1047, the controversial AI regulatory bill, the governor created the committee of experts to come up with a better plan for AI safeguards.

Late last month the committee released its initial draft report. It’s a pretty good start, with transparency as a foundational element in the working group’s framework.

The structure of the Draft Report provides valuable input on the key elements policymakers should consider when crafting successful AI policy. The authority behind the Draft Report ensures that it will be taken seriously by all stakeholders.

This initiative is well-aligned with TCAI’s core competencies of AI policy advocacy and lawmaker education, and we applaud the cornerstone role that AI transparency plays in the report. 

Upon the draft report’s release, the working group requested suggestions for making the report even better. The Transparency Coalition was among a number of groups and individuals who responded by the April 8 deadline.

Here are our suggestions for improvement, formatted according to the questions asked by the Working Group.

1. What types of questions are you most interested in?

Given the core role that incentives play in the Draft Report, we’re most compelled by the questions surrounding a potential incentive-based framework for AI governance.

The AI industry has a vested interest in blocking any regulation, so little will be accomplished unless incentives are considered more directly.

We propose including more concrete research on successful incentive structures in other industries, along with proposals for how this critical component could apply to the AI industry.

1a. What further questions should be asked?

Other questions that should be raised include:

  • What is the current state of the art in AI transparency assessment?

  • What practices are companies currently following without being transparent, and what are the challenges to being transparent?  

  • Why restrict mandatory reporting to "adverse events"? Why not have random sampling of deployed AI systems to protect against such events?

  • How can those "adverse events" be valued and compared against industry profits?

  • Can we point to concrete harms in California directly attributable to lack of transparency?
     

2. What key factors do you see affecting California’s path forward in AI governance?

AI developers are resistant to regulations that require any form of transparency regarding their data collection and processing practices, model outputs, and model performance evaluations.

Lack of transparency in other industries (tobacco, energy, automotive, etc.) has historically been a detriment to public safety. The Draft Report correctly draws analogies between AI and these industries, thereby making the case for increased AI transparency.

The Report also highlights the need for third-party verification of any safety claims made by AI developers.
 
In 2024, as noted in the Report, California enacted two important laws defining initial transparency requirements for generative AI inputs and outputs. Several bills (AB 412, AB 1405, SB 813 and AB 1064) in California’s 2025 calendar build on these first steps, combining the need for transparency with targeted audits, and they deserve due consideration from the legislature.
 
One key issue affecting AI governance in California is the lack of meaningful incentives for AI developers to employ appropriate care in building models used by California residents. In other industries, product liability and tort laws have ensured that product developers understand the consequences of releasing unsafe products to market.

Product liability legislation (with limitations of liability for developers who exercise a duty of care) can create a stronger and more vibrant start-up environment. Currently, only the largest companies have the resources to build “one size fits all” models which ingest and process vast amounts of data, including copyrighted, personal, and other questionable data.

With appropriate safeguards and liability scope in place, a) large model developers would be mandated to obtain appropriate data licenses and remove harmful data; b) large model developers would be incentivized to test models more comprehensively to limit or eliminate harms, leading to smaller and safer models; c) smaller start-ups would be able to enter the market with smaller, task-specific offerings; and d) California residents and commercial deployers would benefit from a reduced risk of potential harms.

Bills currently under consideration in Sacramento—SB 813, SB 11, AB 1405 and AB 316—create the foundation of an AI model product liability framework, and we hope these bills will be further strengthened and enacted in this calendar year.

3. Where are the gaps in the scientific consensus around AI? How can we bridge those gaps?

Effective governance of any technology requires a clear understanding of performance measurement, data gathering, and trends established over time. AI is no different, especially given that consensus on such metrics is still evolving.
 
TCAI is concerned about the current lack of consensus on many foundational performance and attribute measurements, including model drift calibration, synthetic data management and identification, and capturing utility versus cost across myriad disparate use cases.
 
More immediately, how can we quantify and value the impacts, both good and bad, that AI is having on society, including, at a minimum, impacts on privacy, intellectual property, and security? For the AI supply chain, what criteria could be used to determine that the supply chain is secure and does not present risk?
 
Additional, though by no means exhaustive, concerns include:

  • How is algorithmic bias measured? What steps can be taken to mitigate it?

  • How can we establish benchmarks that represent real-world, broad-impact performance?

  • What should be audited?

4. What could be done to leverage frontier AI for the benefit of all Californians?

The well-being of Californians is threatened by increased budgetary and societal pressures, exacerbated by several core problem areas, including economic disparity, ailing infrastructure and urgent environmental issues.

Using AI to find solutions to these complex problems will require disparate and trusted training data sets, coupled with policies and programs that empower Californians to leverage AI effectively.

Foundationally, such policies should require training data provenance and AI decision transparency, supported by AI skills education and training for policy decision makers and the general public. The resulting trust, reliability, and knowledge will ensure Californians can fully leverage the benefits of AI. 

Specific recommendations include:

  • Clear labeling of AI capabilities and appropriate use cases  

  • Disclosure of AI use to California consumers

  • Mandates for AI tools education and curriculum standards set by the California Department of Education

  • Training and support for small businesses, essential services providers, and governmental agencies

  • Mandates for AI usage in legislative and government operations 

  • PSAs and AI evangelism programs

5. List any further resources that might be helpful to the Working Group.

Here are some TCAI resources for understanding AI transparency, disclosure, and duty of care:



