Transparency Coalition weighs in on California’s SB 1047: Watchdog or lapdog?
This Transparency Coalition op-ed is co-published with Tech Policy Press, a nonprofit media venture dedicated to promoting new ideas, debate and discussion at the intersection of technology and democracy.
When California state Sen. Scott Wiener introduced SB 1047 a little more than six months ago, we at the Transparency Coalition saw it as a fair-minded effort to create safeguards around the world’s largest AI models. Today, we view SB 1047 as a cautionary lesson: This is what happens when the tech lobby de-fangs a bill and turns a watchdog into a lapdog.
In its original form, back in February, Wiener’s bill failed to address issues around training data and AI transparency—our group’s focus—but we nonetheless admired its core tenets.
The proposal required developers to ensure their AI models could not create critical harm, for the first time establishing a duty of care within the AI industry. And the bill had teeth. A new state agency (the Frontier Model Division) would enforce SB 1047’s legal requirements. The state attorney general would have the ability to sue negligent companies.
Earlier this month, as the bill made its way toward its final floor votes in Sacramento, pressure from the tech and business lobby forced Sen. Wiener to effectively erase SB 1047’s enforcement mechanisms. The Frontier Model Division disappeared. Requirements became suggestions.
Most significantly, SB 1047 no longer allows the state attorney general to sue AI developers for negligent safety practices before something goes wrong. The AG can sue a developer only after a model or service causes harm. That's like holding an offshore oil drilling company accountable for its negligent practices only after a disastrous spill. It does nothing to ensure safe drilling practices that would prevent the spill in the first place.
Despite our serious concerns, we hope that if SB 1047 passes, it creates an initial foundation upon which California policymakers will continue to build, creating a nurturing environment for the world's most innovative, transparent, and ethical AI industry. But our fear is that enacting SB 1047 in its watered-down form will only allow lawmakers in Sacramento to ignore the urgent need for AI regulation in future sessions, under the misguided belief that SB 1047 "took care of all that."
Other bills focused on AI (such as AB 2013, which we support and which is also up for a vote in the California Senate) build on existing provisions in California law by adding transparency requirements for the training data used to develop AI models. Industry practices in the collection and exploitation of training data have been linked to harms such as hallucinations, mis- and disinformation, and violations of user privacy.
There's an old saying in sports: when it comes to defending against transcendent athletes, you can't stop them; you can only hope to contain them. By dismantling SB 1047's requirements and enforcement mechanisms, the tech lobby hasn't fully defeated the bill, but it has severely weakened it. And that, sadly, may be enough to contain its ability to have any real impact on AI safety.