Transparency Coalition makes AI ‘Duty of Care’ laws a top 2025 legislative priority

Duty of care laws codify the long-held legal principle that one party must take care not to harm another.

At TCAI we’re starting the new year with a bang: We’ve expanded our priorities to advocate for the adoption of strong, sensible product liability laws that encompass artificial intelligence systems.

That doesn’t lessen our commitment to AI transparency and safety. Transparency is a critical component of the duty of care AI developers must exercise in manufacturing safe and reliable products. The right product liability framework can incentivize transparency and safety while holding AI developers accountable for their products.

“The message we’re sending is that AI developers have a duty of care that’s no different than any other manufacturer,” says Transparency Coalition co-founder Rob Eleveld. “If you sell a product, you have a duty of care to make sure people aren’t harmed by that product.”

AI liability as a rising issue

In recent months there has been rising interest in product liability statutes as a way to incentivize the development of ethical, transparent, and safe AI systems. The Center for Humane Technology recently released a proposed legal liability framework designed to “encourage and facilitate the responsible development and use of the riskiest AI systems, provide certainty for companies, and promote accountability to individual and business consumers.”

That framework would place liability largely with AI developers (companies like OpenAI and Anthropic that manufacture the AI systems) rather than with the thousands of companies that pay to use those systems as second-party deployers.


Duty of care: a longstanding legal principle

The ‘duty of care’ principle goes back to English common law. In its simplest form, the law recognizes that one person has an obligation to take proper care to avoid causing injury to another person.

That idea has expanded over the centuries to include the tenet that manufacturers and sellers have a duty of care towards consumers. They are legally obligated to ensure their products are safe for their intended use. Product liability laws, mostly established at the state level, evolved to precisely define these obligations and give businesses certainty about their legal responsibilities.

Who is liable? Courts and lawmakers to decide

In today’s fast-evolving artificial intelligence ecosystem, there is enormous uncertainty about harm and liability with regard to AI models, systems, and chatbots. When an AI system deployed by a second-party company harms a consumer, who is at fault: the deployer, the AI system’s original developer, or no one?

That theoretical question will soon be put to the real-world test in federal court.

One of the earliest and most heartbreaking legal tests is coming via a lawsuit filed by the mother of a 14-year-old Florida boy who died by suicide in February 2024 after becoming obsessed with an AI chatbot manufactured by the company Character.ai. The Florida mother has accused the company’s product of initiating “abusive and sexual interactions” with her son and encouraging him to take his own life.

Photo caption: The mother of 14-year-old Sewell Setzer has filed a federal lawsuit claiming a Character.ai chatbot encouraged her son to take his own life.

No Section 230 immunity for AI products

Many of today’s most powerful and popular AI products are manufactured by the world’s leading tech companies, including Google, Microsoft, and Meta. For decades, those companies have enjoyed relative immunity from product liability laws thanks to Section 230 of the 1996 Communications Decency Act. That federal law shields platforms like Google Search, Facebook, and Instagram from liability on the premise that they merely host speech created by their users rather than publishing it themselves.

Currently, most AI developers are operating under the unspoken assumption that products like Character.ai and ChatGPT are covered by Section 230.

AI systems, however, are not platforms. They are products. Companies like OpenAI and Anthropic manufacture specific product releases such as GPT-4o and Claude 3.5 Haiku. Adobe officials publicly refer to the company’s AI tools as products. And Mike Krieger, a co-founder of Instagram, recently joined Anthropic as its CPO: Chief Product Officer.

A top priority for 2025

In the coming year, Transparency Coalition leaders will continue to strongly advocate for the expansion of transparency and disclosure laws, following on the success of California’s AI Transparency Act and Training Data Transparency Act. TCAI founders Eleveld and Jai Jaisimha believe that strong AI-directed liability laws enacted at the state level may be one of the most powerful tools for incentivizing transparency, safety, and accountability in AI development.

We will continue to roll out resources for policymakers and thought leaders in an effort to broaden the discussion and spur legislative proposals that encourage innovation, transparency, and accountability in the artificial intelligence world.

