What is a ‘duty of care’ and how does it apply to artificial intelligence?

A company’s board of directors has a basic duty of care not to cause harm to others or to release products that cause harm. (Image by Kathrine Jølle Wathne from Pixabay.)

In English common law, the duty of care is a legal obligation that requires a person or group to take reasonable steps to avoid causing harm to others.

Members of society have entered a social contract that includes a duty to not cause harm to others. When considering legal duty as an element of negligence, there is a duty to act reasonably. When a jury is asked to determine if a person or company’s actions were reasonable, the standard is what a reasonable person would have done in the defendant’s situation.

duty of care as a legal duty

Here are some ways to determine if a company owes a duty of care to consumers:

  • The company engaged in the creation of risk which resulted in the consumer’s harm.

  • The company volunteered to protect the consumer from harm, possibly preventing others from protecting the consumer.

  • The company knows or should know that its conduct will harm consumers.

  • Certain relationships—such as business owner and customer, innkeeper and guest, land owner and lessee, doctor and patient—create duties of care.

duty of care as a fiduciary duty

In the corporate world, the duty of care is also a fiduciary duty, performed in good faith, requiring directors and officers of a company to make decisions that pursue the corporation’s interests with reasonable diligence and prudence. This fiduciary duty is owed by directors and officers to the corporation, not the corporation’s stakeholders or broader society.

 

AI and liability concerns

An attorney who specializes in liability law recently offered his view of AI liability concerns for developers and deployers:

“If the AI model is considered a ‘product,’ creators could be held responsible under product liability law, which allows for liability when a product has a defect that makes it unreasonably dangerous.

AI creators could also be liable under a traditional theory of negligence, which requires proving that the creators owed a duty of care to the plaintiff, they breached that duty by failing to act with reasonable care in the design, development or deployment of the AI, the breach caused harm to the plaintiff and the harm was a foreseeable consequence of the creators’ actions.

In some cases, courts might even impose strict liability on the AI creators, holding them responsible for the harm caused by the AI regardless of their intentions or how careful they were, in cases where AI is seen as an inherently dangerous product or activity.”

 

For deployers, Gen AI requires ‘duty of oversight’

As generative artificial intelligence technologies become increasingly important, boards of directors, executive officers, and in-house legal teams managing publicly held companies must take a proactive approach to navigate the opportunities and risks associated with GenAI, consistent with the board’s fiduciary duties.

We’re not talking about AI developers like Meta and OpenAI. We’re talking about every company that deploys an AI system within its operation.

Corporate governance principles require directors to manage corporations consistent with their fiduciary duty to act in the best interest of shareholders. The board’s fiduciary duty comprises three specific obligations: the duty of care, the duty of loyalty, and a more recently established derivative of the duty of care, the duty of supervision or oversight.

The duty of supervision stems from the 1996 Caremark case, where the Delaware Court of Chancery decided that the board has a duty to assure the adequate existence and operation of corporate information and reporting systems. The Caremark principles were further clarified:

  • In a 2021 lawsuit against Boeing, the court established an enhanced duty of supervision where the nature of a corporation’s business presents unique or extraordinary risk.

  • In 2023, the Caremark duty of supervision was extended to executive management in litigation against McDonald’s Corporation.

lack of transparency puts corporate deployers at risk

Many things can go wrong with the use of generative AI: hallucinations, deepfakes, algorithmic bias, and difficulties in evaluating an automated decision-making tool.

Each of those failure modes exposes a publicly held company to material risk.

It is critically important for company officials and board members to understand the quality and level of risk inherent in the AI systems the company deploys. Currently, AI developers are not required to disclose any information about the provenance, purpose, or training data used in the creation of their AI models. When it comes to AI systems and quality control, companies today are acting on blind faith.

This is extremely irregular. In other business realms, transparency and quality assurance are standard practice. Corporations regularly undergo financial audits to provide assurance to investors, creditors, suppliers, and vendors. Software developers commonly offer test reports to their clients to demonstrate quality assurance.

By operating without any insight into the data used to train today’s foundational AI systems, corporate officials and board members risk breaching their duty of supervision or oversight.

Transparency offers a path to fulfilling duty of care

When AI developers are required to document and publish basic information about the datasets upon which AI models are trained, companies using those AI systems will be able to properly evaluate the quality and manage the risks inherent in their use.
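
To make that concrete, here is a minimal sketch in Python of what a published training-data disclosure might contain and how a deploying company could review it as part of its oversight process. The record fields and the review_disclosure check are illustrative assumptions, not an existing standard or any developer’s actual format.

```python
from dataclasses import dataclass, field

# Hypothetical training-data disclosure a developer might publish alongside a model.
# Field names are illustrative assumptions, not an established standard.
@dataclass
class TrainingDataDisclosure:
    model_name: str
    intended_purpose: str
    dataset_sources: list[str]          # provenance: where the training data came from
    collection_period: str              # when the data was gathered
    known_limitations: list[str] = field(default_factory=list)
    bias_evaluations: list[str] = field(default_factory=list)

def review_disclosure(d: TrainingDataDisclosure) -> list[str]:
    """Flag gaps a deployer's oversight process might want addressed before use."""
    gaps = []
    if not d.dataset_sources:
        gaps.append("No provenance listed for the training data.")
    if not d.bias_evaluations:
        gaps.append("No documented bias or quality evaluations.")
    if not d.known_limitations:
        gaps.append("No stated limitations for the intended purpose.")
    return gaps

# Example: a sparse disclosure surfaces the questions the board should be asking.
disclosure = TrainingDataDisclosure(
    model_name="example-model",
    intended_purpose="customer-support drafting",
    dataset_sources=[],
    collection_period="unknown",
)
print(review_disclosure(disclosure))
```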

‘Duty of care’ means understanding how an AI system was trained, and how it operates. Transparency gives company officials the tools to fulfill that duty of care.

 

ethical tech companies will welcome a ‘duty of care’ approach

A helpful explainer from the Business & Human Rights Resource Centre:

Powerful legal and regulatory tools require companies to demonstrate a ‘duty of care’ in designing and producing their goods so they are safe for release and use.

These laws demand that companies assess the risks of their products and demonstrate clear efforts to mitigate them, before and after the product is released onto the market.

Toasters and microwave ovens undergo rigorous testing. New automobile models must meet exacting safety standards. In democratic societies, this regulatory approach in the physical realm usually works well—including where design advances quickly. The same method is available with regard to tech companies and AI.

With fast-moving technology, this approach future-proofs our societies’ regulations and the rights of people: It is the companies that launch and profit from these technologies that must assess the human rights risks of new digital designs and ensure they are safe, or face heavy penalties.

The European Union recently implemented the most powerful and relevant legislation to date: the Corporate Sustainability Due Diligence Directive (CSDDD), which took effect in July 2024.

The EU’s CSDDD demands companies assess likely and severe human rights and environmental risks and impacts their business model generates across their full value chain. They must then take reasonable steps to prevent risks, or end and remedy the harm. If they fail in the duty of care, then they face civil liability risks and costly administrative punishments.

This approach now needs to be applied robustly to the digital realm. Responsible tech companies will welcome a duty of care approach. Investment in their due diligence will become far less costly than the price of liability for reckless product releases. Rising public concern about the power and irresponsibility of tech giants is driving active pursuit of legal accountability. Courts, regulators, and politicians are answering this call in greater numbers.

Learn More:


TCAI Guide to Search Tools: Was Your Data Used to Train an AI Model?