Former OpenAI, Meta, Google officials urge Congress to adopt AI safety, transparency rules

Former OpenAI board member Helen Toner, testifying Tuesday before a Senate panel, urged Congress to "set transparency requirements" and incentivize safety.

In an unusually candid two-hour hearing yesterday, former officials from major AI companies Google, Meta, and OpenAI urged members of the Senate Judiciary Committee to enact legislation to protect society and encourage innovation.

“Clear regulation is advantageous to innovation in AI,” said David Evan Harris, a former member of Meta’s responsible AI team. “CEOs of the leading AI developers have publicly called for regulation. But I see a disconnect: As they call for regulation, tech lobbyists come in with the goal of killing or hobbling every piece of legislation related to AI.”

Harris told the committee that having nationwide “rules of the road” would greatly benefit AI companies. Without a clear federal law, he said, AI developers “will end up tied up in court, leading to more cost and confusion, trying to comply” with 50 different laws enacted by 50 different states.

Harris, who now teaches at UC Berkeley, testified before the Judiciary Committee yesterday in a session focused on AI oversight. The hearing also featured former Google AI research scientist Margaret Mitchell, former OpenAI official William Saunders, and former OpenAI board member Helen Toner.

No federal AI safeguards exist

While the European Union has adopted clear regulatory schemes like the EU AI Act and the General Data Protection Regulation (GDPR), there is no federal law covering artificial intelligence and data privacy in the United States.

That’s led state legislatures to start enacting their own laws to protect personal data and intellectual property, outlaw malicious deepfakes, and require transparency in AI training and use. Many AI critics and advocates agree on one point: Congressional legislation would clarify and simplify the rules for everyone.

A culture of reckless risk-taking

Helen Toner served on the nonprofit board that governs OpenAI from 2021 until late 2023. She resigned due to concerns over the safety of the company’s AI products and a lack of trust in CEO Sam Altman. Toner told the Senate committee that the science of measuring AI risks “is extremely immature,” while companies working toward advanced AI are “subject to enormous pressure to move fast, beat their competitors to market, and raise money from investors.”

Left to their own devices, the witnesses said, major AI companies reward a “move fast and break things” mentality while devaluing safety and integrity initiatives.

Speed is incentivized, safety devalued

“OpenAI has specific goals,” William Saunders explained. “One of them is ‘maintaining research velocity.’ So when it comes to security, OpenAI is reluctant to do things that might slow down researchers. There are enormous incentives to be seen as leading in the AI space. We need some kind of regulation to incentivize doing the right thing. Otherwise you’re fighting against company culture and internal incentives.”

Former Google AI research scientist Margaret Mitchell echoed that thought. “It’s difficult to get promoted if you focus on safety,” she said. “Your work is preventing bad things from happening, but if they don’t happen it’s impossible to prove they would have happened without your work.” Engineers and managers who work in safety, she said, are far less likely to rise to leadership roles within the company.

Start with transparency requirements

Toner, the former OpenAI board member, offered specific ideas about where to begin.

Congress should “set transparency requirements for developers of high-stakes AI systems,” she said, “including requirements regarding training data, capability testing, safety testing, risk management practices, internal deployments, safety cases, and real-world incidents.”

Toner also urged lawmakers to actively support the development of “a rigorous third-party audit ecosystem, for example by requiring audits for some AI systems and establishing a federal authority that can license auditors.” Independent audits and certifications, she said, lie at the heart of effective regulation in many other industries—and they operate without stifling innovation or growth.

To incentivize a duty of care within AI companies, Toner asked lawmakers to clarify how liability for AI harms should be allocated. Clear liability rules, she said, would act as a real economic counterweight to the enormous incentives that investors and the market give to companies seen as cutting-edge risk-takers.

Congress: many AI bills, no action

Sen. Richard Blumenthal (D-CT) and Sen. Josh Hawley (R-MO), the chair and ranking member, respectively, of the Senate Judiciary Committee panel, introduced their Bipartisan Framework for U.S. AI Act in late 2023, which offered an outline of major goals for Congressional AI legislation. More than 120 AI-related bills are currently floating around Congress, but none has gained significant traction.

Meanwhile, 678 AI-related bills were introduced in 45 states during the 2024 state legislative sessions. Of those, 80 were enacted into law.

During his many years in office, Blumenthal said, he has heard one consistent refrain from business executives in a variety of industries. “‘Give us the rules,’ they say. ‘Just tell us the rules. They need to be clear and they need to be stable.’ I take them at their word.”

“We recognize the enormous good that can come from AI,” Blumenthal added. “But there are also dangers and potential downsides. The point here is to deal with the downsides, and not rely only on the creators to oversee what they’re doing. We need to set out some rules of the road to protect both them and the public.”

Published Sept. 18, 2024
