TCAI argues for AI openness and accountability at 2024 Nordic Innovation Summit

TCAI’s Rob Eleveld at the Nordic Innovation Summit: ‘Privacy laws are being run over by OpenAI.’ Photo courtesy Nordic Innovation Summit.

Speaking to an audience of more than 200 business and tech innovators, Transparency Coalition.AI founder Rob Eleveld yesterday advanced the cause of openness and accountability in the development and deployment of artificial intelligence systems.

“We believe there should be guardrails around AI,” Eleveld told attendees at the 2024 Nordic Innovation Summit in Seattle. “We believe it can be done without tamping down innovation.”

The Innovation Summit, an annual gathering of business, government, and tech leaders at the National Nordic Museum, offered a two-day program of presentations on AI, aviation, green energy, and urban planning, drawing new ideas from the Nordic nations—Sweden, Norway, Finland, Denmark, and Iceland. Attendees included Washington Gov. Jay Inslee, Swedish State Secretary Sara Modig, Finnish Ambassador Mikko Hautala, Danish Ambassador Jesper Møller Sørensen, as well as executives from Microsoft, Nokia, Corvus Energy, Hiya, BLOXHUB, Neste, and TerraPower.

Responsible AI took center stage on Tuesday morning as Eleveld joined Mary Snapp, Microsoft Vice President of Strategic AI Initiative; Joseph Williams, Washington State Information and Communications Technology Sector Lead; and Mårten Mickos, CEO of HackerOne, the San Francisco-based security vulnerability platform.

2024 Nordic Innovation Summit AI panel, L-R: Mårten Mickos, Rob Eleveld, Mary Snapp, Joseph Williams. Photo courtesy Nordic Innovation Summit.

AI: THREAT OR OPPORTUNITY?

“Whether you believe AI is a threat or an opportunity, you are correct,” Mickos said to kick off the discussion. He pointed to a number of AI-caused “trainwrecks” that have brought businesses embarrassment, financial loss, and brand damage.

“We have these trainwrecks because companies are so eager to go out [with an AI system] that they don’t take the precaution of preparing and testing,” Mickos said. “We all know you need to be prepared first. But businesses don’t think that way. They have boards, CEOs, and others who say, ‘Let’s do AI because otherwise we’ll look stupid and old-fashioned. Our competitors will beat us.’”

What’s needed, Mickos said—beyond better preparation and testing—is an environment that mixes competition with cooperation when it comes to AI safety. “We have learned, in areas like aviation, that the only way to put things in a trustworthy, safe, secure state is by working together.” He added: “In aviation, manufacturers and airlines are competing with each other, but if there is one screw loose somewhere they will tell everybody. They will share that blamelessly with everybody, so that everybody might check to see if they might have the same problem. We haven’t learned that in cybersecurity and AI safety yet. We’re a little bit too cocky, in a way, thinking we can figure it out on our own.”


ALL COMPETITION, NO TRANSPARENCY

TCAI’s Rob Eleveld pointed out that today’s AI landscape has no such balance—it’s all competition, zero cooperation. Personal data privacy, intellectual property, and copyright protections are being ignored in the rush to train large language models (LLMs) on the largest datasets available. “There are 17 states that have privacy laws right now,” he said. “Without question, those privacy laws are being run over roughshod by OpenAI sucking every bit of data they can.”

Eleveld pointed to last week’s revelation that OpenAI, creator of ChatGPT, assembled two massive datasets of books containing roughly 50 billion words, and used them to train GPT-3, a predecessor of the GPT-3.5 model behind today’s publicly available ChatGPT. According to allegations in the Authors Guild lawsuit against OpenAI, the company later deleted all copies of the datasets.

“OpenAI just deleted their ‘Books One’ and ‘Books Two’ datasets used to train their model,” Eleveld said. “That was unearthed as part of the due diligence in the [Authors Guild] lawsuit against them. They deleted those datasets. Why? They wanted to hide something.”

In other words: OpenAI may have deleted the datasets because they contained copyrighted material that OpenAI did not have permission to use. That issue is at the heart of a number of lawsuits that have been filed against OpenAI by the Authors Guild, The New York Times, and others.

Microsoft’s Mary Snapp pointed audience members to her company’s work on AI accountability and transparency. Earlier this month Microsoft published its first annual Responsible AI Transparency Report. That report provides insight into how the company builds applications that use generative AI, how their deployment is overseen, and how customers are supported in the use of those products.

Snapp agreed with the need for oversight as AI continues to evolve. “Large language models” in particular, she said, “need some guardrails to ensure they are developed in a way we can ensure there is as much safety and security as possible.”

Joseph Williams, the Tech Sector Lead with the Washington State Department of Commerce, commented that AI companies actually need guardrails for their own self-interest.

Companies need the surety of government guardrails to know where the legal boundaries are, he said. “They need an insular environment so they don’t get sued all the time for everything. There are all kinds of legal [risks] they want relief from, and that comes with the guardrails that are imposed by a regulatory regime.”

Appropriate regulation would also help AI developers and deployers by reassuring a public still uncertain about the prospect of AI, Williams said. “We don’t know how people feel about this yet,” he said.

Eleveld pointed out the many ways that private information, including medical information, is already being tracked by large tech companies like Alphabet (Google) and Meta (Facebook).

“Consumers are under the impression that their private medical information is protected by HIPAA,” he said. “Well, HIPAA only applies to hospitals, medical institutions, and insurance companies. Google knows everywhere you’ve been on your maps. So does Facebook, based on where you’ve posted, checked in, and so forth. If you went to a mental health clinic, if you went to an abortion clinic, if you went to receive treatment for cancer, that’s absolutely private medical information. It’s not protected by HIPAA, it’s being sucked into large language models and consumers don’t know it.”

Companies like Google and Meta don’t have a good track record when it comes to protecting consumers, Eleveld added. “They are about data-targeted advertising. And consumers don’t know what’s happening. So their elected representatives need to look out for them.”
