Former OpenAI researcher says the company broke copyright law
Oct. 23, 2024 (via The New York Times): A former OpenAI researcher is speaking out about his time at the artificial intelligence company, and his account aligns with what many of the company's critics have been saying.
In August, Suchir Balaji left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit. In his nearly four years as an artificial intelligence researcher, he helped gather and organize the enormous amounts of internet data the company used to build its online chatbot, ChatGPT.
At the time, he did not carefully consider whether the company had a legal right to build its products in this way. He assumed the San Francisco start-up was free to use any internet data, whether it was copyrighted or not.
But after the release of ChatGPT in late 2022, he thought harder about what the company was doing. He came to the conclusion that OpenAI’s use of copyrighted data violated the law and that technologies like ChatGPT were damaging the internet.
Many researchers who have worked inside OpenAI and other tech companies have cautioned that A.I. technologies could cause serious harm. But most of those warnings have been about future risks, like A.I. systems that could one day help create new bioweapons or even destroy humanity.
Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.