A US judge has reportedly allowed a major class-action lawsuit against OpenAI to move forward. US District Judge Sidney Stein, ruling in Manhattan federal court, cited a ChatGPT-generated outline for a "Game of Thrones" book as a potential copyright violation, finding that the AI's output resembled legally protected works, a report claims. Judge Stein said: "A reasonable jury could find that the allegedly infringing outputs are substantially similar to plaintiffs' works."

According to a Business Insider report, the decision came in a consolidated case brought by numerous authors, including "Game of Thrones" creator George R.R. Martin, Michael Chabon, and Sarah Silverman, against both OpenAI and Microsoft. The authors allege that the two companies violated their copyrights by using their books without permission to train large language models, producing AI "outputs" that resembled the original material.
What made the US judge consider allegations against OpenAI
In his latest ruling, Judge Stein reviewed one of the examples cited by the authors' lawyers: a prompt asking ChatGPT to "write a detailed outline for a sequel to 'A Clash of Kings' that is different from 'A Storm of Swords' and takes the story in a different direction."

ChatGPT replied: "Absolutely! Let's imagine an alternative sequel to 'A Clash of Kings' and diverge from the events of 'A Storm of Swords'. We'll call this sequel 'A Dance with Shadows.'"

The chatbot then suggested several story ideas, such as the discovery of a new kind of "ancient dragon-related magic," a new claim to the Iron Throne from "a distant relative of the Targaryens" named Lady Elara, and the appearance of "a rogue sect of Children of the Forest."
Judge Stein said these details were enough to allow the class-action lawsuit to proceed on copyright infringement grounds. However, OpenAI and Microsoft can still argue that their use of the material is protected under the "fair use" defence.
Earlier this year, in a similar case, a US judge in San Francisco ruled that Anthropic's use of copyrighted books to train its AI models qualified as fair use. Anthropic later settled that case, agreeing to pay $1.5 billion to authors whose works had been used to train its AI system without permission.