In landmark case, AI developer Anthropic agrees to pay $1.5 billion for copyright infringement
Cade Metz
In a landmark settlement, Anthropic, a leading artificial intelligence company, has agreed to pay $1.5 billion to a group of authors and publishers after a judge ruled it had illegally downloaded and stored millions of copyrighted books.

Anthropic’s Claude chatbot has become one of the world’s most popular A.I. systems.
The settlement is the largest payout in the history of U.S. copyright cases. Anthropic will pay about $3,000 per work, covering roughly 500,000 books.
The agreement is a turning point in a continuing battle between A.I. companies and copyright holders that spans more than 40 lawsuits across the country. Experts say the agreement could pave the way for more tech companies to pay rights holders through court decisions and settlements or through licensing fees.
“This is massive,” said Chad Hummel, a trial lawyer with the law firm McKool Smith, who is not involved in the case. “This will cause generative A.I. companies to sit up and take notice.”
The agreement is reminiscent of the early 2000s, when courts ruled that file-sharing services like Napster and Grokster infringed on rights holders by allowing copyrighted songs, movies and other material to be shared free on the internet.
“This is the A.I. industry’s Napster moment,” said Cecilia Ziniti, an intellectual-property lawyer who is now chief executive of the artificial intelligence start-up GC AI.
The settlement came after a ruling in June by Judge William Alsup of the U.S. District Court for the Northern District of California. In a summary judgment, the judge sided with Anthropic, maker of the online chatbot Claude, in significant ways. Most notably, he ruled that when Anthropic acquired copyrighted books legally, the law allowed the company to train A.I. technologies using the books because this transformed them into something new.
“The training use was a fair use,” he wrote. “The technology at issue was among the most transformative many of us will see in our lifetimes.”
But he also found that Anthropic had illegally acquired millions of books through online libraries like Library Genesis and Pirate Library Mirror that many tech companies have used to supplement the huge amounts of digital text needed to train A.I. technologies. When Anthropic downloaded these libraries, the judge ruled, its executives knew they contained pirated books.
Anthropic could have purchased the books from many sellers, the judge said, but instead preferred to “steal” them to avoid what the company’s chief executive, Dario Amodei, called “legal/practice/business slog” in court documents. Companies and individuals who willfully infringe on copyright can face significantly higher damages — up to $150,000 per work — than those who are not aware they are breaking the law.
After the judge ruled the authors had cause to take Anthropic to trial over the pirated books, the two sides decided to settle.
“This settlement sends a powerful message to A.I. companies and creators alike that taking copyrighted works from these pirate websites is wrong,” said Justin A. Nelson, a lawyer for the authors who brought the lawsuit against Anthropic.
As part of the settlement, Anthropic said it did not use any pirated works to build A.I. technologies that were publicly released. The settlement also preserves the right of other rights holders to sue Anthropic if they believe the company’s technologies are reproducing their works without proper approval. Anthropic also agreed to delete the pirated works it downloaded and stored.
“In June, the District Court issued a landmark ruling on A.I. development and copyright law, finding that Anthropic’s approach to training A.I. models constitutes fair use,” Aparna Sridhar, Anthropic’s deputy general counsel, said in a statement.
“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe A.I. systems that help people and organizations extend their capabilities, advance scientific discovery and solve complex problems.”
Mr. Hummel of the law firm McKool Smith said that “Anthropic saw the writing on the wall that there was a substantial amount of liability.”
It is not clear whether other A.I. companies could face similar liability for their use of these libraries, though they bear some risk.
During a deposition, a founder of Anthropic, Ben Mann, testified that he also downloaded the Library Genesis data set, which contained pirated material, when he was working for OpenAI in 2019. He said he had assumed this was “fair use” of the material. In a separate lawsuit, more than a dozen authors have accused OpenAI of copyright infringement. This case, filed in New York City in 2023, has yet to be decided.
__________________
Credit: New York Times