- OpenAI and the White House have accused DeepSeek of using ChatGPT to inexpensively train its new chatbot.
- Experts in tech law say OpenAI has little recourse under intellectual property and contract law.
- OpenAI's terms of service may apply but are largely unenforceable, they say.
This week, OpenAI and the White House accused DeepSeek of something akin to theft.
In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI's chatbots with queries and hoovered up the resulting data trove to rapidly and inexpensively train a model that's now nearly as good.
The Trump administration's top AI czar said this training process, called "distilling," amounted to copyright theft. OpenAI, meanwhile, told Business Insider and other outlets that it's investigating whether "DeepSeek may have inappropriately distilled our models."
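For readers unfamiliar with the technique, the sketch below shows the general shape of output-based distillation: harvest a larger "teacher" model's answers to many prompts, then fine-tune a smaller "student" model to imitate them. The `query_teacher` and `fine_tune` functions here are hypothetical stand-ins, not real OpenAI or DeepSeek API calls, and the sketch makes no claim about how DeepSeek actually trained its model.

```python
# A minimal, hypothetical sketch of output-based "distillation":
# collect a teacher chatbot's answers to many prompts, then train a
# smaller student model on those (prompt, answer) pairs.

def query_teacher(prompt):
    # Stand-in for calling the teacher chatbot's API and returning
    # its generated answer as plain text.
    return f"Teacher's answer to: {prompt}"

def fine_tune(student, pairs):
    # Stand-in for supervised fine-tuning of the student model on the
    # collected (prompt, answer) pairs.
    student["training_examples"] = len(pairs)
    return student

def distill(prompts):
    # Step 1: harvest the teacher's outputs at scale.
    pairs = [(p, query_teacher(p)) for p in prompts]
    # Step 2: train the student to reproduce those outputs.
    return fine_tune({"name": "student-model"}, pairs)

if __name__ == "__main__":
    print(distill(["What is fair use?", "Summarize the DMCA."]))
```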
OpenAI is not saying whether the company plans to pursue legal action, instead promising what a spokesperson called "aggressive, proactive countermeasures to protect our technology."
But could it? Could it sue DeepSeek on "you stole our content" grounds, much like the grounds OpenAI was itself sued on in an ongoing copyright claim filed in 2023 by The New York Times and other news outlets?
BI posed this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.
OpenAI would have a hard time proving an intellectual property or copyright claim, these lawyers said.
"The concern is whether ChatGPT outputs" - indicating the answers it creates in reaction to inquiries - "are copyrightable at all," Mason Kortz of Harvard Law School stated.
That's since it's unclear whether the responses ChatGPT spits out qualify as "creativity," he stated.
"There's a doctrine that says creative expression is copyrightable, however truths and concepts are not," Kortz, who teaches at Harvard's Cyberlaw Clinic, said.
"There's a huge question in intellectual home law today about whether the outputs of a generative AI can ever constitute creative expression or if they are always unprotected realities," he added.
Could OpenAI roll those dice anyway and claim that its outputs are protected?
That's unlikely, the lawyers said.
OpenAI is already on the record in The New York Times' copyright case arguing that training AI is a permissible "fair use" exception to copyright protection.
If they do a 180 and tell DeepSeek that training is not fair use, "that could come back to kind of bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"
There might be a difference between the Times and DeepSeek cases, Kortz added.
"Maybe it's more transformative to turn news posts into a model" - as the Times implicates OpenAI of doing - "than it is to turn outputs of a model into another design," as DeepSeek is stated to have done, Kortz stated.
"But this still puts OpenAI in a pretty predicament with regard to the line it's been toeing regarding reasonable use," he added.
A breach-of-contract lawsuit is more most likely
A breach-of-contract lawsuit is much likelier than an IP-based lawsuit, users.atw.hu though it comes with its own set of issues, stated Anupam Chander, who teaches innovation law at Georgetown University.
The terms of service for Big Tech chatbots like those developed by OpenAI and Anthropic forbid using their content as training fodder for a competing AI model.
"So perhaps that's the claim you may possibly bring - a contract-based claim, not an IP-based claim," Chander said.
"Not, 'You copied something from me,' but that you benefited from my model to do something that you were not permitted to do under our contract."
There might be a hitch, Chander and Kortz said. OpenAI's terms of service require that most claims be resolved through arbitration, not lawsuits. There's an exception for lawsuits "to stop unauthorized use or abuse of the Services or intellectual property infringement or misappropriation."
There's a bigger hurdle, though, experts said.
"You ought to understand that the brilliant scholar Mark Lemley and a coauthor argue that AI terms of use are likely unenforceable," Chander said. He was referring to a January 10 paper, "The Mirage of Expert System Terms of Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for Information Technology Policy.
To date, "no model creator has in fact attempted to impose these terms with financial charges or injunctive relief," the paper says.
"This is most likely for great reason: we believe that the legal enforceability of these licenses is doubtful," it includes. That's in part because design outputs "are largely not copyrightable" and since laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "deal limited recourse," it says.
"I believe they are likely unenforceable," Lemley told BI of OpenAI's regards to service, "due to the fact that DeepSeek didn't take anything copyrighted by OpenAI and because courts generally will not enforce contracts not to complete in the absence of an IP right that would prevent that competition."
Lawsuits in between parties in different nations, each with its own legal and enforcement systems, are always challenging, Kortz stated.
Even if OpenAI cleared all the above obstacles and won a judgment from an US court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, the enforcement would come down to the Chinese legal system," he said.
Here, OpenAI would be at the mercy of another very complicated area of law - the enforcement of foreign judgments and the balancing of private and corporate rights and national sovereignty - that extends back to before the founding of the US.
"So this is, a long, made complex, laden procedure," Kortz included.
Could OpenAI have secured itself much better from a distilling attack?
"They could have used technical steps to block repetitive access to their site," Lemley stated. "But doing so would also interfere with normal clients."
He added: "I don't believe they could, or should, have a legitimate legal claim versus the browsing of uncopyrightable information from a public website."
Representatives for DeepSeek did not immediately respond to a request for comment.
"We know that groups in the PRC are actively working to use methods, including what's known as distillation, to try to replicate advanced U.S. AI models," Rhianna Donaldson, an OpenAI spokesperson, told BI in an emailed statement.