OpenAI has petitioned a federal judge to dismiss portions of The New York Times’ copyright lawsuit, contending that the newspaper paid someone to “hack” ChatGPT and other artificial intelligence (AI) systems in order to fabricate misleading evidence.
In a filing in Manhattan federal court on Monday, OpenAI asserted that The NYT induced the technology to replicate its content through “deceptive prompts that violate OpenAI’s terms of use.”
OpenAI did not name the individual it alleges The NYT enlisted to manipulate its systems, nor did it accuse the newspaper of violating anti-hacking laws.
In the filing, OpenAI stated:
“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards.
The truth, which will come out in this case, is that the Times paid someone to hack OpenAI’s products.”
According to the newspaper’s attorney Ian Crosby, what OpenAI characterises as “hacking” was simply the use of OpenAI’s products to search for evidence that they stole and reproduced The NYT’s copyrighted work.
In December 2023, The NYT initiated legal action against OpenAI and its primary financial backer, Microsoft.
The lawsuit alleges the unauthorised use of millions of NYT articles to train chatbots that now deliver information to users.
The lawsuit, which draws on both the United States Constitution and the Copyright Act, seeks to protect The NYT’s original journalism. It also names Microsoft’s Bing AI, alleging that it produced verbatim excerpts of the newspaper’s content.
The New York Times is one of many copyright holders suing tech companies for allegedly misappropriating their content to train AI systems. Other groups, including authors, visual artists, and music publishers, have filed similar lawsuits.
OpenAI has previously argued that training advanced AI models without incorporating copyrighted works is “impossible.”
In a submission to the United Kingdom’s House of Lords, OpenAI contended that, because copyright covers such a broad array of human expression, training leading AI models without copyrighted materials would be unviable.
Tech companies contend that their AI systems make fair use of copyrighted material, and that such lawsuits imperil the growth of a potential multitrillion-dollar industry.
Courts have yet to rule on whether AI training qualifies as fair use under copyright law. However, some infringement claims over the outputs of generative AI systems have been dismissed for lack of evidence that the AI-generated content resembled the copyrighted works.
Disclaimer: This article is provided for informational purposes only. It is not offered or intended to be used as legal, tax, investment, financial, or other advice.