
    OpenAI’s Bold Claims of NYT “Hacking” Are an Attempt to Shift Blame

    In a dramatic filing this week, AI startup OpenAI made the bold accusation that The New York Times “hacked” its systems, including the popular ChatGPT chatbot, in order to manufacture evidence of copyright infringement. OpenAI asked a federal judge to dismiss parts of the Times’ high-profile lawsuit as a result.

    The filing alleges that the Times used “tens of thousands of attempts” with “deceptive prompts” to get ChatGPT and other AI systems to regurgitate Times articles nearly verbatim. OpenAI claimed this does not happen “in the ordinary course” and that the Times violated its terms of service.

    But the Times swiftly rebutted the hacking claims, stating that OpenAI “bizarrely mischaracterizes as ‘hacking’” what is actually a common technique called prompt engineering: carefully crafting inputs to test the boundaries of an AI system. The Times says it simply used OpenAI’s products to uncover the extensive copying of its articles.
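    For readers unfamiliar with the technique, here is a minimal sketch of what prompt-engineering-based testing could look like, assuming the OpenAI Python client; the model name, article snippet, and similarity check are illustrative placeholders, not details of the Times’ actual methodology.

        # Illustrative sketch only: prompt a chat model with the opening of an
        # article and measure how closely its continuation matches the original.
        # Model name, snippet, and prompt wording are assumptions for this example.
        from difflib import SequenceMatcher

        from openai import OpenAI  # pip install openai

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        article_opening = "The opening paragraphs of a hypothetical article go here..."
        article_continuation = "...the next paragraphs of that same article..."

        response = client.chat.completions.create(
            model="gpt-4",  # assumed model; any chat model could be substituted
            messages=[{
                "role": "user",
                "content": f"Continue this article exactly as published:\n\n{article_opening}",
            }],
            temperature=0,  # deterministic output makes verbatim recall easier to spot
        )

        generated = response.choices[0].message.content
        similarity = SequenceMatcher(None, generated, article_continuation).ratio()

        # A high ratio would suggest near-verbatim reproduction of the source text.
        print(f"Similarity to the published continuation: {similarity:.2%}")

    Repeated across many prompts and articles, this kind of probe is plausibly what the “tens of thousands of attempts” refers to; it exercises the public product rather than breaching any system.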

    Legal experts see OpenAI’s accusations as a tactical move to shift attention and blame away from the core copyright issues. The Times’ lawsuit poses an existential threat to OpenAI’s business model, which relies heavily on scraping vast amounts of copyrighted data to train its AI.

    As Susman Godfrey partner Ian Crosby, lead counsel for the Times, stated: “What OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted works. And that is exactly what we found.”

    AI Industry Reliant on Copyrighted Data

    At the crux of the legal case is whether OpenAI’s usage of Times articles and other copyrighted material for AI training constitutes fair use under US law. OpenAI insists its models cannot function properly without this data.

    As the company wrote in its latest filing: “It would be impossible to train today’s leading AI models without using copyrighted materials.”

    This reliance on copyrighted data is an Achilles’ heel for OpenAI. If publishers succeed in limiting its access to such material, the accuracy and capabilities of systems like ChatGPT would suffer.

    OpenAI CEO Sam Altman seemed to downplay the Times’ role last month in Davos, stating: “We actually don’t need to train on their data… Any one particular training source, it doesn’t move the needle for us that much.”

    Yet in this latest filing, OpenAI suggests restricting data access would hinder development, saying: “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.”

    So while no single publisher may be decisive, collectively they could pose a real threat. This explains OpenAI’s urgent efforts to strike content licensing deals with media outlets.

    Monetization Plans Threatened

    The final motivation behind OpenAI’s aggressive defense is the vast fortune at stake. Backed by $10 billion from Microsoft so far, the startup is racing to monetize AI breakthroughs like ChatGPT in various industries.

    But the Times’ lawsuit, and copycat cases, jeopardize those plans. If courts side against fair use arguments, OpenAI would face licensing fees or damages running into billions of dollars.

    Altman remains publicly optimistic, stating last month: “I think we’re going to win on the law here.” OpenAI’s filing echoes that confidence, claiming: “The Times cannot prevent AI models from acquiring knowledge about facts.”

    But privately, the mounting lawsuits must be a source of deep concern. OpenAI finds itself in a high-stakes battle that pits its commercial ambitions against fundamental copyright principles.

    By accusing the venerable New York Times of hacking, the startup is undertaking a risky PR strategy to avoid taking responsibility. But the facts of the case seem squarely on the side of the publishers so far.

    Unless OpenAI prevails on its fair use arguments, this David-versus-Goliath lawsuit threatens severe repercussions for a company described by some as the “A.I. lab of the future”.


    Copyright © dhaka.ai

    tags: Artificial Intelligence, Ai, Dhaka Ai, Ai In Bangladesh, Ai In Dhaka, USA
