OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

  • blargerer@kbin.social · 1 year ago

    It’s not clear that training on copyrighted material is in breach of copyright. It is clear that regurgitating copyrighted material is.

    • abhibeckert@lemmy.world · 1 year ago

      Sure but who is at fault?

      If I manually type an entire New York Times article into this comment box, and Lemmy distributes it all over the internet… that’s clearly a breach of copyright. But are the developers of the open-source Lemmy software liable for that breach? Of course not. I would be liable.

      Obviously Lemmy should (and does) take reasonable steps (such as defederation) to help manage illegal use… but that’s the extent of their liability.

      All NYT needed to do was show OpenAI how they got the AI to output that content, and I’d expect OpenAI to proactively find a solution. I don’t think the courts will look kindly on NYT’s refusal to collaborate and find some way to resolve this without a lawsuit. A friend of mine tried to settle a case once, but the other side refused and it went all the way to court. The court found that my friend had been in the wrong (as he freely admitted all along), but it also made the other side pay my friend compensation for legal costs (including time spent gathering evidence). In the end, my friend got the outcome he was hoping for, and the guy who “won” the lawsuit lost close to a million dollars.

      • CleoTheWizard@lemmy.world · 1 year ago

        They might look down upon that but I doubt they’ll rule against NYT entirely. The AI isn’t a separate agent from OpenAI either. If the AI infringes on copyright, then so does OpenAI.

        Copyright applies to reproduction of a work, so if they build any machine that is capable of doing that (they did), then they are liable for it.

        Seems like the solution here is to train the model not to output copyrighted works, and maybe to train a sub-system to detect them and stop the main chatbot from responding with them.
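        The sub-system suggested above could be as simple as an output filter that compares a candidate response against an index of protected text before it is shown to the user. A minimal sketch, assuming you have a corpus of protected articles to index; the function names, the n-gram size, and the 20% threshold are all hypothetical choices, not anything OpenAI has described:

        ```python
        def ngrams(text, n=8):
            """Split text into overlapping word n-grams."""
            words = text.lower().split()
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

        def build_index(protected_texts, n=8):
            """Collect every n-gram from a corpus of protected articles."""
            index = set()
            for text in protected_texts:
                index |= ngrams(text, n)
            return index

        def overlap_ratio(candidate, index, n=8):
            """Fraction of the candidate's n-grams found in the protected index."""
            grams = ngrams(candidate, n)
            if not grams:
                return 0.0
            return sum(1 for g in grams if g in index) / len(grams)

        def filter_response(candidate, index, threshold=0.2, n=8):
            """Withhold a response whose verbatim overlap exceeds the threshold."""
            if overlap_ratio(candidate, index, n) > threshold:
                return "[response withheld: possible verbatim reproduction]"
            return candidate
        ```

        Word-level n-gram matching like this only catches verbatim or near-verbatim copying; light paraphrasing would slip past it, which is one reason detection alone is unlikely to fully settle the question.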

      • mryessir@lemmy.sdf.org · 1 year ago

        I am not familiar with any judicial system, but it sounds to me like OpenAI wanted to get the evidence the NYT had collected beforehand.