• Lemminary@lemmy.world · 10 months ago

    What a colorful mischaracterization. It sounds clever at face value, but it’s really naive. If anything about this is deceptive, it’s the lengths people go to in order to slander what they dislike.

    • jacksilver@lemmy.world · 10 months ago

      Actually, “content laundering” is the best term I’ve heard to describe the process. Just like money laundering, you no longer know the source, yet the result is technically legal to use and distribute.

      I mean, if the copyrighted content weren’t so critical, they would train the models without it. They’re essentially derivative works, but no one wants to acknowledge that, because it would either require changing our copyright laws or make this potentially lucrative and important work illegal.

      • Lemminary@lemmy.world · 10 months ago

        “Content laundering” is not a good way to describe it; the term is misleading because it oversimplifies and mischaracterizes what a language model actually does, and it rests on a fundamental misunderstanding of how these systems work. Training a language model is typically a transparent and well-documented process, as described in decades of research. The value lies in the weights of the network’s nodes, not in the source material, which the model cannot spit back out in its entirety. The source material is evaluated and wholly transformed into new data in the form of nodes and weights. The original content does not exist as it was within the network, because there is no way to encode it that way; it’s a statistical system that compounds information.
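        To make that concrete, here’s a minimal sketch (assuming PyTorch; the toy text and tiny model are invented for illustration) of what training actually produces: a set of numeric weight tensors rather than a stored copy of the training text.

        ```python
        # Toy next-character model. Real LLMs are vastly larger, but the
        # nature of the saved artifact is the same: named tensors of floats.
        import torch
        import torch.nn as nn

        text = "the quick brown fox jumps over the lazy dog"
        vocab = sorted(set(text))
        stoi = {ch: i for i, ch in enumerate(vocab)}
        data = torch.tensor([stoi[ch] for ch in text])

        model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        loss_fn = nn.CrossEntropyLoss()

        for _ in range(200):
            logits = model(data[:-1])        # predict each next character
            loss = loss_fn(logits, data[1:])
            opt.zero_grad()
            loss.backward()
            opt.step()

        # The serialized model is just parameter tensors; no substring of
        # `text` is stored anywhere in it.
        for name, tensor in model.state_dict().items():
            print(name, tuple(tensor.shape))
        ```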

        And while LLMs do have the capacity to create derivative works, that is not all they do, or what they always do; it’s only one of their many functions. What you describe would probably be true if a model were trained on a single source, but that isn’t even feasible. When you train it on millions of sources, what remains are the overall patterns of language across those works. It’s much more sophisticated and flexible than what you describe.
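        As a toy illustration of that pooling (plain Python; the mini “sources” are invented), merging simple bigram statistics across documents keeps the shared patterns of language while keeping no pointer back to any individual source:

        ```python
        from collections import Counter

        # Stand-ins for training documents.
        sources = [
            "the cat sat on the mat",
            "the dog sat by the door",
            "a cat and a dog sat together",
        ]

        # Pool bigram counts across every source, loosely analogous to how
        # gradient updates pool statistical signal across training examples.
        bigrams = Counter()
        for doc in sources:
            words = doc.split()
            bigrams.update(zip(words, words[1:]))

        # The merged counts capture shared patterns, but carry no record of
        # which source contributed which count.
        print(bigrams.most_common(5))
        ```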

        So no, if it were cut and dried, there would already be grounds for a legitimate lawsuit. The problem is that people argue points that sound reasonable but don’t apply, because they haven’t seen a neural network work under the hood. If anything, new laws need to be created to address what LLMs do, if you’re so concerned about proper compensation.

        • jacksilver@lemmy.world · 10 months ago

          I am familiar with how LLMs work and are trained. I’ve been using transformers for years.

          The core question I’d ask is this: if the copyrighted material isn’t essential to the model, why don’t they just train the models without that data? And if it is core to the model, can you really say the models aren’t derivative of that content?

          I’m not saying the models don’t do something more, just that the “more” is built upon copyrighted material. In any other commercial situation, you’d have to license or get approval for the underlying content if you were packaging it up. When sampling music, for example, the output differs greatly from the original song, but because you’re building off someone else’s work, you must compensate them.

          It’s why content laundering is a great term. The models intermix so much data that it’s hard to know whether the output originated from copyrighted materials, just as money laundering makes it difficult to determine whether the money comes from illicit sources.

    • Jilanico@lemmy.world · 10 months ago

      I feel most people critical of AI don’t know how a neural network works…

      • Lemminary@lemmy.world · 10 months ago

        That is exactly what’s going on here. Or they hate it enough that they don’t mind making stuff up or mischaracterizing what it does. Seems to be a common thread on the Fediverse. It’s not the first time this week I’ve seen it.