• LittleBorat2@lemmy.ml · 36 points · 9 months ago

    This data has been out in the open for a decade and still is. People could train their LLMs on it without any problems.

      • Corkyskog@sh.itjust.works · 13 points · 9 months ago

        And then everyone started deleting their accounts and comments, and even rewriting and poisoning their comments. The data was way better before the API change.

        • pixxelkick@lemmy.world · 3 points · 9 months ago

          Do you actually think this has any impact? That’s silly.

          Reddit’s servers undoubtedly keep the original copy of every single post made, and every time you edit your post, they store that copy too.

          So not only has everyone “poisoned” their data ineffectively, they have literally created paired “before” and “after” training data for hardening an LLM against poisoned input (roughly sketched below).

          Whoever buys the rights to that data is going to have a pretty huge goldmine, and perhaps they will rent it out, or perhaps they’ll use it themselves.
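
          A minimal sketch of what that pairing could look like, assuming a hypothetical revision log (the schema and field names here are made up; Reddit’s actual internal storage isn’t public):

          ```python
          # Hypothetical sketch of the "before vs. after" idea: if every edit of a
          # comment is stored, edited comments yield (original, rewritten) pairs that
          # could be used to filter or train against poisoned rewrites.
          from dataclasses import dataclass

          @dataclass
          class Revision:
              comment_id: str
              body: str
              edited_utc: int  # 0 for the original submission, otherwise the edit timestamp

          def pair_revisions(revisions: list[Revision]) -> list[tuple[str, str]]:
              """Group revisions by comment and return (original_body, latest_body)
              pairs for every comment that was actually edited."""
              by_comment: dict[str, list[Revision]] = {}
              for rev in revisions:
                  by_comment.setdefault(rev.comment_id, []).append(rev)

              pairs: list[tuple[str, str]] = []
              for revs in by_comment.values():
                  revs.sort(key=lambda r: r.edited_utc)
                  original, latest = revs[0], revs[-1]
                  if latest.edited_utc > original.edited_utc:
                      pairs.append((original.body, latest.body))
              return pairs

          if __name__ == "__main__":
              history = [
                  Revision("t1_abc", "Original, useful answer.", 0),
                  Revision("t1_abc", "Edited: gibberish meant to poison scrapers.", 1719000000),
                  Revision("t1_def", "Never edited, so it produces no pair.", 0),
              ]
              print(pair_revisions(history))
          ```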

    • pixxelkick@lemmy.world · 2 points · 9 months ago

      Not legally, and not for free.

      And yes, that very much matters if you intend to actually sell the service to companies that don’t want to get caught in the crossfire of potential lawsuits for building their products on top of stolen info.

      So if you own the data itself (via buying Reddit), you have an ENORMOUS quantity of prime training data that your investors and potential customers know is legally clean, because you literally own it.