• space@lemmy.dbzer0.com · 9 months ago

    Let’s be real, they did it because they didn’t want people training AI models without paying them. They didn’t give a shit about 3rd party apps.

    • dan@upvote.au · 9 months ago

      People who want to train AI models on Reddit content can just scrape the site, or pull data from sites that archive Reddit content.
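
      For example, a minimal sketch using Reddit’s public JSON listings (no API key involved; the subreddit name and User-Agent are just placeholders):

      ```python
      # Pull recent posts from a subreddit via the public .json listings.
      import requests

      headers = {"User-Agent": "archive-sketch/0.1"}  # blank UAs get blocked
      url = "https://www.reddit.com/r/learnpython/new.json"
      after = None

      for _ in range(3):  # a few pages, up to 100 posts each
          resp = requests.get(url, headers=headers,
                              params={"limit": 100, "after": after}, timeout=10)
          resp.raise_for_status()
          listing = resp.json()["data"]
          for child in listing["children"]:
              post = child["data"]
              print(post["name"], post["title"][:60])
          after = listing["after"]  # fullname cursor for the next page
          if after is None:
              break
      ```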

      • AnyOldName3@lemmy.world · 9 months ago

        The archive sites used to use the API, which is another reason Reddit wanted to get rid of it. I always found them a great moderation tool: users would edit their posts so they no longer broke the rules before claiming a rogue moderator had banned them for no reason, and there was no way within Reddit to prove them wrong.

          • AnyOldName3@lemmy.world · 9 months ago

            Yeah, the Wayback Machine doesn’t use Reddit’s API, but on the other hand, I’m pretty sure it doesn’t automatically archive literally everything that makes it onto Reddit. Doing that would require the API to tell you about every new post; just sorting /r/all by new and collecting every link misses stuff.
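
            Roughly what “sorting /r/all by new” looks like as a polling loop, assuming the public listing endpoint; each poll returns at most ~100 posts, so any burst bigger than that between two polls is silently lost:

            ```python
            # Sketch: poll /r/all/new and watch for gaps. Posts that arrive
            # faster than one page per poll interval are simply never seen.
            import time
            import requests

            headers = {"User-Agent": "poll-sketch/0.1"}
            seen = set()
            first = True

            while True:
                resp = requests.get("https://www.reddit.com/r/all/new.json",
                                    headers=headers, params={"limit": 100},
                                    timeout=10)
                resp.raise_for_status()
                batch = [c["data"]["name"] for c in resp.json()["data"]["children"]]
                new = [n for n in batch if n not in seen]
                seen.update(new)
                # A full page of never-seen posts means the pages between
                # this poll and the last one have already scrolled away.
                if not first and new and len(new) == len(batch):
                    print("possible gap: an entire page of unseen posts")
                first = False
                time.sleep(30)
            ```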

            • dan@upvote.au · 9 months ago

              You don’t need every post, just a collection big enough to train an AI on. I imagine it’s a lot easier to get data from the Internet Archive (whose entire mission is historical preservation) than from Reddit.

              The thing I’m not sure about is licensing, but it seems like that’s the case for the whole AI industry at the moment.
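
              Getting it out of the Internet Archive is pretty mechanical too; a rough sketch against the Wayback Machine’s CDX index (the subreddit path is just an example):

              ```python
              # List Wayback Machine captures of Reddit threads via the CDX API.
              import requests

              params = {
                  "url": "reddit.com/r/programming/comments/",
                  "matchType": "prefix",      # everything under that path
                  "output": "json",
                  "filter": "statuscode:200",
                  "limit": 20,
              }
              resp = requests.get("https://web.archive.org/cdx/search/cdx",
                                  params=params, timeout=30)
              resp.raise_for_status()
              rows = resp.json()              # first row is the column names
              header, captures = rows[0], rows[1:]
              for row in captures:
                  rec = dict(zip(header, row))
                  # The snapshot itself lives at /web/<timestamp>/<original>
                  print(rec["timestamp"], rec["original"])
              ```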

    • loutr@sh.itjust.works · 9 months ago (edited)

      I’m convinced there was more to it. Otherwise they’d have worked with the app devs to find a mutually beneficial solution. Instead they just acted like massive, deaf assholes through all the drama, the blackout…

      Of course, it’s totally possible they’re also insanely stupid, arrogant assholes.

      • Buddahriffic@lemmy.world · 9 months ago

        It makes the ridiculous prices they were quoting make sense. Giving API access means handing out a key to all that data, which a customer could then turn around and covertly resell. So they priced it high enough that they wouldn’t be selling the data at wholesale to apps that could undercut Reddit’s own AI-training prices.

        It’s the same reason they were considering blocking Google search: Google (or any search engine) uses a crawler to read all that data. robots.txt can technically single out Googlebot, but it’s purely advisory, so anything left open for Google is effectively open to any other crawler, like say an AI training data crawler.
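
        To be clear about the “advisory” part, here’s a sketch of how a polite crawler checks robots.txt; an impolite one just skips this step entirely:

        ```python
        # How a *polite* crawler honours per-agent robots.txt rules.
        # Nothing enforces this; a scraper can simply not ask.
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.set_url("https://www.reddit.com/robots.txt")
        rp.read()

        page = "https://www.reddit.com/r/all/new"
        for agent in ("Googlebot", "SomeAITrainingBot"):
            print(agent, "allowed:", rp.can_fetch(agent, page))
        ```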

        Same thing with any push to make users log in to view comment threads (and it wouldn’t surprise me if that’s what Musk was thinking when he did/considered the same with Twitter). If only logged-in users can access the comment data, it’s easier to see when an account is reading too much data, and to rate limit it. Also the move towards only showing a bit of a comment thread by default.
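
        “Reading too much data” just means per-account rate limiting; a minimal token-bucket sketch of the idea (all the numbers are made up):

        ```python
        # Per-user token bucket: each account gets a burst allowance that
        # refills at a fixed rate. Anonymous readers can't be tracked this way.
        import time

        class TokenBucket:
            def __init__(self, capacity=60, refill_per_sec=1.0):
                self.capacity = capacity
                self.tokens = float(capacity)
                self.refill = refill_per_sec
                self.last = time.monotonic()

            def allow(self):
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.refill)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return True
                return False

        buckets = {}  # account id -> bucket; only works if requests carry a login

        def check(user_id):
            return buckets.setdefault(user_id, TokenBucket()).allow()
        ```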

        But that data is the only reason people visit the site and provide more data, so I don’t see this problem ever fully going away for them. The problem they’re trying to solve is how to expose enough data to keep users engaged (and contributing more data) while preventing AI trainers from getting that same data for free.

        If I wanted to, I bet I could write something in less than a day that fills a database with comment data and metadata while I browse normally, and then take a bit longer to automate the browsing entirely (depending on what kind of bot detection the site uses). There’s no way for Reddit to stop the manual version, and the automated one would be an arms race that also ends with no way to stop it, because it emulates a real user to an undetectable level.
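
        For what it’s worth, the manual half really is about a day’s work. A rough sketch, assuming Playwright for the browser and SQLite for storage; the CSS selector is a guess, not Reddit’s actual markup:

        ```python
        # Drive a real browser, pull comment text off a thread, stash it in
        # SQLite. The selector below is hypothetical; actual scraping means
        # reading whatever markup the site currently serves.
        import sqlite3
        from playwright.sync_api import sync_playwright

        db = sqlite3.connect("comments.db")
        db.execute("CREATE TABLE IF NOT EXISTS comments (url TEXT, body TEXT)")

        def harvest(url):
            with sync_playwright() as p:
                browser = p.chromium.launch()
                page = browser.new_page()
                page.goto(url)
                # Hypothetical selector; inspect the live page for the real one.
                bodies = page.locator("div.comment-body").all_inner_texts()
                browser.close()
            db.executemany("INSERT INTO comments VALUES (?, ?)",
                           [(url, b) for b in bodies])
            db.commit()

        harvest("https://www.reddit.com/r/example/comments/abc123/")
        ```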