• bjorney@lemmy.ca · 6 months ago

    I know everyone here likes to circle jerk over “le Reddit so incompetent”, but at the end of the day they are a (multi) billion dollar company, and it’s willfully ignorant to imply that there isn’t a single engineer at the company who knows how to measure string similarity between two comment trees (hint: import difflib in Python).
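
    Something like this is all it takes to score one pair of comments (a minimal sketch; the 0.9 threshold is just an arbitrary example):

    ```python
    import difflib

    def similarity(a: str, b: str) -> float:
        """Return a 0..1 similarity ratio between two comment bodies."""
        return difflib.SequenceMatcher(None, a, b).ratio()

    # flag a pair as a probable repost if the bodies are nearly identical
    if similarity("some comment text", "some comment text, reposted") > 0.9:
        print("probable duplicate")
    ```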

    • icydefiance@lemm.ee · 6 months ago
      1. To compare every comment on reddit to every other comment in reddit’s entire history would require an index, and if you want to find similar comments instead of exact matches, it becomes a lot harder to do that efficiently. ElasticSearch might be able to do it (rough sketch after this list), but then you need to duplicate all of that data in a separate database and keep it in sync with your main database without affecting performance too much when people are leaving new comments, and that would probably be expensive.
      2. Comparing combinations of comments is probably impossible. Reddit has a massive number of comments to begin with, and the number of possible subtrees of those comments would just be absurd. If you only care about comparing entire threads and not subtrees, then this doesn’t apply, but I don’t know how useful that will be.
      3. Programmers just do what they’re told. If the managers don’t care about something, the programmers won’t work on it.
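
      For point 1, a rough sketch of the fuzzy-match query itself with the official Python client (the index name and field are made up for illustration); the query is the easy part, the indexing cost and keeping it in sync are not:

      ```python
      from elasticsearch import Elasticsearch

      es = Elasticsearch("http://localhost:9200")  # hypothetical cluster

      # find comments whose text is similar (not identical) to a new comment,
      # assuming comment bodies are already indexed in a "comments" index
      resp = es.search(
          index="comments",
          query={
              "more_like_this": {
                  "fields": ["body"],
                  "like": "text of the newly posted comment",
                  "min_term_freq": 1,
                  "min_doc_freq": 1,
              }
          },
      )
      for hit in resp["hits"]["hits"]:
          print(hit["_score"], hit["_source"]["body"][:80])
      ```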
      • bjorney@lemmy.ca · 6 months ago

        To compare every comment on reddit to every other comment in reddit’s entire history would require an index

        You think in Reddit’s 20 year history no one has thought of indexing comments for data science workloads? A cursory glance at their engineering blog indicates they already perform much more computationally demanding tasks on comment data for content filtering purposes.

        you need to duplicate all of that data in a separate database and keep it in sync with your main database without affecting performance too much

        Analytics workflows are never run on the production database, always on read replicas, which are built asynchronously from the transaction logs so as not to affect the production database’s read/write performance.
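
        i.e. the analytics job just points at the replica endpoint instead of the primary; a minimal sketch assuming a Postgres-style setup (hostname and credentials are made up):

        ```python
        import psycopg2

        # analytics queries hit the asynchronous read replica, never the primary
        replica = psycopg2.connect(
            host="replica.analytics.internal",  # hypothetical replica endpoint
            dbname="comments",
            user="readonly",
            password="...",
        )
        with replica.cursor() as cur:
            cur.execute(
                "SELECT COUNT(*) FROM comments WHERE created_at > now() - interval '1 day'"
            )
            print(cur.fetchone()[0])
        ```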

        Programmers just do what they’re told. If the managers don’t care about something, the programmers won’t work on it.

        Reddit’s entire monetization strategy is collecting user data and selling it to advertisers; it’s incredibly naive to think that they don’t have a vested interest in identifying organic engagement.

        • icydefiance@lemm.ee · 6 months ago

          You think in Reddit’s 20 year history no one has thought of indexing comments for data science workloads?

          I’m sure they have, but an index doesn’t have anything to do with the python library you mentioned.

          Analytics workflows are never run on the production database, always on read replicas

          Sure, either that or aggregating live streams of data, but either way it doesn’t have anything to do with ElasticSearch.

          It’s still totally possible to sync things to ElasticSearch in a way that won’t affect performance on the production servers, but I’m just saying it’s not entirely trivial, especially at the scale reddit operates at, and there’s a cost for those extra servers and storage to consider as well.
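
          To be concrete, the sync itself is the easy-looking part; a sketch like this (index name and comment shape are made up) bulk-loads a batch pulled off a replica or change feed, but you’re paying for a second cluster and have to keep it current:

          ```python
          from elasticsearch import Elasticsearch, helpers

          es = Elasticsearch("http://localhost:9200")  # separate cluster = extra cost

          def index_batch(comments):
              """Bulk-index a batch of comments pulled from a replica or change feed."""
              actions = (
                  {"_index": "comments", "_id": c["id"], "_source": {"body": c["body"]}}
                  for c in comments
              )
              helpers.bulk(es, actions)

          index_batch([
              {"id": 1, "body": "first comment"},
              {"id": 2, "body": "second comment"},
          ])
          ```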

          It’s hard for us to say if that math works out.

          It’s incredibly naive to think that they don’t have a vested interest in identifying organic engagement

          You would think, but you could say the same about Facebook, and I know from experience that they don’t give a fuck about bots. If anything, they actually like the bots because it makes it look like they have more users.