Our gracious host @JonahAragorn asks that we sign up on a server listed at join-lemmy.org or on kbin.social, rather than directing everybody to the lemmy.one server specifically, in order to distribute the load.

Thank you!

  • empireOfLove@lemmy.one · 2 years ago

    It looks like roughly the 1k active user mark is where a Lemmy instance starts to run into resource limits. lemmy.ml has blown past that with 1.7k active and almost 40k total registered users, and they’re having a lot of performance issues over there.

    Hope they can build a good scaling framework into the Lemmy backend pretty damn quick, because nobody is going to survive the Reddit influx otherwise.

    • BurningnnTree@lemmy.one · 2 years ago

      To clarify, you’re saying this is an inherent issue in the Lemmy code, not an issue with hosting or whatever? So it’s not currently possible for a Lemmy instance to handle a couple thousand users?

      • empireOfLove@lemmy.one · 2 years ago

        I have a very limited understanding, but yes. A few thousand is okay so far, but the database managing post contents becomes the bottleneck because it simply cannot be updated fast enough. No single instance can handle that much traffic; it has to be horizontally scaled somehow.
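
        As a rough illustration of what “cannot be updated fast enough” means (a toy back-of-envelope model of my own, not Lemmy’s actual code, with made-up numbers): once writes arrive faster than the database can commit them, a backlog builds up and every request gets slower.

        ```python
        def queue_depth_over_time(incoming_per_sec, db_writes_per_sec, seconds):
            """Return how many writes are still waiting at the end of each second."""
            backlog = 0
            history = []
            for _ in range(seconds):
                backlog += incoming_per_sec                 # new posts/votes/federation updates arrive
                backlog -= min(backlog, db_writes_per_sec)  # the database drains what it can
                history.append(backlog)
            return history

        # Hypothetical numbers: ~150 writes/s coming in vs. a database that commits ~100/s.
        print(queue_depth_over_time(150, 100, 10))  # backlog grows every single second
        ```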

        • Randall_Potato@lemmy.one · 2 years ago

          I wish I had the time to dedicate to learning enough to know what you’re talking about. This kinda stuff sounds so interesting.

          • empireOfLove@lemmy.one · 1 year ago

            Ok, so the best analogy is: a database is a warehouse. Every user has a shelf in this warehouse, and they add and remove stuff from it at random. The workers in the warehouse move stuff in and out of the doors as orders (user requests) come in.

            You can build the warehouse taller and wider to hold more shelves (vertical scaling of hardware), but only up to a point, before the forklifts can’t reach the highest shelves and driving across the warehouse for a single order takes too long.
            You can similarly add more workers (CPU power) and more doors (faster storage/memory) to your warehouse to handle more orders at one time, but these also have limits because the warehouse is only so big. And there’s eventually a point where no amount of workers or doors or shelf space will improve throughput, because the roads around the warehouse are clogged with trucks.

            At this point the obvious solution is horizontal scaling- build a new warehouse somewhere else nearby, and a small office building (load balancer) that directs new orders to each of the warehouses evenly. Then you have the office occasionally check in with both warehouses to make sure they both have the same stuff to fulfill any given order (synchronizing instances). And as orders continue to increase, you can just keep adding more and more horizontal warehouses- all of which are individually somewhat small and simple, but effectively infinitely scalable compared to the vertical mega-warehouse we tried to build originally.
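
            A minimal sketch of that “front office”, assuming plain round-robin dispatch and made-up warehouse names (nothing Lemmy-specific, just the pattern):

            ```python
            from itertools import cycle

            # Hypothetical warehouses; in reality these would be identical app/database servers.
            warehouses = cycle(["warehouse-a", "warehouse-b", "warehouse-c"])

            def dispatch(order):
                """The front office: hand each incoming order to the next warehouse in turn."""
                target = next(warehouses)
                print(f"order {order!r} -> {target}")
                return target

            for order in ["post #1", "vote #2", "comment #3", "post #4"]:
                dispatch(order)
            ```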

            Lemmy tries to build in the horizontalness by having separate instances federate and share data automatically, without the need for that “front office”. However, the issue still stands that individual instances are going to need to handle hundreds of thousands of users for the Lemmy ecosystem to really thrive, and that is going to take a ton of back-end database engineering so that individual instances can scale as well as the fediverse network as a whole.
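
            Roughly what that federation idea looks like in code terms (a toy sketch of the concept only, not the ActivityPub protocol or Lemmy’s actual implementation): every instance keeps its own copy of the data and pushes new activity to its peers, so there is no central balancer.

            ```python
            class Instance:
                """Toy federated instance: stores posts locally and forwards them to peers."""
                def __init__(self, name):
                    self.name = name
                    self.posts = []
                    self.peers = []

                def publish(self, post):
                    self.posts.append(post)       # store locally first
                    for peer in self.peers:       # then push to every federated peer
                        peer.receive(post, origin=self.name)

                def receive(self, post, origin):
                    self.posts.append(post)       # each peer keeps its own full copy

            a = Instance("lemmy.one")
            b = Instance("lemmy.ml")
            a.peers.append(b)
            a.publish("hello fediverse")
            print(b.posts)  # ['hello fediverse'] -- replicated with no central front office
            ```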