Things have been incredibly unstable there. Until things stabilise, they should force the traffic elsewhere.
I agree. What’s the point of a decentralised platform if most of its users are on a single instance?
Plus it’s not a good look from a new user’s perspective if the platform appears to be down every few hours.
What’s the point of a decentralised platform if most of its users are on a single instance.
It’s so that users have a choice of joining different instances. If .world is particularly well managed and provides amenities for users and has the capacity, there’s no reason that new users shouldn’t join.
The junk communities in every instance are a problem.
It clearly doesn’t have the capacity. Until certain pull requests are merged, the capacity of an instance is ~2000 users, and .world is way over that.
This can’t be true. Lemmy.world is over 100k users now. And it depends on how many are active.
But yes, it would be very beneficial to the entire Lemmy network if many more instances could grow and get active communities. Right now it’s like every community is on Lemmy.world or Lemmy.ml. It’s stupid.
Edit: And I just checked out lemmy.world local… There are tons of posts that don’t even federate to other instances. So the stability problems are pretty much affecting the entire fediverse now, since they are so big.
Look at lemmy.world’s local feed using Voyager:
Which is why it keeps dropping. The way Lemmy is designed, a bunch of stuff is held in memory: loading a post with a lot of comments spikes the CPU and memory, A LOT. Things are of course improving rapidly, but until fixes like that and the federation queue land, it’s easy to bring a server that far over capacity to its knees.
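Just to illustrate (I’m guessing at table and column names here, this isn’t Lemmy’s exact schema), the difference between pulling a hot post’s entire comment tree at once and paginating it looks roughly like this:

```sql
-- Loading every comment for a busy post in one shot materializes
-- tens of thousands of rows at once, both in PostgreSQL and in the app:
SELECT id, creator_id, content, published
FROM comment
WHERE post_id = 12345;

-- Keyset pagination bounds the memory cost per request and keeps
-- each individual query cheap:
SELECT id, creator_id, content, published
FROM comment
WHERE post_id = 12345
  AND id > 98765          -- last comment id seen on the previous page
ORDER BY id
LIMIT 50;
```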
Yeah ok. Well honestly I think it’s absurd that they have so many users and communities, and I hope people take this time to spread out.
It’s almost comic how everyone is on Lemmy.world when there are over a thousand other instances with zero problems.
I recommend Lemmy.today, especially if you are not afraid to post stuff. Would be nice to get some more local conversations going.
Problem is, most people are going to take one look at join-lemmy and not know what to do. It’s so much easier to point people at lemmy.world or lemmy.ml.
And until very recently, small servers had a ton of trouble finding communities on the big servers. That seems to be mostly resolved, though I still get “community not found” errors the first time I navigate to a new community; refreshing takes care of it. New users aren’t going to know what to do with that.
Granted, many (most?) people seem to think these problems are a good thing because it keeps the normies out, but forums are nothing without people.
It has now federated actually, but took a long time.
As I understand it, Lemmy.world’s maintainer wants to own the biggest Lemmy server ever, just like they own one of the biggest Mastodon servers. It’s a feature, not a bug.
Things have been incredibly unstable there.
I wish lemmy.ml (also unstable) or lemmy.world would hand out a (nearly) full copy of the database so we can get more analysis done on PostgreSQL performance behaviors. Remove private messages and the password/2FA/email data, or whitelist only the comment/post/community/person tables - most everything else should already be public information that’s shared via the API or federation anyway. It’s the quantity, grouping, and age of the data that’s hard to reproduce in testing - plus knowledge of other federated servers, and even data generated by older versions of Lemmy that newer versions can’t reproduce.
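Something like this could produce a shareable snapshot - table and column names below are assumptions modeled on what the public API exposes, not Lemmy’s exact schema:

```sql
-- Copy only public-by-federation data into a throwaway schema, then dump it.
BEGIN;
CREATE SCHEMA export;

CREATE TABLE export.community AS SELECT * FROM community;
CREATE TABLE export.post      AS SELECT * FROM post;
CREATE TABLE export.comment   AS SELECT * FROM comment WHERE NOT deleted;

-- Keep only the person columns that federation already publishes.
CREATE TABLE export.person AS
SELECT id, name, display_name, actor_id, published
FROM person;
COMMIT;

-- Then, from the shell: pg_dump --schema=export lemmy > lemmy_public.sql
```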
It’s been over 60 days of constant PostgreSQL overload problems. Last week Lemmy.ca made a clone of their database to study offline with AUTO_EXPLAIN, which surfaced a major overload on new comments and posts related to site_aggregates counting (each new post/comment was being counted against every known server, instead of just the single database row for the server it belongs to).
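In miniature, that failure mode looks like this (paraphrased from the behavior described above, not copied from Lemmy’s source):

```sql
-- What was effectively running on every new comment: no WHERE clause,
-- so the counter was bumped on one row per known instance (~1500 rows).
UPDATE site_aggregates
SET comments = comments + 1;

-- The intended statement touches only the local site's single row
-- (assuming, as on a default install, that the local site is id 1):
UPDATE site_aggregates
SET comments = comments + 1
WHERE site_id = 1;
```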
I have an account over on World too, and every major Lemmy server I use throws errors under casual usage. It’s been discouraging; I haven’t visited a website with this many errors in years. Today (Sunday) has actually been better than yesterday, but I do not see many new posts being created on lemmy.ml today.
[This comment has been deleted by an automated system]
subscribe to every community, and let federation load overwhelm your server.
Did that; it takes lots of time to wait for the content to come in… and there is no backfill. Plus I suspect that the oldest servers (online for several years) have some migration/upgrade-related data that isn’t being accounted for.
Wasn’t this how kolektiva.social got in trouble 😂
What’s the story? Never heard of it.
Unencrypted backup and an FBI raid.
https://kolektiva.social/@admin/110637031574056150 ( https://web.archive.org/web/20230701101423/https://kolektiva.social/@admin/110637031574056150 if you have trouble accessing it).
ok, I did know about that, just didn’t memorize the name. I’m assuming only private messages and user account info (email address) are the real concern in terms of exposure? It’s mostly a public posting thing, or not?
Email address, IP, and posting information.
However… consider the… uh… “charter” for the instance:
Kolektiva is an anti-colonial anarchist collective that offers federated social media to anarchist collectives and individuals in the fediverse. For the social movements and liberation!
There’s an anti-establishment streak in that, and individuals on the site may have participated in violent activities.
They’ve got a PeerTube instance too - https://kolektiva.media/ - and want to bet whether any of the accounts on the Mastodon instance are the same as the ones on the PeerTube instance, and whether any of the videos on there are incriminating? Combine that with the email address and IP address of the person logging into the account, and they can be identified.
“The admins won’t sell us out to the feds” is one thing. “The admins won’t work on an unencrypted copy of the database that exposes personal information (and get raided)” is another.
Working on a backup of live data without sanitizing personal information first is a risk that every DBA at a big company lectures the programmers about.
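A minimal sketch of that kind of scrub pass, run against a restored clone (the table and column names are guesses in the spirit of Lemmy’s schema - verify against the real thing before relying on it):

```sql
-- Wipe credentials and contact info from the clone before anyone analyzes it.
UPDATE local_user
SET password_encrypted = '',
    email              = NULL,
    totp_2fa_secret    = NULL;

-- Federation signing keys have no place in a test copy.
UPDATE person
SET private_key = NULL;

-- Direct messages are not public information.
TRUNCATE private_message;
```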
You are welcome to go to another instance to reduce lemmy.world load.
I’m not on world. I don’t even have a world login. 🤷🏾‍♂️
I don’t think you understand how the fediverse works if you think that solves the problem - lemmy.world being unstable affects its ability to federate properly with other instances.
Badly managed popular instances unfortunately affect the whole fediverse.
There have also been problems with federated copies of communities not getting all the actions. I added testing code to demonstrate that comment deletes were not going out to federation peers. Comparing copies of the same community’s data between instances shows some overlooked problems. Still have more tests to add.
Are you really trying to gatekeep the internet?
No, I’m trying to find a solution to a very real problem. World has had at least partial outages every day for a week now. It affects everyone when it gives the impression that things don’t work and people aren’t able to create content to attract new people to the wider platform. But have an upvote anyway.
What?
People can subscribe to other instances or even host their own. If the majority of users end up on the same instance then what’s the point of decentralization? Even worse if the traffic it generates for that instance makes it unstable.
Should the admin do nothing until the instance is mostly down with windows of uptime throughout the day?
A big part of the problem is that a new instance starts with zero database content, and at that size PostgreSQL performs fine with the way Lemmy organizes the data. But then there isn’t anything for people to read, and search is only going to pick up local stuff.
There’s no perfect solution but I certainly didn’t move to a decentralized platform only to see it be intentionally centralized through inaction.
I still consider the quantity of users on Lemmy to be pretty low; the performance bugs need to be addressed on a big server. Bugs like an UPDATE missing its WHERE clause and hitting 1500 rows in a table (one row per known server) instead of a single row… these need to be shaken out.
The overload errors themselves have been throttling the big servers’ growth. People were not able to insert new posts and comments into Lemmy.ml - which reduced outbound federation activity too - and they went to other servers. This went on all of June and July.
deleted by creator