I hate how installing or removing (or even updating) a flatpak causes the whole software center to completely refresh, and it doesn’t keep its state so if you were in the middle of a search or scrolled down through a category… say goodbye to it.
Well, that took a lot more blood, sweat, and tears than I thought it would.
Usually when performing an update, I do the following:

1. Take a snapshot of the VM
2. Update the `docker-compose.yml` file for both the lemmy and lemmy-ui containers
3. Bring the containers back up with the new images
4. Let the database migrations run

Everything was going to plan until step 4, the database migrations. After about 30 minutes of database migrations running, I shut off external access to the instance. Once we got to the hour-and-a-half mark, I went ahead and stopped the VM and began rolling back to the snapshot…

Except normally a snapshot restore doesn’t take all that long (maybe an hour at most), so when I stepped back in 3 hours later and saw that it had performed about 20% of the restore, that’s where things started going wrong. It seemed like the whole hypervisor was practically buckling while attempting to perform the restore. So I thought, okay, I’ll move it back to hypervisor “A” (“Zeus”)… except I had forgotten why I initially migrated it to hypervisor “B” (“Atlas”) in the first place: Zeus was running critically low on storage and could no longer host the VM for the instance.

I thought “Okay, sure, we’ll continue running it on Atlas then, let me go re-enable the reverse-proxy (which is what allows external traffic into Lemmy, since the containers/VM are on an internal network)”… which then led me to find out that the reverse-proxy VM was… dead. It was running Nginx, nothing seemed to show any errors, but I figured “Let’s try out Caddy” (which I’ve started using on our new systems) - that didn’t work either. It was at that point that I realized I couldn’t even ping that VM from its public IP address - even after dropping the firewall. Outbound traffic worked fine, none of the configs had changed, there were no other firewalls in place… just nothing. Except I could get 2 replies to a continuous ping in the window between the VM initializing and finishing starting up; after that it was once again silent.
So, I went ahead and made some more storage available on Zeus by deleting some VMs (including my personal Mastodon instance - thankfully, I had already migrated my account over to our new Mastodon instance a week before) and attempted to restore Lemmy onto Zeus. Still, the same slow-restore behavior was happening even on this hypervisor, and everything on it was coming to a crawl while the restore was ongoing.
This time I just let the restore run, which took numerous hours. Finally it completed, and I shut down just about every other VM and container on the hypervisor, once again followed my normal upgrade steps, and crossed my fingers. It still took about 30 minutes for the database migrations to complete, but they did finish. I re-enabled the reverse-proxy config, updated the DNS record for the domain to point back to Zeus, and within 30 seconds I could see federation traffic coming in once again.
What an adventure, to say the least. I still haven’t been able to determine why both hypervisors come to a crawl with very little running on them. I suspect one or more drives are failing, but it’s odd for that to happen on both hypervisors at around the same time, and the SMART data for the drives shows no indications of failure (or even precursors to failure), so I honestly do not know. It does, however, tell me that it’s pretty much time to sunset these systems sooner rather than later, since the combination of the systems and the range of IP addresses that I have for them comes out to about $130 a month. While I could probably request to have most of the hardware swapped out and completely rebuild them from scratch, it’s just not worth the hassle considering that my friend and I have picked up a much newer system (the one mentioned in my previous announcement post), and with us splitting the cost it comes out to about the same price.
Given this, the plan at this point is to renew these two systems for one more month when the 5th comes around, meaning that they will both be decommissioned on the 5th of February. This is to give everyone a chance to migrate their profile settings from The Outpost over to The BitForged Space, as both instances are now running Lemmy 0.19.0 (for comparison, the instance over at BitForged took not even five minutes to complete its database migrations - I spent more time verifying everything was alright), and to also give myself a bit more time to ensure I can get all of my other personal services migrated over, along with any important data.
I’ve had these systems for about three years now, and they’ve served me quite well! However, it’s very clear that the combination of dated specs and a lack of setting things up in a coherent way (I was quite new to server administration at the time) means it’s time to close out this chapter and turn the page.
Oh, and to top off the whole situation, my status page completely died during the process too - the container was running (I was still receiving numerous notifications as various services went up and down), but inbound access to it wasn’t working either… so I couldn’t even provide an update on what was going on. I am sorry to have inconvenienced everyone with how long the update process took; it wasn’t my intention to make it seem as if The Outpost had completely vanished off the planet. However, I figured it was better to spend my time focusing on bringing the instance back online instead of side-tracking to investigate what happened to the status page.
Anyways, with all that being said, we’re back for now! But it is time for everyone to finish their last drink while we wrap things up.
At some point, yes - while I don’t have a concrete date for when The Outpost will be officially decommissioned (as the server it’s running on still has plenty of things that I can’t move over just yet), you might’ve noticed that the performance of the site is pretty shaky at times.
Sadly, that’s pretty much just due to the older hardware in the server. I’ve spent the last four months trying to work around it by configuring various tweaks for Lemmy and Postgres (the database software, which is where the heart of the issues comes from), but it hasn’t had much of an effect. I’m pretty much out of options at this point, since it not only affects Lemmy but also all of the other stuff that I run on the server for myself (hence why I’ve decided to invest in a better system).
So you don’t have to move over right this second, but I would recommend doing so sometime in the future. The plan is to at the very least wait until Lemmy 0.19 comes out, since as far as I’m aware it should let you migrate your subscribed communities (and blocked ones, if any) - but sadly it won’t transfer over posts and comments. They’re still working out some roadblocks for 0.19, so I suspect it won’t be out this month (they don’t have an estimate of a release date just yet).
Generally it’s just through my distro. It’s always occurred for as long as I’ve used KDE, unfortunately (that was one of my first thoughts to check). This has been across Fedora (and derivatives), Nix, Arch, and Kubuntu.
Traversing a motherboard sounds like it would be interesting!
As far as I know, if you don’t have it on Steam then yes.
The Steam build still gets all of the updates to the game… for now, so if you grabbed it on Steam before it was delisted you can continue to play through that.
I used to justify it with “I’ve had a shit day, I deserve to be able to have something for the convenience” - not to mention, I don’t have a car so realistically it was “Do I want fast food or not”.
Then I started to realize that every day tends to be a bad day for me, due to a multitude of reasons. I live paycheck to paycheck (which is why I don’t have a car in the first place) and the amount I was spending on takeout was way too high.
Now the only time I do so is on Fridays, because my workplace lets us spend $25 on their tab just for joining the weekly staff meeting. Aside from that, I might order takeout once, maybe even twice, during a pay period as a “congrats for making it through last month”, but ideally I’d like to stop doing even that.
This doesn’t read as a global blocklist for all Android phones in the world. It reads more as a local database/API for blocked numbers on your phone.
So blocked numbers would theoretically be applied to your messaging apps and other “telephony”-based apps that use phone numbers, such as WhatsApp (should said apps implement the API).
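For context, Android already ships something roughly along these lines: a device-local blocked-numbers provider that the default dialer/SMS app (or a carrier app) can read and write through `BlockedNumberContract`. A minimal Kotlin sketch of that idea is below - the `LocalBlocklist` wrapper name is my own, and it assumes the calling app actually holds one of those privileged roles:

```kotlin
import android.content.ContentValues
import android.content.Context
import android.provider.BlockedNumberContract
import android.provider.BlockedNumberContract.BlockedNumbers

// Hypothetical wrapper around the system's device-local blocklist.
// Reads/writes only work for the default dialer/SMS app or a carrier app.
object LocalBlocklist {

    // Would the platform block calls/texts from this number on *this* device?
    fun isBlocked(context: Context, number: String): Boolean =
        BlockedNumberContract.canCurrentUserBlockNumbers(context) &&
            BlockedNumberContract.isBlocked(context, number)

    // Add a number to the local blocklist (no global/shared list involved).
    fun block(context: Context, number: String) {
        val values = ContentValues().apply {
            put(BlockedNumbers.COLUMN_ORIGINAL_NUMBER, number)
        }
        context.contentResolver.insert(BlockedNumbers.CONTENT_URI, values)
    }
}
```

If the article’s proposal is what it sounds like, it would presumably extend or broaden access to a mechanism like this rather than maintain a worldwide list.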
Google already seems to have a spammer database for numbers, though I’m not sure if that applies to just Fi users, Pixel users, or anyone who uses the Google Phone app. If I have Call Screen disabled, I’ll see numbers on an incoming call have a red background with a “likely spam” description.
But based on the comments on this post, I feel as if I’ve overlooked something in the article here (I’ve just woken up so it wouldn’t surprise me) - is there a mention of it being a worldwide list?
It would be an alright show… If it didn’t use the Halo name and was written to just be another science fiction/fantasy TV show.
But unfortunately I don’t think the show was ever made for hardcore Halo fans - whether that’s because of the writers or just Paramount going over the writers’ heads, I couldn’t say.
Once I woke up a bit more I had another look at the article, and this phrasing certainly makes it sound like it needs approval at some point:
> Due to a licensing dispute between NVIDIA and Activision in 2020, GeForce NOW lost access to all Activision-Blizzard games.
Perhaps though it’s a case of “better to ask for forgiveness than permission” and they just add games until someone tells them to pull them off; I’m not sure. It’s been 4+ years since I looked into GFN - I tried it out during the beta period, but I don’t believe I’ve used it since then.
They might’ve done so out of necessity. I don’t know if the dev(s) of the Simple Tools apps were working on it full time, but if they were and just not enough contributions were coming in from it… Well everyone has to eat.
As the saying goes, “everyone has their price”. It’s easy to condemn the developers for their choice until you’re in the exact same scenario as they were. Whether that’s because they were starving, or even just offered enough money to make their lives a lot easier - not too many people would turn it down.
I’m a bit surprised to see that you disagreed with the “NixOS is hard to configure” bit, but then also listed some of the reasons why it can be hard to configure as cons.
By “configure”, they probably didn’t mean just setting up, say, user accounts, which is definitely easy to do in Nix.
The problems start to arise when you want to use something that isn’t in Nixpkgs, something that is out of date in Nixpkgs, or a package from Nixpkgs that has plugins where the plugin(s) you want aren’t in Nixpkgs.
From my experience with NixOS, I had two software packages that are in Nixpkgs break on me - one of them critical for work - and I had no clue where to even begin trying to fix the Nixpkgs derivation because of how disorganized Nix’s docs can be.
Speaking of docs inconsistencies: most users these days say you should go with Flakes, but it’s still technically an experimental feature, so the docs still assume you’re not using Flakes…
I was also working on a very simple Rust script, and couldn’t get it to properly build due to some problem with the OpenSSL library that one of the dependent crates of my project used.
That was my experience with NixOS after a couple of months. The concept of Nix[OS] is fantastic, but it comes with a heavy cost depending on what you’re wanting to do. The community is also great, but I even saw someone who heavily contributes to Nixpkgs mention that a big issue is that only a handful of people know how Nixpkgs is properly organized, and that they run behind on PRs / code reviews of Nixpkgs because of it.
I’d still like to try NixOS on, say, a server, where I could expect it to work better because everything (such as Docker containers) is declarative - but it’s going to be a while before I try it on my PC again.
Realistically, a lot of relationships are “situational” (especially at that age) - but that doesn’t erase the fact that they existed in the first place.
Correct on all counts. Just to be more precise, I’m not placing any blame on the players in my prior comments - the blame goes to GFN and Activision, since the player expects to be able to play a game that they’ve paid for, on a service that they have paid for.
Right, I didn’t mean to imply that playing on GFN was cheating by any means - I probably should’ve worded that a bit better.
I meant more of “If Call of Duty explicitly allowed GFN to add the game, then players who play via GFN shouldn’t have a chance to be banned just for playing through it”
Doesn’t the publisher of the game have to approve for a game to be put on GeForce Now?
I mean, don’t get me wrong - I know anti-cheat detection has never been perfect, but you’d think this would be something they try hard to get right.
No VPN, it’s strange because I haven’t had a problem with any other services that use IP geolocation (which I assume is what KDE uses) - even Gnome’s auto location tool seems to work fine.
Yep, I modded my switch, dumped the keys and my games and went “Now what?” and after playing via Yuzu on my PC I realized this was the only way I really wanted to play the few Switch games I enjoy.
Every now and then I’ll boot into the stock firmware to play Mario Kart with some friends when they want to play, and that’s it.
Was playing it a bit in the morning while it was slow at work, seems fantastic so far!