• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 19th, 2023




  • Estinos@lemm.ee to Programmer Humor@lemmy.ml · Backend devs · 1 year ago

    Omg, thanks for this. 😂 I could really have used it in those teams where backend devs thought they didn’t have to actually try their feature in the browser: if the test suite passes, then it’s good. I hope I’ll never be in such a team again, but I’m keeping this pic just in case. :)




  • One other thing to consider: if your goal is to keep the image up for 5, 10, 20 years, the odds are actually much better on imgur than on your self-hosted webserver. One day you’ll get bored of maintaining it, and that’ll be that. A good part of the web from the 2000s was deleted because we hosted our blogs on our own servers, until we didn’t. In an ideal world we’d have a p2p web, where content is distributed by the users of a site and stays up for as long as one person seeds it, like the Beaker Browser tried to do with dat, but that is far from what most people use today. For now, the power of self-hosting is better directed at running apps for yourself than at publishing things meant to stay out in the world. Unless of course you’re ready to commit to that, knowing from the get-go that it will be a challenge.


  • I’m not sure I get this. Is the point of the article to run Nicotine+ in Docker just to download music on a server? That seems like overkill; I would rather use Nicotine+ on my laptop, then rsync my Music directory to the server (a rough sketch of that workflow is below). That way I have backups, and I’m not limited to music coming from Soulseek. Bandcamp’s catalog has become surprisingly complete over the last few years (compared to 5-10 years ago, when it felt like you went to Bandcamp to listen to “Bandcamp artists”, like Jamendo); you can download DRM-free tracks and it lets you support the artists you like. And of course you can also download music from YouTube with yt-dlp, provided you spend some time tagging it if you care about that.
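
    For what it’s worth, here is a rough sketch of the laptop-side workflow I mean, written in Python for convenience; the host name, remote path and audio format are placeholders, and the rsync/yt-dlp flags are just the usual ones, so adjust to taste:

    ```python
    import os
    import subprocess
    import sys

    # Placeholders: adjust to your own library location and server.
    MUSIC_DIR = os.path.expanduser("~/Music/")
    REMOTE = "user@mediaserver:/srv/music/"

    def grab_audio(url: str) -> None:
        """Download the audio only into the local library and embed available metadata."""
        subprocess.run(
            ["yt-dlp", "-x", "--audio-format", "mp3",
             "--embed-metadata", "-P", MUSIC_DIR, url],
            check=True,
        )

    def sync_to_server() -> None:
        """Mirror the local library to the server; the server copy doubles as a backup."""
        subprocess.run(["rsync", "-av", "--delete", MUSIC_DIR, REMOTE], check=True)

    if __name__ == "__main__":
        for url in sys.argv[1:]:  # pass one or more track/video URLs
            grab_audio(url)
        sync_to_server()
    ```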


  • Aren’t those specs better served by designing your own tool than by introducing a complex codebase that solves way more than the feature set you’re after, and for which knowledgeable people will be difficult to find and hire?

    I would say what you really need is to take the time to specify the very concrete features behind the idea of a community management platform, then see whether there is a simple path to implementing them incrementally (protip: drop the “platform” from the label, it immediately looks less scary and it serves no purpose anyway; that’s a word you use to impress investors).

    Of course, I say that from an external and naive viewpoint after reading your short description, sorry if I’m far off.



  • It does indeed sound amazing. I wanted to go even further and implement my own ActivityPub-based features on my webserver, to make it behave exactly like I want while still being able to interact with Lemmy and Mastodon. How cool would that be?

    But then I stopped, because of the privacy implications. I’ve spent years removing myself from the web, back when being “highly googlable” was still considered cool, and then I came back with perfectly siloed identities, only using thematic forums (or the like) and never linking my profiles with one another, so that I can’t be profiled easily. Putting all my social activity in one place where anyone can just GET /outbox on a server known to belong to me and pull all my activities (the sketch below shows how trivial that is) sounds like quite a step backward from that. I’m still considering it, though, because I really want to be in control of my data. But I need to figure out a way to do it that wouldn’t be a regression in privacy.
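
    To make the privacy point concrete: an actor’s outbox is a public collection, so anyone can walk it with plain HTTP. A minimal sketch using Python’s requests, with a made-up actor URL:

    ```python
    import requests

    # Hypothetical actor URL; any ActivityPub actor (Mastodon, Lemmy, ...) works the same way.
    ACTOR = "https://example.org/users/estinos"
    HEADERS = {"Accept": "application/activity+json"}  # ask for the ActivityPub representation

    actor = requests.get(ACTOR, headers=HEADERS, timeout=10).json()
    outbox = requests.get(actor["outbox"], headers=HEADERS, timeout=10).json()
    print("total activities:", outbox.get("totalItems"))

    # The collection is usually paginated; "first" is either a URL or an embedded page.
    page = outbox.get("first")
    if isinstance(page, str):
        page = requests.get(page, headers=HEADERS, timeout=10).json()
    for activity in (page or {}).get("orderedItems", []):
        print(activity.get("type"), activity.get("published"))
    ```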



  • The trick with SD cards is not to buy the cheap ones; they are horribly fragile (and similarly, faulty cheap USB sticks are sadly more and more common). I’ve been using a 512GB SanDisk Ultra microSDXC for 2 years, and it’s still rocking. It wouldn’t be a problem if it failed anyway: I have several other backup drives, and I update this one daily, so I’ll know immediately when it fails; it’s not like I’ll only realize it the day I need it. On the plus side, it’s small enough to fit in a small 3D-printed object, so I can both keep it on me and keep it hidden, just in case (it’s fully encrypted anyway, but still, better safe than sorry).




  • This. Also, regarding the time it will require, it’s usually (for anything self-hosted; I haven’t tried those two) a downward curve: it takes a lot of time to understand how things work initially, then less and less, to the point where the only thing you have to do is apply updates once in a while, with an occasional big day because something broke. Of course, I’m saying this because you said it’s just for you and your friends; if you start doing moderation, the time required quickly explodes. :)

    A note regarding self-hosting at home: even if you do have a static IP and can route traffic to your host, it’s a bit of a roulette, because ISPs consider high incoming traffic on listening ports to be suspicious activity which probably means you’re a terrorist or something; normal people don’t do that, how dare you. I’ve had an ISP block traffic between friends and me with no warning, and laughably ignorant support (if anyone is from France, it’s SFR; don’t use them for self-hosting with external access), before I gave up and subscribed to a professional ISP (one meant for businesses). Your mileage may vary, but the annoying thing is that you won’t find out until you’ve already gotten used to your services (the sketch below is a quick way to check whether a port is actually reachable from outside).
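
    If you want to check that up front, here is a small sketch, meant to be run from a machine outside your home network (a VPS, a friend’s box, and so on), otherwise NAT hairpinning can make an unreachable port look fine:

    ```python
    import socket
    import sys

    def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
        """Attempt a plain TCP connection and report whether it succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        host, port = sys.argv[1], int(sys.argv[2])  # e.g. myhome.example.org 443
        print("reachable" if is_reachable(host, port) else "blocked, filtered, or down")
    ```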


  • The main lever for reducing energy use is the kernel’s CPU frequency governor. It can be set to maximize performance (it will keep the frequency in the high range), to maximize energy efficiency (it will keep the CPU scaled down), or to scale on demand (it stays scaled down until the CPU is asked to do some work, then scales up until it’s no longer needed). Personally, I use on-demand scaling and then use cpupower to cap the maximum frequency, so I get both scaling and energy efficiency (see the sketch at the end of this comment). There is a page on the Arch Linux wiki about tools you may want (it’s usually helpful on other distros as well).

    And then of course there is good judgment: you consume less energy by doing less work, so a minimalist desktop with very few apps running at once will use less energy than Ubuntu on GNOME with several Rails apps running and background jobs constantly being processed.
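
    To illustrate the governor/cpupower part above, here is a small sketch that reads the current cpufreq settings from sysfs and shows the cpupower calls I mean; the 2GHz cap is just an example value, and changing anything requires root:

    ```python
    import subprocess
    from pathlib import Path

    CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

    # Reading needs no privileges: current governor, the governors your driver offers,
    # and the current maximum frequency (sysfs reports frequencies in kHz).
    governor = (CPUFREQ / "scaling_governor").read_text().strip()
    available = (CPUFREQ / "scaling_available_governors").read_text().split()
    max_khz = int((CPUFREQ / "scaling_max_freq").read_text())
    print(f"governor: {governor} (available: {', '.join(available)})")
    print(f"max frequency: {max_khz / 1_000_000:.2f} GHz")

    def apply_settings() -> None:
        """Switch to on-demand scaling and cap the frequency (run as root).

        Pick a governor that actually appears in the 'available' list: the
        intel_pstate driver, for instance, does not offer 'ondemand'.
        """
        subprocess.run(["cpupower", "frequency-set", "-g", "ondemand"], check=True)
        subprocess.run(["cpupower", "frequency-set", "-u", "2GHz"], check=True)
    ```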