• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • A merge from upstream once a day, at the beginning of the day.

    I’m working in a DevOps setting, and even though we’re a small team, we have about two to three changes going through the pipeline a day.

    If you keep your fork too long without syncing, it just gets more complicated to merge. More importantly, if you need help from the upstream change’s author, they’ll have moved on to another subject, and the change won’t be as fresh in their mind as if you had merged the day after they pushed it.
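    For what it’s worth, that daily sync is only a couple of commands - a minimal sketch assuming the usual “origin”/“upstream” remote names, a “main” branch and a placeholder URL; adapt to your own layout:

        # one-time setup: point "upstream" at the original repository
        git remote add upstream https://example.com/original/project.git

        # every morning: fetch what landed upstream and merge it into your branch
        git fetch upstream
        git checkout main
        git merge upstream/main   # resolve conflicts now, while they are still small and fresh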


  • I’ve had that kind of reaction - on rebases too - and most of the time it was in fact a code smell pointing to a case of spaghetti code.

    If you get to the point where you fear upstream merges/rebases into your WIP, stop for a second and ask yourself whether that might be an issue of too many interdependencies inside the code itself. Code should be as close to a directed acyclic graph as possible. (doesn’t count, I was not speaking of git! :b )




  • Each and every line of code you write is a liability. Even more so when you wrote it for someone else. You must always be able to rebuild it from source, at least as long as your client expects the software to work. If you feel it’s not worth it, you probably low-balled the contract. If you don’t want to maintain code, have the client pay a yearly maintenance fee, hand the code and the responsibility to maintain it over to your client at the end of development, or add a time limit to its support.

    There’s no “maintenance mode” software: either it’s in use and must be kept up to date with regard to its execution environment, or it’s not used anymore and can be erased and forgotten. Doing otherwise opens up too many security issues, which shouldn’t be acceptable to any of us as a trade.


  • Forewarning: ops here. I’m one of the few the bosses come to when the “quick code” in production goes sideways and the associated service goes down.

    soapbox mode on

    Pardon my French, but that’s a connerie (bullshit).

    Poorly written code, however fast it is delivered, will ultimately translate into a range of problems going from customer dissatisfaction to complete service outage, a spectrum of issues far more damaging than a late arrival on the market. I’d add that “quick and dirty code” is never “quick and dirty code with relevant, automated test coverage”, which increases the likelihood of the aforementioned failures, the breadth of their impact and the difficulty of fixing them.

    Coincidentally, any news about yet another code-pissing LLM bothers me a tad, given that the code-monkeys using such atrocities wouldn’t know poorly written code from a shopping list to begin with, and thus will never be able to maintain the produced gibberish.




  • Others have answered on the specific cases where TTM (time to market) is paramount.

    When time is less of an issue, in my experience it’s a mix of, in no particular order:

    • product owners or similar roles wanting “everything, right now” for no reason whatsoever, except maybe some bonus;
    • bosses bossing people around to try and justify their existence instead of easing progress;
    • developers being not much more than code jockeys with a tendency to develop by StackOverflow copy/paste;
    • operations lacking the time, resources or knowledge to build a proper CI/CD pipeline - when it’s not an issue of operations by ServerFault copy/paste;
    • experts (DBA, virtualization, middleware) being kept out of the project, and only asked for advice when things go terribly wrong later.

    All in all, it’s less about short-term profit than about a lack of not-so-long-term vision and engagement from everyone involved. They just don’t care.

    Yeah, I’m the one in charge of fixing the mess, why do you ask?








  • Unplug your mouse. Seriously. Do it. It might sound like the “kicking and screaming” method, but you’ll learn to rely on your keyboard even for GUI tools and you’ll vastly improve how fast you navigate your computer. You should find yourself more and more in the terminal, obviously, but you may also learn some nice tricks with everything else.



  • Given the state of this world, there are better things to do than to add such gimmicks to EVs. There’s enough energy and matter wasted on useless widgets to at least spare the new generation this stupidity. I could get behind a new kind of recycled ICE vehicle, running on captured-carbon fuel and paid for at a premium by those who need the rumble of a well-tuned engine, but that should stay a fringe hobby.

    The time for compromising on the length of the fuse is over; we, as a whole, should be focusing on preventing the climate bomb from doing too much damage to humans.

    Or maybe we should double down, extract and burn even more fuel, produce and discard even more plastic, without forgetting to have it circle the Earth five times before it lands in the customers’ hands: it wouldn’t be the first mass extinction, and the planet will get through it. Us humans, though…


  • Simple: because it goes against the KISS principle. The GNU tools that constitute the user interface to the system come from a philosophy that started with Unix: simple tools, doing one thing well, communicating through “pipes” - i.e. the output of one tool is supposed to be used as the input of the next one.

    Such a philosophy lets you assemble complex logic and workflows with a few commands, automating a lot of mundane tasks, but also lets you work at a large scale the same way you would work on a few files or tasks.
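    A quick illustration of that composition - a minimal sketch assuming a standard GNU userland, hypothetical log paths and a leading timestamp field; the same one-liner works on three log files or thirty thousand:

        # count the most frequent error messages across all logs, most common first
        # (cut drops the timestamp column so identical messages group together)
        grep -h "ERROR" /var/log/app/*.log | cut -d' ' -f2- | sort | uniq -c | sort -rn | head -n 10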

    Graphical tools don’t have such advantages:

    • UIs are rarely uniform in their presentation or logic, as there are so many ways to present options and choices;
    • Apple did something nice in the way of automation with AppleScript, but I’ve not encountered it anywhere else. GUIs are rarely automatable, which means you’ll need some clicking and pushing of buttons if a task has to be repeated - or the GUI has to be altered to be able to replay a set of commands for multiple items (see the sketch after this list);
    • interconnecting different GUIs so that they can exchange data is just impossible. You usually end up with files in dedicated formats, and the need to massage data from one format to another to be able to chain tasks from different GUIs;
    • more importantly, the command line works with minimal bandwidth and tooling on the client side. Tmux, Mosh and similar tools let you work over an intermittent connection, and have a very low impact on the managed system;
    • in some specific fields - notably embedded and industrial systems - you just can’t justify allocating resources to a graphical environment alone. On these systems, the CLI is as powerful as on a full-fledged server, and doesn’t require stealing precious resources from the main purpose of the system.
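    About replaying a task for multiple items from the CLI (the sketch promised above): the names here are made up - a hosts.txt inventory file and an nginx service - but the tools are standard, and the GUI equivalent would be clicking through every machine by hand:

        # restart a (hypothetical) service on every host listed in hosts.txt, one name per line
        while read -r host; do
            ssh "$host" 'sudo systemctl restart nginx'
        done < hosts.txt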

    Beware though: as time passes, Unix’s founding principles seem to get forgotten, and some CLI tools show a lack of user-experience design, diverging from the usual philosophy and making the life of system administrators difficult. I’ve often observed this in tools coming from recent languages - Python, Go, Rust - where the “interface” of the tool is closer to the language it’s written in than to the uniform CLI interface.