• umbrella@lemmy.ml · 11 days ago

    this week i sudo shutdown now our main service right at the end of the workday because i thought it was a local terminal.

    not a bright move.

    • SavvyWolf@pawb.social · 11 days ago

      There’s a package called molly-guard which checks whether you’re connected via ssh when you try to shut the machine down. If you are, it asks you for the hostname of the system to make sure you’re shutting down the right one.

      Very useful program to just throw onto servers.
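      The idea is simple enough to sketch. The function below is a hypothetical imitation of molly-guard’s check, not its actual source, and the prompt wording is paraphrased: if the session came in over ssh, demand the machine’s hostname before proceeding.

      ```shell
      # Hypothetical sketch of a molly-guard-style check (not the real source).
      confirm_shutdown() {
          # ssh sessions export SSH_CONNECTION (and usually SSH_TTY)
          if [ -n "$SSH_CONNECTION" ] || [ -n "$SSH_TTY" ]; then
              printf 'SSH session detected! Type the hostname of the machine to shut down: '
              read -r answer
              if [ "$answer" != "$(hostname)" ]; then
                  echo 'Hostname does not match; aborting.'
                  return 1
              fi
          fi
          return 0
      }

      # Usage: confirm_shutdown && sudo shutdown now
      ```

      The real package goes further, wrapping shutdown, reboot, halt, and poweroff system-wide rather than relying on a shell function.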

    • mox@lemmy.sdf.org · 11 days ago

      Oops.

      Since you’re using sudo, I suggest setting different passwords on production, remote, and personal systems. That way, you’ll get a password error before a tired/distracted command executes in the wrong terminal.

      • umbrella@lemmy.ml · 11 days ago

        i have different passwords but i type them so naturally it didn’t even register.

        “wrong password.”

        “oh, i’m on the server, here’s the right password:”

        “no wait”

    • naeap@sopuli.xyz · 11 days ago

      Happens to everyone

      Just having a multitude of terminals open, with a mix of test environments and (just for comparison) an open connection to the production servers…

      We were at a fair/exhibition once, and on the first day people working on an actual customer project asked us if they could compare against our code.
      Naturally, they flashed the wrong PLC, and we were stuck dead for the first hours of the exhibition.
      I still think that place was cursed: we also had to re-solder some of our robot’s connections multiple times, and the cherry on top was the system flash dying. That one was my fault, because I had finished everything late the night before and hadn’t made a complete backup.
      But it seems that when luck runs out, you lose on all fronts.

      At least I was able to restore everything in 20 minutes, which must be some kind of record.
      But I was shaking so much from the stress that I couldn’t type efficiently anymore, and was lucky to have a colleague who just calmly entered what I told him to; with that we were able to get the showcase up and running again.

      Well, at least the beer afterwards tasted like the liquid of the gods

    • Trainguyrom@reddthat.com · 11 days ago

      I was making after-hours config changes on a pair of mostly-but-not-entirely redundant Cisco L3 switches which basically controlled the entire network at that location. While updating the running configs I mixed up which ssh session was which switch and accidentally gave both switches the same IP address, and before I noticed the error I copied the running config to the startup config.

      Due to other limitations, and the fact that these changes were to fix DNS issues (so I couldn’t rely on DNS to save me), I ended up repeatedly sshing in by IP until I got the right switch, then trying to make the change before my session died from the dropped packets of the mucked-up network situation I had created. That easily added a couple of hours of cleanup to the maintenance I was doing.
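      One habit that can catch this (the commands are standard Cisco IOS; the hostname “core-sw-1” is made up for illustration): before copying the running config to startup, echo the hostname out of the running config, so the prompt and the config agree on which box you’re on.

      ```
      ! Illustrative IOS session; "core-sw-1" is a hypothetical switch name.
      core-sw-1# show running-config | include hostname
      hostname core-sw-1
      ! Only after confirming you are on the intended switch:
      core-sw-1# copy running-config startup-config
      ```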

    • LiveLM@lemmy.zip · 11 days ago

      Best thing I did was change my shell prompt so I can easily tell when it isn’t my machine
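      For bash, a minimal sketch of that trick (the colors and layout are just one choice, not a standard): key off SSH_CONNECTION/SSH_TTY, which sshd sets for remote sessions, and make the remote prompt loud.

      ```shell
      # Drop into ~/.bashrc: bold red user@host on remote (ssh) sessions,
      # calm green on the local machine.
      if [ -n "$SSH_CONNECTION" ] || [ -n "$SSH_TTY" ]; then
          PS1='\[\e[1;31m\]\u@\h\[\e[0m\]:\w\$ '
      else
          PS1='\[\e[0;32m\]\u@\h\[\e[0m\]:\w\$ '
      fi
      ```

      The \h escape expands to the hostname, so even with identical colors the prompt itself names the machine you’re typing at.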