• rtxn@lemmy.world · 1 year ago

    Our business-critical internal software suite was written in Pascal as a temporary solution and has been unmaintained for almost 20 years. It transmits cleartext usernames and passwords as the URI components of GET requests. They also use a single decade-old Excel file to store vital statistics. A key part of the workflow involves an Excel file with a macro that processes an HTML document from the clipboard.

    I offered them a better solution, which was rejected because the downtime and the minimal training would be more costly than working around the current issues.

    • Tar_alcaran@lemmy.world · 1 year ago

      The library I worked for as a teen used to process off-site reservations by writing them to a text file, which was automatically e-faxed to all locations every odd day.

      If you worked at not-the-main-location, you couldn’t make an off-site reservation, so on even days you would print your list and fax it to the main site, where it would be re-entered into the system.

      This was 2005. And yes, it broke every month with an odd number of days.

    • SSTF@lemmy.world · 1 year ago

      downtime

      minimal retraining

      I feel your pain. Many good ideas get rejected for exactly these reasons. I have had proposals requiring one big chunk of downtime rejected even though they would eliminate short but constant downtimes, and mathematically the fix would easily pay for itself within a month.

      Then the minimal retraining becomes frustrating when workplaces and coworkers still pretend computers are some crazy device they’ve never seen before.

      • tool@r.rosettast0ned.com · 1 year ago

        Places like that never learn their lesson until The Event™ happens. At my last place, The Event™ was a derecho that knocked out power for a few days, and then when it came back on, the SAN was all kinds of fucked. On top of that, we didn’t have backups for everything because they didn’t want to pay for more storage. They were losing like $100K+ every hour they were down.

        The speed at which they approved all-new hardware inside a colocation facility after The Event™ was absolutely hilarious, I’d never seen anything approved that quickly.

        Trust me, they’re going to keep putting it off until you have your own version of The Event™, and they’ll deny that they ever disregarded the risk of it happening in the first place, even though you have years’ worth of emails saying “If we don’t do X, Y will occur.” And when Y occurs, they’ll scream “Oh my God, Y has occurred, no one could have ever foreseen this!”

        It’ll happen. Wait and watch.

        • DigitalAudio@sopuli.xyz · 1 year ago

          Sounds like a universal experience for pretty much all fields of work.

          Government and policy? Climate change? A fucking pandemic?!

          We’ve seen it all happen time and time again. People in positions of authority get overconfident that if things are working right now, they’ll keep working indefinitely. And then despite being warned for decades, when things finally break, they’ll claim no one could have foreseen the consequences of their lack of responsibility. Some people will even chime in and begin theorising that surely, those that warned them, had to be responsible for all the chaos. It was an act of sabotage, and not of foresight.

        • SSTF@lemmy.world · 1 year ago

          Places I’m at usually end up bricking robots and causing tens of thousands of dollars of damage to them because they insist on running the robot without allowing small fixes.

          Usually a big robot crash will be The Event that teaches people to respect early warning signs…for about 3 months. Then the old attitude slides back.

          Good thing we aren’t building something that requires precision, like semiconductor wafers. Oh wait.

          • Osnapitsjoey@lemmy.one · 1 year ago

            That’s just on them then, losing tons and tons of money from lost usable platter space lol. They’re machine-gunning themselves in the legs.

    • bleistift2@feddit.de · 1 year ago

      cleartext usernames and passwords as the URI components of GET requests

      I’m not an infrastructure person. If the receiving web server doesn’t log the URI, and supposing the communication is encrypted with TLS, which hides the URI from anyone watching the network, are there security concerns?

      • nudelbiotop@feddit.de · 1 year ago

        Anyone who has access to any involved network infrastructure can trace the cleartext communication and extract the credentials.
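        A minimal sketch of what that extraction looks like (the captured request below is made up; stdlib Python only) — once you can see the raw bytes on the wire, the credentials fall straight out of the request line:

```python
from urllib.parse import urlsplit, parse_qs

# A made-up cleartext HTTP request, exactly as a sniffer on the wire sees it.
raw = (
    b"GET /login?user=alice&pass=hunter2 HTTP/1.1\r\n"
    b"Host: intranet.example\r\n\r\n"
)

request_line = raw.split(b"\r\n", 1)[0].decode()  # "GET /login?... HTTP/1.1"
path = request_line.split(" ")[1]                 # just the URI
creds = parse_qs(urlsplit(path).query)            # query params -> credentials
print(creds)  # {'user': ['alice'], 'pass': ['hunter2']}
```

        No decryption, no server access — the network is the log.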

      • ItsMyFirstDay@lemmy.world · 1 year ago

        I’m not 100% on this, but I think GET requests are logged by default.

        POST requests, normally used for passwords, don’t have their bodies logged by default.

        BUT the URI gets logged for both, so if the URI contained username:password then it’s likely all there in the logs.

        • SzethFriendOfNimi@lemmy.world · 1 year ago

          GET and POST requests are both logged.

          The difference is that a logged GET request will also include any query params:

          GET /some/uri?user=Alpha&pass=bravo

          A POST request sends those same params in the form body instead. The body isn’t logged, so it would just look like this:

          POST /some/uri
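          As a rough sketch of why that matters (hypothetical log lines in common-log style, Python stdlib only), the credentials are trivially recoverable from the GET entry but absent from the POST entry:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical access-log lines in Apache/nginx common-log style.
log_lines = [
    '10.0.0.5 - - [01/Jan/2024:12:00:00 +0000] "GET /some/uri?user=Alpha&pass=bravo HTTP/1.1" 200 512',
    '10.0.0.5 - - [01/Jan/2024:12:00:01 +0000] "POST /some/uri HTTP/1.1" 200 512',
]

for line in log_lines:
    # The request line sits between the first pair of double quotes.
    method, uri, _ = line.split('"')[1].split(" ")
    creds = parse_qs(urlsplit(uri).query)  # query params, if any
    print(method, creds)
# → GET {'user': ['Alpha'], 'pass': ['bravo']}
# → POST {}
```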

        • bleistift2@feddit.de · 1 year ago

          GET requests are logged

          That’s why I specified

          the receiving web server doesn’t log the URI

          in my question.

      • rtxn@lemmy.world · 1 year ago

        Nope, it’s bare-ass HTTP. The server software also connected to an LDAP server.

      • netvor@lemmy.world · 1 year ago

        I would still not sleep well; other things might log URIs to different unprotected places. Depending on how the software works, that might be the client, but also middleware or a proxy…

      • Archer@lemmy.world · 1 year ago

        supposing the communication is encrypted with TLS

        I can practically guarantee you it was not

      • nijave@lemmy.world · 1 year ago

        Browser history, for one.

        Even if the destination doesn’t log GET components, there could be corporate proxies that MITM the connection and log the full URL. Corporate proxies usually present an internally trusted certificate to the client.

    • V4uban@lemmy.world · 1 year ago

      As weird as it may seem, this might be a good argument in favor of Pascal. I despised learning it at uni, since it seemed worthless, but it seems it can still handle business-critical software for 20 years.

      • Overzeetop@lemmy.world · 1 year ago

        What OP didn’t tell you is that, due to its age, it’s running on an unpatched WinXP SP2 install and patching, upgrading to SP3, or to any newer Windows OS will break the software calls that version of Pascal relies upon.

        • tool@r.rosettast0ned.com · 1 year ago

          You’re literally describing the system that controlled employee keyscan badges a couple of jobs ago…

          That thing was fun to try to tie into the user disable/termination script that I wrote. I ended up having to manipulate its DB tables directly in the script instead of going through an API, because the software didn’t expose one. Figuring out its fucked-up DB schema was an adventure on its own, too.

          • Overzeetop@lemmy.world · 1 year ago

            I’m also describing the machine in my office that runs my $20,000 laser plotter/large-format scanner. The software in the machine uses (Java?) over a web interface which was deprecated and removed from all browsers around 2012-14, iirc. The machine isn’t supported anymore, and the only way to clear an error or update where it sends scans is through that interface. I have an XP SP2 machine running the internal IE6 browser which will still display it.

            Since I’m now a one-person office and I use the scanner about 6 times a year, I keep that machine around in case I need to turn it on to update the scanner or clear a print error. Buying a new plotter isn’t worth the time/money; when it dies I’ll just farm out the work to a 3rd-party vendor, but while it does work it’s convenient to have in-house.

            • tool@r.rosettast0ned.com · 1 year ago

              If it’s that old, I’m betting it doesn’t use HTTPS for its connections. You could do a network packet capture on the XP machine (or if you can find one, hook it up to a network hub with another computer attached and capture there) while performing the “clear error” action and find out how it works/what you need to send to it to clear the error. You could also set up a SPAN port on a switch and mirror the traffic on the port going to the printer to capture the traffic, if you have a switch capable of doing that. If not, you can get one off Amazon for about $100.

              It’d be pretty simple to put together a script that sends the “clear error” action to the printer after seeing how it’s done in the packet capture. I’ve done this numerous times, the latest of which was for a network-connected temperature sensor that I wanted to tie into but didn’t (publicly) expose an API of any kind.
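              A sketch of that replay step, assuming (hypothetically) the capture showed the web UI clears the error with a plain GET — the host, path, and query here are made up and would come from your own capture:

```python
import socket

# Hypothetical values — substitute whatever your packet capture actually shows.
PLOTTER_HOST = "192.168.1.50"
CLEAR_ERROR_REQUEST = (
    "GET /cgi-bin/clear_error?code=all HTTP/1.1\r\n"
    f"Host: {PLOTTER_HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

def clear_error(host: str = PLOTTER_HOST, port: int = 80) -> str:
    """Replay the captured request verbatim and return the raw response."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(CLEAR_ERROR_REQUEST.encode())
        chunks = []
        while chunk := s.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")
```

              Since the device speaks plain HTTP, replaying the bytes is all there is to it — no session, no TLS, usually no auth.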

              • Overzeetop@lemmy.world · 1 year ago

                It’s more than that, though. It’s also used to set up custom sheet widths and to enter new server and login details for sending scans via FTP. If I’m doing billable work, I’m charging $225/hr. If I’m snooping the network, which isn’t my field and which I do almost never, so it takes me several times longer than an expert, I’m making nothing. With an annual value on the machine’s services of less than $500 (more than half of which would become reimbursable if I didn’t have it), there’s no actual value in “fixing” it by creating a different workaround. 🤷‍♂️