• hpca01@programming.dev · 67 points · 10 months ago

      It’s not fun. I got hacked through an archived git repo from back when I was learning to use AWS, following tutorials and whatnot.

      Forgot about it for years, then out of nowhere got hit for $27k… needless to say I said good luck collecting that shit.

      They waived it all on the condition that I logged in, deleted all the resources that were still running, and removed all the identities. Sure as hell I did that, and saw a ton of identities out in the middle of nowhere. Fucking hackers had run up a shit ton of AWS SageMaker resources, probably trying to hack into some dude’s wallet.
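
      If anyone else ends up doing that cleanup, a rough boto3 sketch like this (my own approach, nothing official; it assumes boto3 is installed and your own credentials are configured) dumps every IAM user and access key so the rogue identities stand out:

      ```python
      import boto3  # assumes valid AWS credentials are already configured

      iam = boto3.client("iam")

      # Walk every IAM user in the account and list their access keys,
      # so identities you never created are easy to spot.
      for page in iam.get_paginator("list_users").paginate():
          for user in page["Users"]:
              print(f'{user["UserName"]} (created {user["CreateDate"]:%Y-%m-%d})')
              keys = iam.list_access_keys(UserName=user["UserName"])
              for key in keys["AccessKeyMetadata"]:
                  print(f'  access key {key["AccessKeyId"]}: {key["Status"]}')
      ```

      Anything in that list you don’t recognize is a candidate for deletion, along with whatever resources it spun up.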

      Every time I see a tutorial on how to deploy X on AWS, I get pissed. Newbies need to learn about administration before they start deploying shit on cloud infra.
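
      The bare minimum piece of that administration is a billing alarm, so a $27k surprise becomes a $20 one. A rough boto3 sketch (the SNS topic ARN and account ID are made up, the topic has to exist already, and “Receive Billing Alerts” has to be enabled on the account first):

      ```python
      import boto3

      # Billing metrics are only published in us-east-1.
      cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

      cloudwatch.put_metric_alarm(
          AlarmName="monthly-bill-over-20-usd",
          Namespace="AWS/Billing",
          MetricName="EstimatedCharges",
          Dimensions=[{"Name": "Currency", "Value": "USD"}],
          Statistic="Maximum",
          Period=21600,            # check the estimated charges every 6 hours
          EvaluationPeriods=1,
          Threshold=20.0,          # alert once the month-to-date bill passes $20
          ComparisonOperator="GreaterThanThreshold",
          # Hypothetical SNS topic that emails you; create it separately.
          AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
      )
      ```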

      • Max-P@lemmy.max-p.me · 7 points · 10 months ago

        I especially hate that this culture has made its way into the corporate world too. It’s now normal and expected that a developer will just follow one of the AWS tutorials to get the thing going and leave it like that.

        Nobody thinks about how they’re going to compose their resources anymore; all the AWS “experts” just spit out their AWS training verbatim without any thoughts of their own.

        • partial_accumen@lemmy.world · 4 points · 10 months ago

          Nobody thinks about how they’re going to compose their resources anymore; all the AWS “experts” just spit out their AWS training verbatim without any thoughts of their own.

          There are absolutely AWS experts who will give comprehensive answers and solutions, but many times they don’t get hired, because there’s this other guy who’s cheaper and says he can “do it for a fraction” of what the first guy quoted.

          • Max-P@lemmy.max-p.me · 4 points · 10 months ago

            Yeah, they do exist; I just think they’re usually not the ones who carry all the (mostly useless) certs. Those certs are designed to maximize profits for AWS, not to optimize for the best bang for your buck. And the ones who do get the certs get them because they want to be hired and have little else to show. But companies treat those certs like they’re university degrees.

            You’re not going to get those certs by answering “Don’t use AWS Private CA, you can use OpenSSL in a Lambda to issue them for free and save hundreds every month” or “Don’t use the AWS VPN because they charge per client connection and by session duration; just set up a t4g.nano with WireGuard. It’s just as good and costs a few bucks a month for a proper 24/7 always-on VPN for the whole dev team”. The “correct” answer is obviously that using a managed service is always better.
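
            For what it’s worth, the cert Lambda really is about that small. Here’s a rough sketch of the idea, not anything from an AWS course: it uses Python’s cryptography package instead of shelling out to the openssl binary, the handler name and event fields are made up, and it self-signs where a real setup would sign with your own CA key:

            ```python
            import datetime

            from cryptography import x509
            from cryptography.hazmat.primitives import hashes, serialization
            from cryptography.hazmat.primitives.asymmetric import rsa
            from cryptography.x509.oid import NameOID


            def handler(event, context):
                # Hypothetical event shape: {"common_name": "internal.example.test"}
                common_name = event.get("common_name", "internal.example.test")

                key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
                name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
                now = datetime.datetime.now(datetime.timezone.utc)

                cert = (
                    x509.CertificateBuilder()
                    .subject_name(name)
                    .issuer_name(name)  # self-signed: issuer == subject
                    .public_key(key.public_key())
                    .serial_number(x509.random_serial_number())
                    .not_valid_before(now)
                    .not_valid_after(now + datetime.timedelta(days=90))
                    .sign(key, hashes.SHA256())
                )

                return {
                    "certificate": cert.public_bytes(serialization.Encoding.PEM).decode(),
                    "private_key": key.private_bytes(
                        serialization.Encoding.PEM,
                        serialization.PrivateFormat.PKCS8,
                        serialization.NoEncryption(),
                    ).decode(),
                }
            ```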

            Even the AWS advisors they give you for free with your big enterprise contract are basically glorified salespeople for AWS.

            Are there good AWS experts out there? Absolutely! I’m just pointing out that the industry heavily favors producing the wrong kind of expert. The good experts know their shit regardless of the cloud or what your servers run. And those get turned down over salary, or for failing to answer some AWS trivia that would take 10 minutes to look up and understand.

        • nybble41@programming.dev · 2 points · 10 months ago

          I’d settle for just the limits, personally.

          The part that makes me the most paranoid is the outbound data. They set every VM up with a 5 Gbps symmetric link, which is cool and all, but then you get charged based on how much data you send. When everything’s working properly that’s not an issue, since the data volume is predictable, but if something goes wrong you could end up with a huge bill before you even find out about the problem.

          My solution, for my own peace of mind, was to configure traffic shaping inside the VM to throttle the uplink to a more manageable speed, and then set alarms which automatically shut down the instance after observing sustained high traffic, either short-term or long-term. That’s still reliant on correct configuration, however, and it consumes a decent chunk of the free-tier alarms.

          I’d prefer to be able to set hard spending limits on specific resources like CPU time and network traffic, and not have to worry about accidentally running up a bill.
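
          The shutdown-alarm half of that setup looks roughly like this (a sketch only; the instance ID and thresholds are made up, and the in-VM traffic shaping is a separate step done with tc or similar):

          ```python
          import boto3

          cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

          cloudwatch.put_metric_alarm(
              AlarmName="stop-instance-on-runaway-egress",
              Namespace="AWS/EC2",
              MetricName="NetworkOut",
              # Hypothetical instance ID.
              Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
              Statistic="Sum",
              Period=300,                  # bytes sent per 5-minute window
              EvaluationPeriods=6,         # sustained for six windows (30 minutes)
              Threshold=10 * 1024**3,      # ~10 GiB out in any single window
              ComparisonOperator="GreaterThanThreshold",
              # Built-in alarm action: stop the instance, no SNS or Lambda needed.
              AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],
          )
          ```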