• Steve@awful.systems · 2 days ago

    I’m quite sure part of my last job loss came down to my open refusal to use AI, which extended to my criticism of AI-generated code being added to a codebase I was expected to manage and maintain. Being careful is seen as friction now; careful translates to slow. No matter how much time is spent fixing things broken by shitty generated code, the fixing is counted as productivity more than careful production is.

    • fullsquare@awful.systems · 2 days ago

      slow is smooth, smooth is fast

      the most productive way to do things is to do it deliberately and with good planning, at least in my field which is not coding related in any way

      • BlueMonday1984@awful.systems · 2 days ago

        the most productive way to do things is to do it deliberately and with good planning

        Two things which coding is currently allergic to, as the rise of vibe coding has demonstrated.

        • fullsquare@awful.systems · 2 days ago

          get yourself into a career where not doing things carefully makes them either stop working or generate accidents, it’ll usually stop managerial assholes from forcing you to do things the wrong way

  • HedyL@awful.systems · 3 days ago

    Refusing to use AI tools or output. Sabotage!

    Definitely guilty of this. I refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).

    I work in the field of law/accounting/compliance, btw.

    • HedyL@awful.systems · 3 days ago

      Maybe it’s also considered sabotage if people (like me) prompt the AI with 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time despite clearly worded prompts, and then refuse to keep trying. I guess you’re expected to try and try again with different questions until one correct answer comes out, then use that one to “evangelize” about the virtues of AI.

      • Slatlun@lemmy.ml · 2 days ago

        This is how I tested too. It failed. Why would I believe it on anything else?

    • ulterno@programming.dev · 3 days ago

      Even better to do the correspondence in writing.
      That way you can show others (and hope that someone cares) what you rejected and what they were trying to push.

      • HedyL@awful.systems · 3 days ago

        That may only help if the people in charge understand why it’s wrong. “But it sounds correct!” etc.

        • ulterno@programming.dev · 3 days ago

          Not a problem.

          If it manages to stay in history, hopefully someone after the next dark ages will read it and give you vindication.

    • tazeycrazy@feddit.uk · 3 days ago

      You can definitely ask the AI for more jargon and add information about irrelevant details to make it practically unreadable. Pass this through the LLM to add more vocabulary, deep-fry it and send it to management.

  • YourNetworkIsHaunted@awful.systems · 2 days ago

    I can’t help but feel that, no matter how well-intentioned the actual recommendations (i.e. listen to your people when they tell you the AI is shit), this headline is going to be used to justify canning anyone who isn’t sufficiently on board with wherever the C-suite wants to go. Even the generous (read: accurate) example of the historical Luddites could be used to tar people as saboteurs and enemies of progress, which would give a callous executive license to do the things they wanted to do anyway to try and increase profits.

    This bubble can’t pop soon enough, before anyone is truly reliant on the base LLMs operated directly by OpenAI and other bottomless money pits.

  • BlueMonday1984@awful.systems · 3 days ago

    CIO even ends with talking up the Luddites — and how they smashed all those machines in rational self-defence.

    I genuinely thought this wasn’t true at first and went to check. It’s completely true; a fucking business magazine’s giving the Luddites their due:

    Regardless of the fallout, fractional CMO Lars Nyman sees AI sabotage efforts as nothing new.

    “This is luddite history revisited. In 1811, the Luddites smashed textile machines to keep their jobs. Today, it’s Slack sabotage and whispered prompt jailbreaking, etc. Human nature hasn’t changed, but the tools have,” Nyman says. “If your company tells people they’re your greatest asset and then replaces them with an LLM, well, don’t be shocked when they pull the plug or feed the model garbage data. If the AI transformation rollout comes with a whiff of callous ‘adapt or die’ arrogance from the C-suite, there will be rebellion.”

    It may be in the context of warning capital not to anger labour too much, lest they inspire resistance, but it’s still wild to see.

    • David Gerard@awful.systems · 3 days ago

      reviewing his history, i don’t think this article was actually written by the sort of commie who ends up at finance papers then belatedly remembered to tone it down a bit, but you’d be forgiven for thinking so

  • Soyweiser@awful.systems · 3 days ago

    It is not sabotage if they are helping you not commit to a long term strategy that is detrimental to the company.

    • marcos@lemmy.world · 3 days ago

      Well, it is. It’s not sabotaging the company, but it’s absolutely sabotaging the initiative. (You can absolutely sabotage saboteurs.)

      That said, it’s not sabotage, because none of those actions is actually sabotage. It’s just people telling their managers the AI is bad at the job, or failing to make it good.

  • TootSweet@lemmy.world · 3 days ago

    The Pivot to AI article says 31%, but the source says 41%. I think Pivot to AI just accidentally a digit a little bit.

    Edit: oh, 31% of employees overall, but 41% of Millennial and Gen-Z employees.