Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The move comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”

  • cmnybo@discuss.tchncs.de · 8 months ago

    I have never trusted AI. One of the big problems is that the large language models will straight up lie to you. If you have to take the time to double check everything they tell you, then why bother using the AI in the first place?

    If you use AI to generate code, often times it will be buggy and sometimes not even work at all. There is also the issue of whether or not it just spat out a piece of copyrighted code that could get you in trouble if you use it in something.

    • TimeSquirrel@kbin.social · 8 months ago (edited)

      I’m using GitHub Copilot every day just fine. It’s great for fleshing out boilerplate and other tedious things where I’d rather spend the time working out the logic instead of the syntax. If you know how to program and don’t treat it as if it can do it all for you, it’s actually a pretty great time saver. An autocomplete on steroids, basically. It integrates right into my IDE and types out code WITH me at the same time, like someone sitting right beside you on a second keyboard.
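      As an illustration of that workflow (hypothetical names and code, not an actual Copilot transcript): the programmer writes the signature and docstring, the assistant proposes the tedious field-by-field body, and the human reviews it before accepting.

      ```python
      # Hypothetical "autocomplete on steroids" session: the human supplies
      # the intent, the assistant fills in the repetitive boilerplate.

      from dataclasses import dataclass


      @dataclass
      class User:
          name: str
          email: str
          age: int


      # The programmer types the signature and docstring...
      def user_to_dict(user: User) -> dict:
          """Serialize a User for a JSON API response."""
          # ...and the assistant suggests the tedious field-by-field mapping,
          # which the programmer reads and accepts (or rejects):
          return {"name": user.name, "email": user.email, "age": user.age}


      print(user_to_dict(User("Ada", "ada@example.com", 36)))
      ```

      The point of the comment above is that the human still owns the logic; the tool only saves keystrokes on the parts whose shape is already obvious.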

    • abhibeckert@lemmy.world · 8 months ago (edited)

      “One of the big problems is that the large language models will straight up lie to you.”

      Um… that’s a trait AI shares with humans.

      “If you have to take the time to double check everything they tell you, then why bother using the AI in the first place?”

      You have to double-check human work too. So, since you’re going to check everything anyway, does it really matter where the mistakes come from?

      “If you use AI to generate code, often times it will be buggy”

      … again, exactly the same as a human. The difference is that the LLM writes buggy code really fast.

      Assuming you have good testing processes in place (and you’d better have those), AI-generated code is perfectly safe. In fact, it’s a lot easier to find bugs in code you didn’t write yourself.
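      A minimal sketch of that argument, with hypothetical code: a plain assertion-style test flags a classic off-by-one bug of the kind a generator might produce, no matter who (or what) wrote the function.

      ```python
      # Hypothetical generated code with a subtle bug: it's supposed to
      # return 1 + 2 + ... + n, but range(n) stops at n - 1.
      def sum_to_n(n: int) -> int:
          return sum(range(n))  # bug: should be range(n + 1)


      # The corrected version a reviewer would write after the test fails.
      def sum_to_n_fixed(n: int) -> int:
          return sum(range(n + 1))


      # The tests you'd "better have" catch the bug immediately:
      assert sum_to_n_fixed(5) == 15
      try:
          assert sum_to_n(5) == 15
      except AssertionError:
          print("test caught the bug in the generated code")
      ```

      The test doesn’t care about the code’s provenance; that is the whole point of the claim above.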

      “There is also the issue of whether or not it just spat out a piece of copyrighted code that could get you in trouble”

      Um, no, that’s not how copyright works. You’re thinking of patents. But human-written code has the same problem.