IT DIDN’T TAKE long. Just months after OpenAI’s ChatGPT chatbot upended the startup economy, cybercriminals and hackers are claiming to have created their own versions of the text-generating technology. The systems could, theoretically at least, supercharge criminals’ ability to write malware or phishing emails that trick people into handing over their login information.

  • HTTP_404_NotFound@lemmyonline.com
    1 year ago

    I don’t see this as a bad thing.

    Malware that breaks due to bugs any normal sane developer would have detected.

    My experience with ChatGPT is that it's a great TOOL. But the code it generates is very frequently incorrect. The problem is that the code it generates LOOKS good, and will likely even mostly work.

    • mobyduck648@beehaw.org
      1 year ago

      That’s fundamentally why you can’t replace a software engineer with ChatGPT; only a software engineer has the skill set to verify the code isn’t shit, even if it superficially works.

      • HTTP_404_NotFound@lemmyonline.com
        1 year ago

        Yup.

        I find it can be quite a useful tool. But I also know how to spot its mistakes. I had it generate and clean up some code the other day, and found 4 or 5 pretty big issues with it, which a more novice developer would hardly have detected.

        After telling it about its own issues, it was able to identify and correct them.

        It’s kind of like mentoring a new developer.

      • CanadaPlus@lemmy.sdf.org
        1 year ago (edited)

        I’m not sure being used for a stated purpose (like generating code) in a way that you just don’t agree with counts as a “vulnerability”, though. Same thing as me using a drill to put a hole in a person; that’s not a malfunction, I’m just an asshole.

        We’re talking about making an AI that can’t be misused at this point, and of course that’s a famously hard problem, especially when we don’t really understand how the basic technology works.