• WhatAmLemmy@lemmy.world · 23 hours ago

    The results of this new GSM-Symbolic paper aren’t completely new in the world of AI research. Other recent papers have similarly suggested that LLMs don’t actually perform formal reasoning and instead mimic it with probabilistic pattern-matching of the closest similar data seen in their vast training sets.

    WTF kind of reporting is this, though? None of this is recent or new at all, like in the slightest. I am shit at math, but have a high-level understanding of statistical modeling concepts, mostly as of a decade ago, and even I knew this. I recall a stats PhD describing models as “stochastic parrots”; nothing more than probabilistic mimicry. It was obviously no different the instant LLMs came on the scene. If only tech journalists bothered to do a superficial amount of research, instead of being spoon-fed spin from tech bros with a profit motive…
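    For what the “stochastic parrot” point actually means, here’s a toy sketch: a bigram model that produces fluent-looking text purely by sampling from word-transition statistics. This is deliberately simplistic and not how a real LLM is implemented (transformers are vastly more complex), but it illustrates the underlying idea of probabilistic mimicry with no reasoning involved:

    ```python
    import random
    from collections import defaultdict

    # A toy "stochastic parrot": it only reproduces statistical
    # patterns from its training text, with no understanding at all.
    corpus = "the cat sat on the mat the cat ate the rat".split()

    # Count which word follows which in the training data.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start, length, seed=0):
        """Sample a plausible-looking word sequence by pattern-matching."""
        rng = random.Random(seed)
        out = [start]
        for _ in range(length - 1):
            options = transitions.get(out[-1])
            if not options:  # dead end: no observed continuation
                break
            out.append(rng.choice(options))
        return " ".join(out)

    print(generate("the", 6))  # fluent-looking, zero reasoning
    ```

    Every word it emits was seen in training; it never “knows” anything, it just continues the most statistically familiar pattern — the same criticism these papers level at LLMs, at enormously larger scale.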

    • aesthelete@lemmy.world · 11 hours ago

      If only tech journalists bothered to do a superficial amount of research, instead of being spoon fed spin from tech bros with a profit motive…

      This is outrageous! I mean the pure gall of suggesting journalists should be something other than part of a human centipede!

    • jabathekek@sopuli.xyz · 17 hours ago

      describing models as “stochastic parrots”

      That is SUCH a good description.

    • no banana@lemmy.world · 23 hours ago

      It’s written as if they literally expected AI to be self-reasoning and not just a mirror of the bullshit that is put into it.

      • Sterile_Technique@lemmy.world · 22 hours ago

        Probably because that’s the common expectation due to calling it “AI”. We’re well past the point of putting the lid back on that can of worms, but we really should have saved that label for… y’know… intelligence, that’s artificial. People think we’ve made an early version of Halo’s Cortana or Star Trek’s Data, and not just a spellchecker on steroids.

        The day we make actual AI is going to be a really confusing one for humanity.

    • fluxion@lemmy.world · 23 hours ago

      Clearly this sort of reporting is not prevalent enough, given how many people think we have actually come up with something new these last few years and aren’t just throwing shitloads of graphics cards and data at statistical models.