Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • Nomecks@lemmy.ca · 2 months ago

    I think the real differentiation is understanding. AI still has no understanding of the concepts it knows. If I show a human a few dogs, they will likely be able to pick out any other dog with 100% accuracy once they understand what a dog is. With AI it’s still just statistical models that can easily be fooled.

    • amorpheus@lemmy.world · edited · 2 months ago

      It’s certainly progressing. I was shopping for bunk beds recently, and one listing was missing a measurement in the diagram. So I drew a red line in and asked ChatGPT how long the red line is, giving it only the photo. Not only did it take the existing measurements from the photo and apply the necessary trigonometry to calculate what I wanted, it also correctly identified the item as a bunk bed with a slide attached - which was exactly the point, since I wanted to know how far the slide would stick out into the room.
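      The kind of arithmetic involved can be sketched in a few lines. A minimal example, assuming the slide forms a right triangle with the bed's vertical side; the numbers here are hypothetical, since the actual listing's measurements aren't given:

      ```python
      import math

      slide_length = 180.0   # cm, along the slide itself (the hypotenuse) - hypothetical
      bed_height = 115.0     # cm, vertical drop from bed deck to floor - hypothetical

      # The slide's horizontal reach into the room is the remaining leg
      # of the right triangle, via the Pythagorean theorem:
      reach = math.sqrt(slide_length**2 - bed_height**2)
      print(f"Slide sticks out roughly {reach:.0f} cm into the room")
      ```

      Whether the model actually performs this computation or pattern-matches to a plausible answer is, of course, the question the thread is debating.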

    • stupidcasey@lemmy.world · 2 months ago

      This is entirely presumptive; we simply do not and cannot know how much they understand. It all boils down to: if it looks like a duck and quacks like a duck, is it a duck?