• hex@programming.dev · 60 upvotes · 15 days ago

    Facts are not a data type for LLMs

    I kind of like this because it highlights the way LLMs operate: blind and drunk, just really good at predicting the next word.

    • CleoTheWizard@lemmy.world · 26 upvotes · 14 days ago

      They’re not good at predicting the next word, they’re good at predicting the next common word while excluding most unique choices.

      The result is essentially as if you made a Venn diagram of human language and only ever used the center of it.
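
      A minimal sketch of that idea, using a made-up next-token distribution (the words and probabilities below are invented for illustration, not taken from any real model): greedy or low-temperature sampling keeps collapsing onto the most common continuation, while the rarer, more "unique" word choices almost never get picked.

```python
import math
import random

# Toy next-token distribution after a prompt like "The sky is ..."
# (probabilities are made up purely for illustration)
next_token_probs = {
    "blue": 0.62,
    "clear": 0.18,
    "dark": 0.12,
    "falling": 0.05,
    "lavender": 0.02,
    "made of water": 0.01,
}

def sample(probs, temperature=1.0):
    """Sample one token after rescaling the distribution by temperature.
    Lower temperature sharpens it toward the most probable (most common) word."""
    weights = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # floating-point fallback

random.seed(0)
for temp in (1.0, 0.7, 0.2):
    picks = [sample(next_token_probs, temp) for _ in range(1000)]
    share = picks.count("blue") / len(picks)
    print(f"temperature={temp}: 'blue' chosen {share:.0%} of the time")
```

      Even at temperature 1.0 this toy model picks "blue" most of the time; turn the temperature down and the unusual choices effectively vanish, which is the "center of the Venn diagram" effect described above.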

      • hex@programming.dev · 14 upvotes · 14 days ago

        Yes, thanks for clarifying what I meant! AI will never create anything unique unless prompted uniquely, and even then it tends to revert to what you expect most.