• Mossy Feathers (She/They)@pawb.social
    29 days ago

    But an AGI isn’t an LLM. That’s what’s confusing me about your statement. If anything, I feel like I already covered that, so I’m not sure why you’re telling me this. There’s no reason why you can’t recreate the human brain on silicon, and eventually someone’s gonna do it. Maybe it’s one of our current companies, maybe it’s a future company. Who knows. Except that a true AGI would turn everything upside down and inside out.

    • nickwitha_k (he/him)@lemmy.sdf.org
      29 days ago

      I think, possibly, my tired brain at the time thought that you are implying LLM -> AGI. And I do agree that there’s no reason, beyond time and available technology, that a model of a brain cannot be made. I would question whether digital computers are capable of accurately simulating neurons, at least without requiring more components (more bits of resolution).
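      The resolution worry can be made concrete with a toy sketch (nothing here comes from the thread; all parameters are invented): a leaky integrate-and-fire neuron stepped with simple Euler updates, run once at full double precision and once with its membrane voltage quantized to four decimal places each step to mimic a low-resolution state variable.

```python
# Toy sketch, purely illustrative: a leaky integrate-and-fire neuron.
# The optional per-step rounding crudely models a state variable stored
# with limited resolution. All constants are made up for this example.

def simulate(steps=200000, digits=None):
    dt, tau = 0.0001, 0.02        # timestep and membrane time constant (s)
    v_rest, v_thresh = -0.065, -0.050
    i_in = 0.0155                 # constant input drive, pre-scaled to volts
    v, spikes = v_rest, 0
    for _ in range(steps):
        # Euler step toward the asymptote v_rest + i_in
        v += dt / tau * (v_rest - v + i_in)
        if digits is not None:
            v = round(v, digits)  # quantize the membrane state
        if v >= v_thresh:         # threshold crossing: spike and reset
            spikes += 1
            v = v_rest
    return spikes

full = simulate()                 # full double precision
coarse = simulate(digits=4)       # voltage kept to 4 decimal places
print(full, coarse)
```

      In this particular setup the quantized neuron stalls just below threshold (each update rounds back to the previous value) and never spikes, while the full-precision one fires regularly. The point is only that state-variable resolution can qualitatively change simulated dynamics, not that real neuron models behave like this toy.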

      For full disclosure, I am supportive of increasing the types of sentience in the known universe. Though, not at the expense of biosphere habitability. Whether electronic or biological, sharing the world with more types of sentients would make it a more interesting place.

      Except that a true AGI would turn everything upside down and inside out.

      Very likely. Especially if “human rights” aren’t pre-emptively extended to cover non-human sentients. But the existence of AGI alone is not likely to either cause doomsday or save us from it, which seem to be the two most popularly envisaged scenarios.

      • Mossy Feathers (She/They)@pawb.social
        29 days ago

        I think, possibly, my tired brain at the time thought that you are implying LLM -> AGI.

        Ah, okay. I’ve been there lol. I hope I didn’t come off as confrontational, I was very confused and concerned that I had badly explained myself. My apologies if I did.