So if I run enough different AI LLM models and let them communicate, I can create consciousness?
No, because LLMs are just a mathematical blender with ONE goal in mind: construct a plausible next sentence. They have no thoughts and no self-corrective inner loop; they just spit out sentences.
You MIGHT get something that passes a Turing test with enough feedback wired in, but at that point the “consciousness” would be coming from the systemic complexity of the whole setup, and still very much not from the LLMs themselves.
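To make the “just spit out sentences” point concrete, here is a minimal sketch of the autoregressive loop that generation boils down to. It assumes the Hugging Face `transformers` library and the small `gpt2` checkpoint purely for illustration; the prompt and the 20-token limit are arbitrary choices, but the structure (predict one token, append it, repeat) is the core mechanism being debated.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch: any causal LM works the same way as "gpt2" here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The brain is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                           # generate 20 tokens, one at a time
        logits = model(ids).logits[0, -1]         # scores for the next token only
        next_id = torch.argmax(logits)            # greedy pick: no goal beyond this token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Everything the model “does” during generation is this loop; any feedback, memory, or sensor input would have to live in the surrounding system, which is exactly where the comment above locates any would-be “consciousness.”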
In my opinion you are giving way too much credit to human beings. We are mainly just machines that spit out sentences.
No, you are giving too much credit to LLMs. Thinking LLMs are capable of sentience is as stupid as thinking individual neurons could learn physics.
If you gave them all access to real-world, real-time sensor data… over an extended period of time… hmm.