also lmao @ one of the comments:

> Maybe future versions of AI chatbots could use something like this as a shared persistent memory that all chatbot instances could reference as a common ground truth. The only trick would be getting the system to use sound epistemology and reliably report uncertainty instead of hallucinations.
This will fix all problems with AI if only we fix the fundamental flaw in the architecture, guys!
I keep seeing this sort of thinking on /r/singularity: people who are sure LLMs will be great once they have memory, ground-truth factual knowledge, or some other feature that the promptfarmers have in fact already tried (and failed) to bolt on via fancier prompting (e.g. RAG) or fine-tuning, and that would require a massive reinvention of the entire paradigm to actually fix. That, or they describe what basically amounts to a reinvention of expert systems like Cyc.
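(For anyone who hasn't seen it spelled out, here's roughly all RAG amounts to; the `llm_complete` stub and the toy keyword retriever below are hypothetical placeholders, not any real library's API. The point is that the "ground truth" ends up as just more text pasted into the prompt.)

```python
# Minimal RAG sketch: retrieval-augmented generation is, at bottom,
# "search for some documents and paste them into the prompt".

def llm_complete(prompt: str) -> str:
    # Stand-in for an actual LLM completion call; plug in your provider.
    raise NotImplementedError("swap in a real completion API here")

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by naive keyword overlap with the query.
    # Real systems use embeddings and a vector store, but the role is the same.
    query_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def rag_answer(query: str, docs: list[str]) -> str:
    # The "grounding" is just more text in the context window;
    # nothing stops the model from ignoring it and hallucinating anyway.
    context = "\n\n".join(retrieve(query, docs))
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return llm_complete(prompt)
```

Note that the "ONLY use the context" instruction is a request, not a constraint; the model can and regularly does answer from its weights anyway, which is why this never delivered the reliable ground truth the hype promised.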