- cross-posted to:
- nev@lemmy.intai.tech
- technology@chat.maiion.com
Users of OpenAI’s GPT-4 are complaining that the AI model is performing worse lately. Industry insiders say a redesign of GPT-4 could be to blame.
But the only thing it's actually good at is generating language; if it tries to pretend it knows stuff in other fields, it's quickly exposed as a fraud.
Ah, yes, when I was a kid, I would try to read big texts I understood nothing of and imitate something similar. I thought it made me smarter.
In some sense it did: the probabilities of certain words connecting in certain ways are useful, if you make some connection between them and real entities.
I mean, it did work at school: just spout some filler without turning your brain on. I sometimes start talking like that when I panic after a question.
I can't express my disappointment with ChatGPT. They let loose a bot that makes content farms shriek in joy but messes up basic things if there's no well-trodden answer, won't give you non-mainstream answers (you've likely already seen and watched what it tells you is "really obscure anime"), and genuinely has no tolerance for error, from you or itself.
I think the fact that they are sitting on that sweet, sweet first-to-market money consoles them somewhat.
It doesn't even "know" language. Every time I see it write a poem, it reads like something a 3rd grader would come up with. At the end of the day, language is a way to explain your experience. An LLM doesn't have experiences.
It's consistent though! Most older ones would fluctuate down to the level of ChatGPT sometimes.