- cross-posted to:
- nev@lemmy.intai.tech
- technology@chat.maiion.com
Users of OpenAI’s GPT-4 are complaining that the AI model is performing worse lately. Industry insiders say a redesign of GPT-4 could be to blame.
The model has become inbred: it’s now impossible to scrape the web without ingesting AI-generated content, which is full of “hallucinations” and other weird artifacts. The last opportunity to collect “uncontaminated” training data was sometime in mid-2022.
Not to say that it’s causing this particular problem, but this issue will emerge eventually. Garbage in = garbage out. Eventually GPT-19 will grow a mighty Habsburg chin.
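For a toy picture of how that inbreeding plays out, here’s a minimal simulation (everything here is made up for illustration and is nothing like real LLM training): fit a distribution to data, sample from the fit, refit on the samples, repeat.

```python
# Toy picture of "model collapse": each generation is trained only
# on samples from the previous generation's model instead of on
# real data. Illustrative sketch only, not how LLMs are trained.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(30):
    # "Train": fit a Gaussian to whatever data we currently have.
    mu, sigma = data.mean(), data.std()
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
    # "Scrape the web": the next training set is sampled entirely
    # from the fitted model, i.e. from AI output, not from reality.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

With a finite sample the fitted sigma is biased slightly low, and the errors compound, so over generations the distribution drifts and narrows; that’s the statistical version of the Habsburg chin.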
Maybe not yet, but…
We’te getting there, hopefully.
Scrapped?.. Or scraped?
absolutely scraped, fixed
Also, “We'te”, which I believe is a Klingon name.
…is Facebook popular with Polish people? Or was this a weird Polish joke I don’t get?
Very. Twitter never took off among the general population (only politicians, journalists, bot farms, and people who troll politicians and journalists), TikTok is for kids, and Instagram is popular, but again mostly among influencers and people who need to show off pictures rather than as a default social media app. I don’t really know where Americans and Western Europeans moved to from Facebook.
All the articles with very specific titles, but then incredibly generic content, piss me off to no end.
Part of the reason why debugging Windows is such a pain. Another part is the so-called experts in the forums.
“Make sure your drivers are up to date!” Another job well done.
Also the articles that are plagiarized but run through a thesaurus bot to bypass search engine penalties for being plagiarized, often to the point of incomprehensibility. Yes, I’d love to read an article about my favorite vagabondlike, Deceased Cells.
Nah, GPT makes it a lot easier; it’s the thing it’s actually good at.
Before, they were autogenerated with bad English; GPT can generate good English that is equally devoid of content.
That hasn’t happened yet. Most likely they quantized GPT-4 more. It’s still based on the same training data.
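For anyone wondering what “quantized” means here, a rough sketch of post-training int8 quantization (a generic technique; whatever OpenAI actually did to GPT-4, if anything, is not public):

```python
# Rough sketch of symmetric int8 post-training quantization.
# Generic technique for illustration only; OpenAI hasn't published
# what (if anything) they did to GPT-4.
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights onto the int8 range [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max rounding error:", np.abs(w - w_hat).max())
```

Each weight only moves a tiny bit, but across hundreds of layers those rounding errors can nudge outputs enough for heavy users to notice, which is the theory being floated here.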
I suspect future models are going to have to put more focus on learning with techniques closer to what humans use, and on cognition.
Like, compared to a human, these language models need very large quantities of text input. When humans first learn language they get lots of visual input along with language input, and can test their understanding with trial-and-error feedback from other intelligent actors. I wonder whether those factors greatly increase the rate at which understanding develops.
Also, humans tend to cogitate on inputs while ingesting them during learning. So if the information in new inputs disagrees with current understanding, those inputs are less likely to affect it (there’s a whole “how to change your mind” skill here that people have to learn, but if we’re training a model on curated data that’s probably less important for early model training).
I don’t know details of how model training works, but it would be interesting to know if anyone is using a progressive learning technique where the model that is being trained is used to judge new training data before it is used as a training input to update the model’s weights. That would be kind of like how children learn by starting with very simple words and syntax and building up conceptual understanding gradually. I’d assume so, since it’s an obvious idea, but I haven’t heard about it.
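Here’s a rough sketch of the kind of thing I mean (toy logistic regression with a made-up threshold and schedule; a real version would presumably score batches by loss or perplexity):

```python
# Sketch of "the model judges new training data": only examples the
# current model already finds easy (low loss) are used for updates,
# and the difficulty threshold is loosened over time. Toy data and
# numbers; a real LLM version would filter batches by perplexity.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data.
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.5 * rng.normal(size=1000) > 0).astype(float)

w = np.zeros(5)
lr, threshold = 0.1, 0.7

def loss_and_grad(w, x, label):
    p = 1.0 / (1.0 + np.exp(-x @ w))  # sigmoid prediction
    eps = 1e-9
    loss = -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))
    grad = (p - label) * x            # gradient of the logistic loss
    return loss, grad

for epoch in range(5):
    used = 0
    for x, label in zip(X, y):
        loss, grad = loss_and_grad(w, x, label)
        if loss < threshold:          # the model vets the example first
            w -= lr * grad
            used += 1
    threshold *= 1.5                  # gradually admit harder examples
    print(f"epoch {epoch}: trained on {used}/{len(X)} examples")
```

For what it’s worth, this general idea does show up in the literature under names like curriculum learning and self-paced learning, so it’s not just an obvious-sounding dead end.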
For fun I asked ChatGPT about that progressive learning approach, and it seems to like the idea.
I wish I had more time to undertake some experiments in model training, this seems like it would be a really fun research direction.
Sorry for the ‘wall of AI text’: