Over the time of ChatGPT’s existence I’ve seen so many people hype it up like it’s the future and will change so much, and after all this time it’s still just a chatbot.
Exactly lol, it’s basically just a better Cleverbot
SmarterChild ‘24
It’s actually insane that there are huge chunks of people expecting AGI anytime soon because of a CHATBOT. Just goes to show these people have zero understanding of anything. AGI is more like 30+ years away minimum; Andrew Ng thinks 30-50 years. I would say 35-55 years.
At this rate, if people keep cheerfully piling into dead ends like LLMs and pretending they’re AI, we’ll never have AGI. The idea of throwing ever more compute at LLMs to create AGI is “expect nine women to make one baby in a month” levels of stupid.
People who are pushing the boundaries are not making chat apps for GPT-4.
They are privately continuing research, like they always were.
But they’re also having to fight for more limited funding among a crowd of chatbot “researchers”. The funding agencies are enamored with LLMs right now.
In my experience that’s not the case. These teams are not very public but are very well funded.
https://machinelearning.apple.com/research/massively-multimodal
Thanks, Buster. It’s reassuring to hear that.
I wouldn’t say LLMs are going away any time soon. Three or four years ago I did the Sentdex YouTube tutorial to build a neural network from scratch to beat a Flappy Bird game. They are really impressive when you look at the underlying math. But the math isn’t precise enough to be reliable for anything more than entertainment. Claiming it’s AI, much less AGI, is just marketing bullshit, though.
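To make the “math isn’t precise enough” point concrete: at the core, these models score candidate next tokens, turn the scores into probabilities with a softmax, and then sample, so the output is inherently probabilistic rather than exact. A minimal toy sketch of that step (NumPy assumed, numbers made up):

```python
# Toy illustration of why LLM output is probabilistic, not exact:
# the model produces scores (logits) for each candidate next token,
# softmax turns them into probabilities, and one token is *sampled*.
import numpy as np

rng = np.random.default_rng(0)

tokens = ["cat", "dog", "bird"]      # toy vocabulary
logits = np.array([2.0, 1.0, 0.1])   # toy model scores

# Softmax: exponentiate (shifted for numerical stability) and normalize.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Sampling means the same prompt can yield different continuations.
for _ in range(5):
    print(rng.choice(tokens, p=probs))
```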
You’re saying you think LLMs are not AI?
I’m not sure what AI is these days, but according to Merriam-Webster it’s “the capability of computer systems or algorithms to imitate intelligent human behavior.” So it’s debatable.
I don’t think it’s just marketing bullshit to think of LLMs as AI… The research community generally does, too. Like the AI section on arxiv is usually where you find LLM papers, for example.
That’s not like a crazy hype claim like the “AGI” thing, either… It doesn’t suggest sentience or consciousness or any particular semblance of life (and I’d disagree with MW that it needs to be “human” in any way)… It’s just a technical term for systems that exhibit behaviors based on training data rather than explicit programming.
I’m thinking 36-56 years
AGI coming tomorrow! (tomorrow never comes)
AGI is the new Nuclear Fusion. It will always be 30 years away.
All they had to do was make BonziBuddy link up with ChatGPT
Tbh I think it’s a real possibility that OpenAI knows they can’t meet people’s expectations with GPT-5, so they’re posting articles like this and basically trying to throw out anything they can and see what sticks.
I think if GPT-5 doesn’t pan out, it’s time to accept that things have slowed down and that the hype cycle is over. This could very well mean another AI winter.
We can only hope
Really? I use it constantly
For what? I have zero use for any AI products
My two use cases are project brainstorming and boilerplate code, which saves me a lot of time. For example, sometimes I find an interesting paper and want to try it out in Python. If the authors didn’t provide code, it would take some time and trial and error to get it running. Instead I just copy the whole paper into ChatGPT and get an initial script that sometimes even works on its first try. But even when it doesn’t, that’s not the point; I can do the last steps myself. It really is a time saver for me when it comes to programming.
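For what it’s worth, that workflow is easy to script too. A minimal sketch with the openai Python client; the model name, prompt, and paper.txt file are placeholders, not necessarily what the commenter actually used:

```python
# Minimal sketch: paste a paper's text in, get a starter script out.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name and file path below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("paper.txt") as f:
    paper_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        {"role": "system",
         "content": "You write runnable Python implementations of methods "
                    "described in research papers."},
        {"role": "user",
         "content": "Write an initial Python script implementing the method "
                    "in this paper:\n\n" + paper_text},
    ],
)

# The first draft won't always run; treat it as a starting point.
print(response.choices[0].message.content)
```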
It’s really useful for programming. It’s not always right, but it has good approaches, and you can ask it to write tedious parts of your code like long switch statements. Most of my programming problems were solved just by explaining the problem, like rubber-duck debugging.
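The “long switch statements” point is about mechanical mapping code, the kind an LLM types out faster than you can. In Python the equivalent is a long match block; a small example of what that boilerplate looks like (standard HTTP status codes, purely illustrative):

```python
# The kind of tedious dispatch code people happily delegate to an LLM
# (standard HTTP status codes, used here just for illustration).
def status_message(code: int) -> str:
    match code:
        case 200:
            return "OK"
        case 301:
            return "Moved Permanently"
        case 400:
            return "Bad Request"
        case 401:
            return "Unauthorized"
        case 403:
            return "Forbidden"
        case 404:
            return "Not Found"
        case 500:
            return "Internal Server Error"
        case _:
            return f"Unknown status: {code}"

print(status_message(404))  # Not Found
```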
I use it for programming questions:
- immediate replies so I don’t have to switch tasks while praying for an answer
- no suggestions that I just do the whole thing differently
- infinite patience
Don’t forget the other benefits of using AI for programming:
- it may make up shit that doesn’t exist or just give you wrong syntax
- it will give you the same wrong answer repeatedly until you get irritated and it hangs up on you
- it’s way too goddamned excited while giving you shit answers until you run out of patience
I like using it for help, but goddamn do I want to throw my laptop out the window some days.
💯. Although sometimes I feel like berating the AI is more satisfying; it’s all his fault I haven’t solved this yet!