And by super intelligence we mean connected to a lot of things and able to wreak significant havoc, but absolutely fucking worthless for complex thought.
#Skynub
“scientist”
And what are the power consumption rates on these super AIs? Can we afford that with our power grids? Our current situation with existing AI is already getting murky.
Humanity: “Super AI, what’s the best way to slow climate change?” Super AI: “Turn me off.”
Don’t worry, OpenAI has made a deal with a fusion power company (also heavily connected to Sam Altman) that hasn’t actually done anything exceptional yet. So when they finally crack fusion, AI can just be powered by that!
Sounds like me trying to plan my Dyson Sphere at the beginning of a Stellaris game.
Been trying out the demo for a game called Airborne Empire. You should give it a go, you’ll have more moments like this.
Right around the time of the linux desktop then.
The year of the Linux desktop is already here, just not the way the geeks hoped. Most people do their everyday computing on phones now, most phones run on a Linux kernel.
Windows 11 comes with WSL, and the entire OS is mostly a front-end for Microsoft’s cloud services now, which run on Linux. …none of those are desktops?
I mean, theoretically, yes.
I mean, it could be that some guy in his basement has been working on it in total secrecy and it shows up tomorrow.
But my guess is that the likely timeline is further out than either.
I seriously doubt that what we’re going to see is a single “Eureka” moment that gives us both AGI and manages to greatly surpass humans.
I would expect to see a more incremental process, where publicly visible systems get closer and closer to that point. And what OpenAI and friends are doing isn’t close. It’s cool and useful for a lot of things, but it isn’t a generalized system for solving problems.
Exactly. If you look at timelines for significant human achievement, you’ll think innovation comes in waves. But if you zoom in a bit, it’s really a bunch of ripples leading to pretty steady innovation.
For example, EVs exploded with Tesla, but they’d been around for decades; they just didn’t catch on. The innovation to get there was steady, but adoption was quick once a viable product was available and marketed well.
The same is true for AI. I learned about generative AI in college over a decade ago, and the source material was old even then (IIRC the old Lisp machines were supposed to be used for AI). It exploded because it got just good enough to be viable, and it was marketed well. The actual innovation was quite gradual.
This is the best summary I could come up with:
We may not have reached artificial general intelligence (AGI) yet, but as one of the leading experts in the theoretical field claims, it may get here sooner rather than later.
During his closing remarks at this year’s Beneficial AGI Summit in Panama, computer scientist and haberdashery enthusiast Ben Goertzel said that although people most likely won’t build human-level or superhuman AI until 2029 or 2030, there’s a chance it could happen as soon as 2027.
After that, the SingularityNET founder said, AGI could then evolve rapidly into artificial superintelligence (ASI), which he defines as an AI with all the combined knowledge of human civilization.
Last fall, for instance, Google DeepMind co-founder Shane Legg reiterated his more than decade-old prediction that there’s a 50/50 chance that humans invent AGI by the year 2028.
Best known as the creator of Sophia the humanoid robot, Goertzel has long theorized about the date of the so-called “singularity,” or the point at which AI reaches human-level intelligence and subsequently surpasses it.
Then there’s the assumption that the evolution of the technology would continue down a linear pathway as if in a vacuum from the rest of human society and the harms we bring to the planet.
The original article contains 524 words, the summary contains 201 words. Saved 62%. I’m a bot and I’m open source!