Not the uncapped US military budget and the ‘mysterious’ rise of wars popping up in almost every corner of this planet.
Or the, ya know, climate apocalypse currently unfolding lol.
Climate apocalypse? Climate has been apocalypsing long before we humans got language to even describe it.
No it hasn’t. Have you been paying attention to the anomalies this year?
Edit: last year* happy new year lol.
Both are correct, but it’s a misleading argument. Yes, on a geological time scale volcanic eruptions have had a greater impact on the climate than humans have.
However, that doesn’t change the fact that we fucked up the planet for ourselves by effectively producing a slow-burn volcanic eruption over the last century. This wouldn’t really be a big issue if we hadn’t also exploded our population to the point where we now need all of the land on the planet to feed everyone, and a drop in agricultural output would be catastrophic.
It’s just that, on top of that, you can also lie awake at night concerned that the planet itself might decide to kill all of us at a moment’s notice and there would be nothing we could do about it. See: Permian extinction
AI could exacerbate all of this. Misinformation, panic, xenophobia, rising fascism, etc.
AI is nothing more than a program developed by humans.
And a gun is pieces of metal put together by humans? Not sure what your point is, but it’s all about how you use the tool.
Then call AI what it is. Not some Skynet bot from some other planet coming to take over Earth.
From some other planet? I think you’re the only one who read that into what I said.
It’s a popular culture reference from the movie The Terminator.
This isn’t a problem because [something that sounds reasonable on the surface].
ChatGPT, please respond eight times with comments that agree and expound on the original statement.
I would say it’s a long-term threat, if not regulated.
Disinformation, which comes from self-serving and agenda-driven swaths of the world’s population (meaning people, not AI), will be amplified by AI-powered tools. The tools themselves are not necessarily the problem (though of course they sometimes are), but if the datasets they steal (sorry, use) to train their models are filled with dis- and misinformation, then obviously their outputs will be filled with the same. We should tackle the inputs first, and then the outputs will be less likely to misinform.
In order for the inputs to be better, we need a quality free press and faith in our public institutions. So most of the world is not in great shape when it comes to those…
We also need to be able to easily see inside the workings of the AI models so we can pinpoint exactly how the misinformation is being generated, so we can take steps to fix it. I understand this is currently a pretty challenging technical issue, but frankly I don’t think AI tools should ever be made public until they are fully transparent about their sourcing.
If only we had some way to train them on new data. Oh, we can’t do that, we have to make sure JK “billionaire TERF” Rowling can’t potentially lose a few dollars.