I think we already came to that conclusion ourselves; TikTok made us aware of it, I think… leading to terms like brainrot and slop.
But it’s good to see it is recognized.
Calculators made mental math obsolete. GPS apps made people forget how to navigate on their own.
Maybe those were good innovations, maybe not. Arguments can be made both ways, I guess.
But if AI causes critical thinking skills to atrophy, I think it’s hard to argue that that’s a good thing for humanity. Maybe the end game is that AI achieves sentience and takes over the world, but is benevolent, and takes care of us like beloved pets (humans are AI’s best friend). Is that good? Idk
Or maybe this isn’t a real issue and the study is flawed, or more realistically, my interpretation of the study is wrong because I only read the headline of this article and not the study itself?
Who knows?
> Calculators made mental math obsolete
People love to pretend that. But it’s still very important to have decent mental math as an assist to know when someone’s bullshitting you.
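Just to illustrate the kind of quick sanity check I mean (a made-up example, not from the article): stacked discounts are a classic spot where people get bullshitted.

```python
# Made-up numbers: "50% off, then an extra 30% off" is often pitched as "80% off".
price = 100.0
sale = price * 0.5 * 0.7   # apply each discount in sequence
print(sale)                # 35.0 -> effectively 65% off, not 80%
```

Rough mental math (halve it, then knock off roughly a third) gets you close enough to spot the gap without a calculator.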
It’s developed by the worst of us and trained on a bunch of shit it read on Reddit. You’re thinking it might be benevolent?
Do people cry when they step on an ant?
Do rich people care when they kill little people?
Why would AI side with the ants or the little people lol
> GPS apps made people forget how to navigate on their own
Not really, though. It makes it easier to get to places you’re unfamiliar with. Maybe it makes the “becoming familiar” part less important for the brain.
I see my advanced tools as akin to a broom.
I can mop floors alright, but I also don’t want to sit down with a cloth to do it.
If I can’t do that myself, and it does it instead of me, that’s not just my tool, that’s my employee, one I now depend on.
‘AI’ companies sell us billions of hours of other people’s labor to replace the need to apply our own experience, and ingrain themselves into our routines. Like the coming of ads, it’s already normalized. But this time, critical parts of our lives have this black-box dependency and subscription.
So, AI users exhibit a reduction in literally the one skill that the AI expects them to actually have?
I should probably go read that link and see if it’s actual degradation or just selection.
Spoiler alert: it was just a survey of the self-reported confidence of folks who admitted to using AI.
Well, to be fair, I never had the idea of sticking pizza toppings on with glue… That’s some next-level Gordian Knot stuff, right there!
Thankfully the slop generated by Copilot et al. is absolutely useless dreck. I’ve had a significant number of tasks end up broken because someone chased a dream promised by AI slop. “Sure, you can do that in Python.” “That’s definitely how that tool works.” Etc.
Also, don’t forget Europe in the 18th century and how reading was destroying the youth. German Wikipedia has a big-ass entry on it.
https://de.m.wikipedia.org/wiki/Lesesucht
The English Wikipedia entry is just a blurb.
They said the same shit about writing, books, radio, and TV.
I think AI, so far, is detrimental to society.

Cons:

- It made it too easy to flood the world with bullshit.
- It will make tracking people’s behavior much easier, while keeping plausible deniability at a level that past horrible regimes could only dream of.
- It will be used to make replacing workers easier.
- It is being used to deny more healthcare (e.g. the Luigi case).

Pros:

- It can be used for good (e.g. in the medical field) by finding issues sooner and developing better cures.
- Using AI to actually learn, though, is a great tool.
- Other scientific advancements.

All in all, I think it, together with social media, is one of the biggest reasons the US is in the state it is.
So the pros are:

- a thing we have no way of knowing how it works, and therefore no way of relying on it
- a thing that helps you do something you then have to do yourself anyway (if you want to learn something from generative-model output, you still need to fact-check it)
- a vague promise that it will lead to something useful in the future
You don’t always have to know how something works to rely on it. Most people couldn’t tell you how a computer works, but they can still do better work with one.

We can verify that it’s better than people at some tasks. E.g. give doctors and an AI 1,000 MRI scans of potential cancer patients, and the AI can detect cancer more accurately than the doctors. So there it’s already a help.

It’s already used to advance different fields, for example reading the text of ancient burned scrolls without unrolling them, since unrolling would destroy them. Also in developing new medicines, etc.

And with learning: yeah, like books, it helps you learn faster but isn’t a requirement. Same here, I can learn much faster now. But I will verify what it tells me.
But I’m not sure if all of that outweighs the shit. 💩 The genie is already out of the bottle; there’s no putting it back.
> You don’t always have to know how something works to rely on it. Most people couldn’t tell you how a computer works, but they can still do better work with one.
>
> We can verify that it’s better than people at some tasks. E.g. give doctors and an AI 1,000 MRI scans of potential cancer patients, and the AI can detect cancer more accurately than the doctors. So there it’s already a help.
I’d really prefer it if my doctor knew why they’re telling me I have cancer!
That would be nice, but as the “proud owner” of medical issues, it’s much more often: “You have this, we don’t know why you have it, and this is how we can manage it.”
You still want your doctor to be knowledgeable, of course, but you also want them to use the best tools at their disposal. Most of them probably couldn’t tell you exactly how an MRI machine works either.
About writing?! Surely you have zero evidence for that.
Oh haha, because it wouldn’t be recorded.
Yeah, and it does harm. Every technology amputates a part of us. The point is deciding whether it’s worth the cost.