- cross-posted to:
- technology@lemmit.online
Earlier this year, WIRED asked AI detection startup Pangram Labs to analyze Medium. It took a sampling of 274,466 recent posts over a six-week period and estimated that over 47 percent were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” says Pangram CEO Max Spero. (The company’s analysis of one day of global news sites this summer found 7 percent as likely AI-generated.)
I just had one of these! Literally every image was AI-generated and everything read like it was from OpenAI. It was a Google search for something like “kubernetes custom deployment rules” and the result was something like “kubelat.medium.com”. They just take the most-asked questions and generate entire articles about them.
I just went to the source and asked ChatGPT directly. I got a better answer anyway.
the first person who develops a browser that effectively filters out AI results is going to do very well
How well does the “AI detection startup’s” product work? This is a big unsolved problem but I’d be hecka skeptical.
That is why I liked the comparison with articles from 2018. Then you have comparable texts in the same format and can more easily figure out differences in your analysis.
If true, a jump from 3% to 40% is significant, to say the least.

@Black616Angel The numbers in the article are 7% for the pre-2018 corpus and 47% for the post-2018 corpus. That’s from less than 1 in 10 to almost 1 in 2, or a coin toss…
In 2018, 3.4 percent were estimated as likely AI-generated.
For 2024, with a sampling of 473 articles published this year, it suspected that just over 40 percent were likely AI-generated.
My numbers were from the Originality AI part.
@Black616Angel yes, I’ve realized that and corrected my post while you responded 😉
Maybe the blurb was AI-generated? 🤦🏻‍♂️
It doesn’t, and never will
That’s because of bots like you. (I kid to make a point.)
That’s exactly what a bot would say, to stay undetected.
It was an SEO hellhole from the start, so this isn’t surprising.
Do Forbes next!
It’s not just that it’s AI-generated … it’s also AI-influenced.
I know so many professional office workers who once wrote some of the most boring, sometimes stupid emails because they didn’t know how to write or get their message across, or constantly miscommunicated because they worded things wrong … now all of a sudden they’ve become professional writers and all their emails look like auto-generated messages.
I’m guessing that many writers also take the AI shortcut. They get a bunch of content generated by an AI, then just rewrite it for themselves. Some content I see is lazily edited and some heavily. But I get the feeling that just about everyone is using it, because it’s an easy way to get a bunch of work done without having to think too much.
At work? Yeah, I’m gonna use AI to write that email. I didn’t think or do anything more than the minimum required before, and I’m not starting now. AI just makes it so that the same garbage I would have sent before now smells nice.
If you like writing as an art, why would you have the machine do that for you? If you like thinking, you can do the thinking and let the machine do the typing.
All of these are different uses.
The implication that rewriting GPT output makes one a professional writer … not sure we’re on the same page there. If you know how to use it for those results, great!
Omg, the number of times I’ve clicked on a Medium article in the last month and immediately knew it was AI is so frustrating!!! They aren’t even helpful articles, because you can tell there is no real understanding.
Shitty tech opinions were flooding Medium before, so it’s not much of a difference.
I think the difference is scale. Before, it was x% of humanity producing shitty opinions, where x < 100. Now it’s x% of humanity + AI, where x is, say, 100,000% of humanity. I don’t think we’re currently equipped to separate the wheat from that much chaff.
I knew it would be the first platform to go. The same goes for Substack; that’s next.
Perhaps, but I don’t read anything on Substack unless I’m subscribed. Reputation is the entire point on Substack; without it, the content gets no traffic.
The best part about this is that new models will be trained on the garbage from old models, and eventually LLMs will just collapse into garbage factories. We’ll need filter mechanisms, just like in a Neal Stephenson book.
People learn and write program code with the help of AI. Let this sink in for a moment.
I’m in university and I’m hearing this more and more. I keep trying to guide folks away from it, but I also understand the appeal, because an LLM can analyze the code in seconds and there are no judgments made.
It’s not a good tool to rely on, but I’m hearing of more and more people relying on it as I progress.
The true final exam would be writing code on an air-gapped system.
I’m going into my midterm in 30 minutes, where we will be desecrating the corpses of trees.