Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
"For those in writing related jobs, they may find lucrative work cleaning up attempts to sidestep them with AI slop, squeezing hefty premiums from desperate clients who find themselves lacking leverage over them.
Well, seems I was pretty close - NBC news recently reported humans are being hired to clean up AI slop. My prediction it would be lucrative was off the mark, though - artists called in for de-slopping work are getting paid less than if they were simply hired to create the work themselves. Clearly, I was being overly optimistic.
You want my take, anyone who gets hired for slop cleanup should try to squeeze as much cash out of their clients as possible - they showed open contempt for humanity by choosing a clanker, they need to be shown the consequences.
Me, 17 days ago
this really isn’t a hard guess
You want my take
probably not
anyone who gets hired for slop cleanup should try to squeeze as much cash out of their clients as possible
“people should try to get paid well”? that’s your whole take? really? you thought this was worthwhile posting? not, maybe, spitballing ideas for how people should get paid well? some advice on how to negotiate with clients who are quite likely to be pennypinching types (evidenced by them trying to get as much as possible for free)? none of that, just more fluff? okay then
So I learned about the rise of pro-Clippy sentiment in the wake of ChatGPT and that led me on a little ramble about the ELIZA effect vs. the exercise of empathy https://awful.systems/post/5495333
Not worthy of a third post imho, but Scott made the trilogy post: https://scottaaronson.blog/?p=9108 where he subtly walks back his insane, proudly kid-killing claim to the more reasonable claim that Israel needs to defeat Hamas.
For a man angry about his detractors being intellectually dishonest(**), this is very typical:
Incredibly, commenters on Peter Woit’s blog then blamed me for this antisemitic image, mistakenly imagining that I’d created it myself, and then used their false assumption as further proof of my mental illness
For context, it was an AI slop image, and he named it after Woit, and didn’t provide any details from the email itself. Though the whole affair is quite a good reason to at least put this person’s name/email out there imho.
But also:
I’m even grateful, in a way, to SneerClub, and to Woit and his minions. I’m grateful to them for so dramatically confirming that I’m not delusional: some portion of the world really is out to get me. I probably overestimated their power, but not their malevolence. […]
His lack of self-awareness and reading skills remains as bad as ever. The power thing was partially in his first blogpost already, where he realized the ‘manhaters’ were just a small group. And the latter makes no sense unless you treat the random cartoon guy (who nobody here or at Woit expressed support for) as representative of actual sc people, or fans of Peter.
The last part is just nuts, a whole bit of rationalization about why he is actually justified, and will change the world. Not realizing everyone was horrified because he created a thought experiment to justify a genocide:
Reading the SneerClubbers’ armchair diagnoses of my severe mental illness, paranoia, persecution complex, grandiosity, etc. etc. I had the following thought, paraphrasing Shaw:
Yes, they’re absolutely right that psychologically well-adjusted people generally do figure out how to adapt themselves to the reigning morality of their social environment—as indicated by the Asch conformity test, the Milgram electric-shock experiment, and the other classics of social psychology.
It takes someone psychologically troubled, in one way or another, to persist in trying to adapt the reigning morality of their social environment to themselves.
If so, however, this suggests that all the moral progress of humanity depends on psychologically troubled people—a realization for which I’m deeply grateful.
this bit opens up so many questions and remarks, and is just silly in some ways. Yes people who are dissatisfied will push for change, why is this a revelation? But do they have the power? Is it justified? Jared Taylor(*) also pushes for a changed morality system, but I wouldn’t consider that desirable. It also leaves out that people will push for change because they just want to profit from it. Which is likely a much bigger driver of change, see the libertarians pushing for less regulation, because the dumb and distracted deserve to be scammed.
Anyway we got called out twice!
*: a white nationalist piece of shit who, from what I heard, is notable because compared to his peers he isn’t a raging anti-Semite. At least not openly.
**: Edit: as nobody mentioned the whole ‘Peter is intellectually dishonest’ affair, I have to say one thing. Peter is imho not intellectually dishonest for leaving out that the people who tie their children to the train track could stop at any time. The people who don’t mention this just don’t think it is a convincing argument. Yes, people who take hostages could release the hostages at any time, still no reason to shoot through the hostages. Derail the trolley! The way Russia dealt with the opera hostage crisis was considered bad for a reason (I hope I don’t have to point out that the hostage takers also were in the wrong here). Forgot who it was who mentioned it here, but the whole ‘I would be glad to not flip the switch’ is one of the fucked up parts, it would scar people for life to make that choice.
Another edit: While thinking I realized the whole disgusting email thing is also partially due to the asshole filter effect of Scott closing his comments (which was smart tbh, he just should have told the people emailing him to fuck off): https://siderea.dreamwidth.org/1209794.html. Left a hopefully thoughtful comment about this on Peter’s blog (I looked at it just from the comment-closing aspect and not with the further conflict in mind, as I have not begun to think about that, and don’t want to think about it in the context of what Scott wrote in his first blog post). Not sure if he will let it through moderation (and it is fine if he doesn’t, and I doubt he will as he seemingly just closed the comments and went on vacation, have a good one Peter, apologies that your blog became a battleground over this). But I thought the concept was important enough as an idea to share, and it also explains why you get the shittier people reacting when you go ‘please don’t react here’. A thing we should also be aware of: the drive-by commenters (which was worse on reddit) tend to be the sediment of the crop.
Shamelessly reproduced from the other place:
A quick summary of his last three posts:
“Here’s a thought experiment I came up with to try to justify the murder of tens of thousands of children.”
“Lots of people got mad at me for my last post; have you considered that being mad at me makes me the victim and you a Nazi?”
“I’m actually winning so much right now: it’s very normal that people keep worriedly speculating that I’ve suffered some sort of mental breakdown.”
Gonna repost a bit from a comment I left on reddit:
Read swlabr’s reply first btw.
Anyway, something useful perhaps: the World Central Kitchen seems to help out with the famine so donating to them might be useful, see comment below. Donate to the UNRWA organisation instead. Not comfortable donating on a link provided by a random user (smart!), or want something to show for your donation? The current Humble book bundle is donating to the WCK (do not forget to adjust the sliders) and you get a nice collection of Martha Wells books (The Murderbot saga is great). Or buy the play for peace bundle which is donating to the UNRWA USA, and get a shitton of games and other stuff. (All links are affiliate free from my end, I know there is a system set up for Humble stuff, but I don’t use that). E: buying the last one is also a fun way to boost yourself into Spiders Georg levels of video game ownership. So you could do it for that joke alone. Just to claim you own ~300 videogames.
I’ve followed Jose Andres on insta for about 6 years now, and while I’d love to only have nice things to say about him and WCK, I know that there is resentment/resistance to the actions of WCK, for good reason. This is the first thing that pops up, a page detailing how WCK works with the IDF and how that is not in the best interests of Palestine. What’s also really gross, ghoulish and troublesome is that, despite the fact that Israel has literally killed WCK volunteers, Andres still works with them, and is largely pro-Israel. That being said, everything about this is fucked anyway and I imagine that if you’re donating, your heart is in the right place. But, uh, yeah, adjust those sliders.
E: ok I have fully read the page. Fuck Andres, fuck WCK, please do not donate to them. Donate to UNRWA instead, or any organisation that is actually willing to call what Israel is doing a genocide.
Thanks for the info, will link this comment on reddit as well. Edited both posts, and yeah I just knew of both book/game projects and was going via that; I used the Humble link first, that is why WCK came out on top and not UNRWA.
I’m even grateful, in a way, to SneerClub, and to Woit and his minions. I’m grateful to them for so dramatically confirming that I’m not delusional: some portion of the world really is out to get me. I probably overestimated their power, but not their malevolence. […]
Honestly what he should actually be grateful for is that all his notoriety ever amounted to[1] was a couple of obscure forums going ‘look at this dumb asshole’ and moving on.
He is an insecure and toxic serial overreactor with shit opinions and a huge unpopular-young-nerd chip on his shoulder, who comes off as being one mildly concerted troll effort away from a psych ward at all times. And probably not even that, judging from Graham Linehan’s life trajectory.
[1] besides Siskind using him to broaden his influence on incels and gamer gaters.
Unmonitored RSD is a real sonuvabitch
While this is close to ‘look at what you made me do’ territory, we would sneer a lot less at him if he didn’t blame us for everything. ‘People who sneer made covid worse’ (not the direct quote) for example was just silly, and if you look at the reaction of sneerclub at the time, also not in the realm of reality. (But yes, he will just say he said sneer by which he didn’t mean sneerclub but people like us in general. Which is obv not a thing I fully agree with, but good motte/bailey).
It also is interesting, as Scott, compared to the others we sneer at, never really seems to break containment so to speak. I have seen people talk about Aella on bsky for example; Scott Alexander, Eliezer, lesswrong, EA, etc all come up. But Scott almost never does. (Yes, the ‘untitled’ affair was public, but that was a decade ago, and 6 months before r/sneerclub was created (and long before I joined), and also the whole broaden-influence thing as you mentioned). And after all why should he, he is just a random professor; the only reason he is relevant for the broader picture is that he agrees with the AI doom stuff, and he gives the LW people some level of prestige (the only times I have brought him up is because he confirmed that Yarvin spends time personally emailing prestigious people like him, but that is about Moldy). He is prob the only one whose sneering is just contained to sneerclub/awful.systems (which is why he should stop reading sc, he prob should also ignore more blog comments and emails (block Yarvin’s email Scott, do it!)).
Lesswronger notices that all of the rationalists’ attempts at making an “aligned” AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate
Notably, the author doesn’t realize Capitalism is the root problem in misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.
>50 min read
>”why company has perverse incentives”
>no mention of capitalism
rationalism.mpeg
Others were alarmed and advocated internally against scaling large language models. But these were not AGI safety researchers, but critical AI researchers, like Dr. Timnit Gebru.
Here we see rationalists approaching dangerously close to self-awareness and recognizing their whole concept of “AI safety” as marketing copy.
i just wish that both sides have fun https://awful.systems/post/5488138 article https://ghostarchive.org/archive/dlw78
ah yes, that great mark of certainty and product security, when you have to unleash pitbulls to patrol the completely not dangerous park that everyone can totally feel at ease in
(and of course I bet the damn play is a resource exhaustion attack on critics, isn’t it)
I don’t think it’s a resource exhaustion attack as much as a combination of legitimate paranoia (the consequence of a worldview where only billionaires are capable of actual agency) and an attempt to impose that on reality by reverse-astroturfing any opposition by tying it to other billionaire AI bros.
Creator of NaCl publishes something even saltier.
“Am I being detained?” I scream as IETF politely asks me to stop throwing a tantrum over the concept of having moderation policy.
Does somebody have a rundown or something on DJB? All of the tantrum throwing has me confused over what his deal is.
noted for advancements in cryptography, and “stayed impartial” (iirc not quite defending, but also not acknowledging nor distancing) when the jacob appelbaum shit hit wider knowledge
probably about all you need to know in a nutshell
the most recent shit before this when I recall seeing his name pop up was when he was causing slapfight around Kyber (ML-KEM) in the cryptography spaces, but I don’t have links at hand
Great piece on previous hype waves by P. Ball
https://aeon.co/essays/no-suffering-no-death-no-limits-the-nanobots-pipe-dream
It’s sad, my “thoroughly researched” “paper” greygoo-2027 just doesn’t seem to have that viral x-factor that lands me exclusive interviews w/ the Times 🫠
Putting this into the current context of LLMs… Given how Eliezer still repeats the “diamondoid bacteria” line in his AI-doom scenarios, even multiple decades after Drexler was both thoroughly debunked and a slight inspiration for some real science, I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.
Eliezer came from the extropian newsgroups/mailinglists iirc. So it is quite connected.
Indeed great piece, good to document the older history of that stuff as well.
New Baldur Bjarnason: The melancholy of history rhyming, comparing the AI bubble with the Icelandic banking bubble, and talking about the impending fallout of its burst.
What is the Range Rover in this analogy? A common belief about the 2008 Iceland bubble, which may very well not be true but was widely reported, is that Iceland’s credit was used to buy luxuries like high-end imported cars; when the bubble burst, many folks supposedly committed insurance fraud by deliberately destroying their own cars which they could no longer afford to finance. (I might suggest that credit bubbles are fundamentally distinct from investment bubbles.)
By my guess, the servers and datacentres powering the LLMs will end up as the AI bubble’s Range Rover equivalent - they’re obscenely expensive for AI corps to build and operate, and are practically impossible to finance without VC billions. Once the bubble bursts and the billions stop rolling in, I expect the servers to be sold off for parts and the datacentres to be abandoned.
From the ChatGPT subreddit: Gemini offers to pay me for a developer to fix its mess
Who exactly pays for it? Google? Or does Google send one of their interns to fix the code? Maybe Gemini does have its own bank account. Wow, I really haven’t been keeping up with these advances in agentic AI.
it’s almost as funny as that one time a chatbot told a vibecoder to learn to code
Out: ‘getting paid in exposure’
In: ‘when you are done, just send your invoice to chatgpt’
Kind of tangential to the sneer sphere. TIL about the Gayfemboy malware
The trigger for activating the backdoor in Gayfemboy is the character string “meowmeow”.
Whisper meow meow to your femboy to get access to his backdoor… is this malware the blackhat equivalent of a shitpost?
Such sophisticated methods, and then they drop crypto miners.
Shamelessly posting a link to my skeet thread (skeet trail?) on my experience with a (mandatory) AI chatbot workshop. Nothing that will surprise regulars here too much, but if you want to share the pain…
https://bsky.app/profile/jfranek.bsky.social/post/3lxtdvr4xyc2q
I love it giving the temperature in Europe. Down to a decimal, even.
The blatant covering for the confabulated zip code is some peak boosterism. It knows what an address looks like and that some kind of postal code has to go there, and while it was pretty close I would still expect that to get returned to sender. Pretty close isn’t good enough.
Yeah, didn’t even cross their mind that it could be wrong, because it looked ok.
in what seems to be a very popular theme of “maybe we can just live off defense money” for tech outfits, oura is planning to manufacture in texas for simping to the DoD
I’m struggling to sneer it, it’s so fucking absurd
Found a Pivot to AI candidate in the wild: Pentagon Document: U.S. Wants to “Suppress Dissenting Arguments” Using AI Propaganda
I also found a call for ethics training in engineering, and someone’s horror story about ethics training alongside it.
Kind of generic: I am a researcher and recently started a third-party funded project where I won’t teach for a while. I kinda dread what garbage fire I’ll return to in a couple of years when I teach again, and how much AI slop will be entrenched on both the teachers’ and students’ sides.
DragonCon drops the ban hammer on a slop slinger. There was much rejoicing.
Btw, the vibes were absolutely marvelous this year.
Edit: a shrine was built to shame the perpetrator
https://old.reddit.com/r/dragoncon/comments/1n60s10/to_shame_that_ai_stand_in_artist_alley_people/