Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Whichever one of you did https://alignmentalignment.ai/caaac/jobs, well done, and many lols.
CAAAC is an open, dynamic, inclusive environment, where all perspectives are welcomed as long as you believe AGI will annihilate all humans in the next six months.
Alright, I can pretend to believe that, go on…
We offer competitive salaries and generous benefits, including no performance management because we have no way to assess whether the work you do is at all useful.
Incredible. I hope I get the job!
Make sure to click the “Apply Now” button at the bottom for a special treat.
Not really any more insane than any other white-collar corporate job I’ve seen in my life
Less insane and more honest, truly
Some poor souls who arguably have their hearts in the right place definitely don’t have their heads screwed on right, and are trying to do hunger strikes outside Google’s AI offices and Anthropic’s offices.
https://programming.dev/post/37056928 contains links to a few posts on X by the folks doing it.
Imagine being so worried about AGI that you thought it was worth starving yourself over.
Now imagine feeling that strongly about it and not stopping to ask why none of the ideologues who originally sounded the alarm bells about it have tried anything even remotely as drastic.
On top of all that, imagine being this worried about what Anthropic and Google are doing in the research of AI, hopefully being aware of Google’s military contracts, and somehow thinking they give a singular shit if you kill yourself over this.
And… where’s the people outside fucking OpenAI? Bets on this being some corporate shadowplay shit?
I mean, I try not to go full conspiratorial everything-is-a-false-flag, but the fact that the biggest AI company that has been explicitly trying to create AGI isn’t getting the business here is incredibly suspect. On the other hand, though, it feels like anything that publicly leans into the fears of evil computer God would be a self-own when they’re in the middle of trying to completely ditch the “for the good of humanity, not just immediate profits” part of their organization.
Didn’t OpenAI just file court documents claiming that their opposition is funded by competitors? Accusing someone else of what they themselves are doing seems to be a pretty popular strategy these days.
@bigfondue @YourNetworkIsHaunted every accusation is a confession!
I don’t know anything about the locations of any offices, but could it be that OpenAI just didn’t have any local offices? Asking them why not would be a good journalist question.
But otoh it is just three of them, and the second one’s photo gives off a weird vibe. Why is he smiling like it is a joke?

It’s two guys in London and one guy in San Francisco. In London there’s presumably no OpenAI office; in SF, you can’t be in two places at once, and Anthropic has more true believers/does more critihype.
Unrelated: a few minutes before writing this, a bona-fide cultist replied to the programming.dev post. A cultist with the handle “BussyGyatt @feddit.org”. Truly the dumbest timeline.
Lol at the critihype from the BussyGyatt person in that post. Come on, LLMs will not become AGI, and that leaves LLMs ‘predicting shutdowns will be bad for their goals’, which is only so because people like this have kept saying it and the LLMs have trained on it. If you were really worried about this, you’d stop feeding that data to it.
This feels like a symptom of liberals having a diluted incomplete understanding of what made past movements that utilized protest succeed or fail.
This is always what you get when your fundamental belief is “capitalism good”, so no matter how close you get to “and the problem is capitalism” you can never actually get there, like in a crazy version of edging.
What I’m saying is that libs are philosophical gooners
Word of warning: if you are squeamish, Charlie Kirk got shot, and closeup videos of him getting shot are all over social media and people have not put warnings up. Managed to avoid it myself, but be careful with autoplaying gifs, and use this link https://bsky.app/profile/nycsouthpaw.bsky.social/post/3lyiw3vi3yc2c for instructions on how to turn that off.
E: and it was a groyper, stop your bets ladies and gentlebots. The winner of the bet is acausalrobotgod! Again! What a predictive streak. (E2: I’m just making a joke here about betting markets and how ACRG would win them all because acausal magic; don’t bet on people’s lives and don’t use betting markets).
E: while I’m editing this post with corrections: turns out that Kirk was a lot worse than I thought, and he wasn’t just a garden variety propagandist like a Shapiro/Crowder etc. (yeah, I know they both also try to branch out into more media). He was doing a lot of shit work on the ground, and was one of those guys who was really influential with younger people. Read that this might also explain the extreme rightwing reaction, as he was seen by some as the next big political player. To me, a very online leftwinger from Europe, he was just the worst joke from a line of other propagandists, but I was wrong on that.
Thoughts and prayers, definitely no curses.
I have spent the last few hours looking through all my social media posts, making sure I delete any bad things I have said about witches.
The covens appreciate the overture.
Hecate left no crumbs
Irrationally annoyed at yanks incorrecting each other that this kind of shot could only be pulled off by a trained expert sniper. The Behind the Bastards guy agrees, but the replies are still stuffed with examples.
I know from experience that even mediocre conscripts shooting a gun for the first time in their life usually manage to land hits in a one foot diameter circle from 150 metres with iron sights on an intermediate cartridge rifle. It doesn’t take an elite marksman to hit a sitting man from 200 yards away, especially with a scope. Even if nervous and high on adrenaline, an average hunter, target shooter or (ex) military type would be more likely than not to hit a target of that size at that distance, assuming otherwise decent conditions.
Hell, the factory sights on an M16 are supposed to be set for zero elevation at 250 metres and the effective range for most assault rifles and their semi auto civilian variants is around 300 metres. To say you need to be a trained sniper to make this shot is like saying you need to be a professional racing driver to do 80 mph on a highway.
If there’s one thing you’d assume seppos know well, it’s shooting firearms, but some people still can’t help but spout dumb bullshit.
Yeah, I think the whole ‘quickly getting away without leaving a trace’ is more of a sign of some training. But this is made less likely by the whole FBI being filled with incompetent goons, podcasters, and racists, who are now playing ‘find the person of color’ and not doing the actual FBI things.
I mean I have no high opinion of groypers, so I don’t think they are well trained soldiers or something so take that into account. Exmil just has a higher willingness to kill people.
But yes, very good to point out thanks. Esp as Evans has experience in these things, and can generally be relied upon. (Apart from his views on 40k being too rosy).
We can prob rule out a Luigi style killer as there was no shellcasing with words on it found as far as I can tell.
Even in Luigi’s case, it took days to catch a guy who (allegedly, lol) killed a billionaire in the middle of Manhattan. Though very dashing, Mangione hardly seems like a navyseal elite supersoldier ninja hitman 47 either. Whatever Kirk’s killer’s motive, I see little reason to assume it was necessarily a pro job. All this tells us is that overfunding and militarization doesn’t equate to a more competent police force.
Yeah, and he had motive because of his intense pain; still, a weird Rationalist-adjacent type guy was not a likely suspect. And sadly we knew the latter already.
It was long, long ago, but I recall that when London put CCTVs everywhere, the chief of police there admitted it had not caused a drop in crime nor helped them catch criminals. But he still wanted to place more of them and get more money. Just a strange sort of welfare project for pervert police people.
E: and look, all that police power, and he gets caught because he admitted it to his dad.
also, I was at Bible study with Luigi at the time a CEO coincidentally had a bullet-involved collision, as were you
During the lead-of-unusual-speed incident?
Yep we were, we were talking about hiring jetskis. Strange how they never used the pictures of him on the jetski in the news like how they do with all the white guys who shoot up a place. Wait, that means they know he was innocent just like we do.
Reports are shell cases with messages have been found. My money is still on it being a false flag.
Yeah, to me the whole finding of casings makes no sense. Like if you put a message on them, leave them there. (also bolt your gun if you might want to shoot it again). Just odd. Also odd they don’t mention the messages on them directly. Which normally get plastered all over social media. (Which is bad btw, don’t spread extremist messaging without context, I really hope they finally learned this, and this isn’t a case of them hiding the message to make it sound more extreme than it was).
But yes, I spoke too soon. Anyway, as I’m the negative Cassandra, let me say this. We can rule out it being a Rationalist, we have not found a copy of HPMOR on the scene as far as I can tell.
E: update on the writing, might be wrong
Even if we had the shooter may have just been looking for something heavy enough to brace against for the shot. You can’t exactly carry a set of encyclopedias up to the top of the book depository or wherever without attracting attention.
You can’t exactly carry a set of encyclopedias up to the top of the book depository or wherever without attracting attention.
Not sure about that. I don’t want to talk about specifics, as I’d rather not have people know exactly where I live, but you could do a lot on the roofs of our uni without people noticing. Doors were not even locked.
But it makes sense for a shooter to bring something they know the height of, like their personally signed copy of HPMOR.
LOL ofc I spoke too soon, the “messages” were apparently random arrows
I’m hearing that the “messages” are mundane manufacturer markings misinterpreted
Fyi that manufacturer does not make .30-06 ammo. So it isn’t true. Story goes that the source for the whole messages story is Crowder, that he dropped it before the mainstream news. Not a reliable source, wtf media.
E: turns out I was wrong, sorry about that. But due to the casings we now know he was likely enough to be a groyper for me to call him one. I’ll update the first post with a link to my skeet about it. But the song he used was in a ‘groyper war’ playlist. The song didn’t make sense to me anyway.
crowder just mad that the title on the tent was “prove me wrong” and not “change my mind”
“One of our guys was murdered; it could have only been an elite Navy SEAL with over 300 confirmed kills!!!” them, probably
Don’t worry, Kelsey Piper managed to use it as an opportunity to be a bluecheck dipshit.
(via)
Yeah, and missing the point that a lot of people think some ideas are so shit they shouldn’t be debated ‘the right way’, esp not ideas about others who are not you, and stuff we have debated (and fought over) again and again already. It is like the IRA: Kirk needs to be lucky once, trans people/minorities/university professors(*) need to be lucky every time, again and again, forever. This is just the dumbest level of object level vs meta level thinking out there. When somebody enters the marketplace with sawdust they claim is flour every week, you toss them out and don’t let them in next time. And you should notice they only try to sell this ‘flour’ to the poor.
*: my second point: Piper is also very full of shit here. You know what happened to uni professors who debated him, corrected him, or were women/poc in his space? They got put on the TPUSA watchlist and got death and rape threats. He was part of the organisation that ran that database. So this imagined, idealized, platonic Kirk-shaped object isn’t even real. Hell, you can see in debates the tricks he pulls so his side thinks he wins. That creepy smile he makes after a point he knows is bullshit is there to knock people off balance, so they react oddly (as it is a really creepy smile) and don’t react to the dumb shit he just said. See this clip for example: https://www.youtube.com/shorts/Jrsy1aB6OSk (it cuts off before she talks about how creepy his smile is). Note he is lying about the fetus thing, and if this wasn’t a high stress debate and people had time to think, they would just reply with ‘so a dolphin fetus is a little dolphin?’. But that is professional debaters vs college students for you. This is not a man who was interested in truth or having his mind changed. And this smile is not some one-off; he does it in other places. But his audience is pre-primed to think they are the logic and reason people, so they consider somebody going ‘what the fuck did you just do with your face’ a way to dodge the argument, so in their eyes this is a win for him.
I find that all these debatebro types have some of these tricks, in addition to often controlling the mic. Peterson refuses to explain, Shapiro demands you accept his premise from the start ‘for the argument’ (yeah if you are right you are right, but the debate is about if you are right not ‘assume im right’). The ‘I refuse your question’ guy is simply somebody who noticed one of those debatebro tricks and refused to go along with it. (the patron saint, may your hair remain long, and your bluetooth have good connections forever). Note that this prob doesn’t work on the more experienced debatebros as they would have prepared an answer for ‘why can’t we both have lgbt people and economic stability’ and it gives them a way to spout hate and misinfo again. As all work is skilled work, being a bad faith debate bro propagandist is also a skill (which the Koch brothers paid Kirk well for).
Related to this “Those who walk away from debatebros”.
It’s also yet another case of privileged people somehow failing to understand that there is no “right” way to advocate for tearing families apart or purging people who don’t fit your ideal. Like, I’m not a fan of political violence but I’m also not going to act like this asshole was any less part of the problem because he ostensibly believed in respecting the state’s monopoly on violence as he advocated for that violence to be used against me and mine.
I’m at least enjoying the many comments calling her out, but damn she just doubles down even after being given many many examples of him being a far-right nationalist monster who engaged in attempts to outright subvert democracy.
Piper’s self-described many unpopular beliefs that the rest of society considers loathsome
If Piper ever starts to publish essays on what goals and policy positions she thinks make her or SlateScott “sincere centre-leftists” they are going to be a trip. Just the explanation why she feels more comfortable saying what she believes under Robert F. Kennedy Jr. and Big Balls than under Biden’s centrist technocrats would require a few doses of my favourite substance to get through.
Edit: I would also love to hear “so you agree that the talking head on Fox News who suggested executing the homeless is despicable, what about your friend Scott Alexander proposing to sterilize the poor and substance users before they receive help?”
JFC if he dies he’s the new Horst Wessel
According to this https://bsky.app/profile/esqueer.net/post/3lyj2o4cuvc2g , and I think a Fox News report, he is dead.
He was shot while he was trying to blame gun violence on trans people and gang violence basically. In a moment of weird poetry.
But I just posted this here to warn people. A tip if you see something traumatic: https://bsky.app/profile/baroness.bsky.social/post/3lyj32qx2os2f go play some videogames to not sear it into your mind.
Anyway, doubt they will make the diaper guy the new Horst Wessel.
E: an awful.systems relevant lol: https://bsky.app/profile/lasergiant.bsky.social/post/3lyixmkzfuc2r
he also said that kids should watch public executions https://www.newsweek.com/charlie-kirk-death-penalty-public-executions-1873073 and was shot directly under a giant banner that reads “PROVE ME WRONG”
technically it’s a school shooting so not sure why conservatives care at all about it. he died like he lived, moving goalposts
Yep, he sucked. Big question I have is why. He was a propagandist in the ‘free speech on campus’ wars, and they won that battle: unis are caving and banning free speech. Of all the targets he makes no sense to me (not that any of them do; assassination has a very poor track record, people still whine about Pim in the Netherlands, and pretend all violence comes from the left (despite reality proving otherwise)).
So considering his history with groypers, who got big partially attacking Kirk for not being extreme enough, wouldn’t be shocked if it is a groyper. Or an exmil who lost a kid in a shooting.
I mean why shoot the guy that looks like a budget steven crowder and a broken stretchy arms toy doll had a teleporter accident.
Imagine if it turns out that it wasn’t even politically motivated and he just owed a bunch of money or something.
i didn’t even think he was relevant anymore since 2020, he was like a 2017-era chud debatelord who relied entirely on gotchas. he did toe the party line tho, so maybe some divergent rightwinger didn’t like what he was doing for whatever reason. maybe it was a situation like with the trump shooter, a probable republican that didn’t particularly like the epstein link, or whatever other cause there might be involved
I don’t think he has been relevant; he was on some ‘1 X debates 20 !X’ shows, where he showed the creepiest smile after being wrong about Latin, and that was it. That was the extent of what I had heard about him.
Turns out he was relevant to the right however. He apparently kept pushing the Epstein link (I thought he was dismissing it), so much so that Laura Loomer decided a few days ago that she never wanted to hear from the guy again.
Just very weird, from what I can tell, of the people of his ilk; to too-online leftwingers he was a joke compared to a Shapiro/Crowder.
huh i honestly had no idea, and i guessed he was very replaceable in rw propaganda machine. i heard somewhere that shapiro was also supposed to be there
We’re already seeing some of the less circumspect fash call this their Reichstag Fire.
The video’s also all over Twitter, thanks to nonexistent moderation.
looks like we’ve got us our own superpredictor
But I just met 'er!
The story is awful enough, but the headline is making me want to punch someone.
https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly
I genuinely thought therapists were gonna avoid the psychosis-inducing suicide machine after seeing it cause psychosis and suicide. Clearly, I was being too optimistic.
nah they’re built different
The future is now, and it is awful. Would any still wonder why, I grow so ever mournful.
irl winced at this
Yeah, that headline and its writer can kick rocks.
Apparently the hacker who publicized a copy of the no-fly list was leaked an article containing Yarvin’s home address, which she promptly posted on Bluesky. Won’t link because I don’t think we’ve had the doxxing discussion, but it’s easily findable now.
I’m mostly posting this because the article featured this photo:
I was curious so I dug up the post and then checked property prices for the neighbourhood
$2.6m–$4.8m
being thiel’s idea guy seems to pay pretty well
Certainly a lot more money in that, than scouring github for code you can put into urbit.
The Wall Street Journal came out with a story on “conspiracy physics”, noting Eric Weinstein and Sabine Hossenfelder as examples. Sadly, one of their quoted voices of sanity is Scott Aaronson, baking-soda volcano of genocide apologism.
Somehow, ~~Palpatine returned~~ Scott came off as a voice of reason. Behold the power of this fully selective quotation.
Being compared to whackjobs with a worse grip on reality than him definitely helped.
Since appearing on Piers Morgan’s show, Eric Weinstein has taken to expounding additional theories about physics. Peer review was created by the government, working with Ghislaine Maxwell’s father, to control science, he said on “Diary of a CEO,” one of the world’s most popular podcasts. Jeffrey Epstein was sent by an intelligence agency to throw physics off track and discourage space exploration, keeping humanity trapped in “the prison built by Einstein.”
Heartbreaking! Weinstein isn’t fully wrong. Maxwell’s daddy was Robert Maxwell, who did indeed have a major role in making Springer big and kickstarting the publish-or-perish model, in addition to having incredibly tight Mossad ties; the corresponding Behind the Bastards episodes are subtitled “how Ghislaine Maxwell’s dad ruined science.” Epstein has been accused of being a Mossad asset tasked with seeking out influential scientists like Marvin Minsky to secure evidence for blackmail and damage their reputations. As they say on Reddit, everybody sucks here.
That’s just yer bog-standard “the best lie has a seed of truth”, ainnit?
(Peer review in its modern form was adopted gradually, with a recognizable example in 1831 from the same William Whewell who coined the word scientist. It displaced the tradition of having the editor of a journal decide everything himself, so whatever its flaws, it has broadened the diversity of voices that influence what gets officially published.)
Epstein was a sophon controlled trisolaran asset working to prevent crucial development of physics!!! j/k
First domino: US government invents peer review
Last domino: Richard Stallman successfully kamikazes his reputation for good after multiple close attempts over the years
Richard Stallman successfully kamikazes his reputation for good after multiple close attempts over the years
He still maintains a solid reputation with FOSS freaks, fascists and pedophiles to this day. Given the Venn diagram of these three groups is a circle, this isn’t particularly shocking.
Thanks man, I love being lumped in with fascists and pedos.
I asked the other day whether they’ve actually spoken with these people that they keep posting such takes about, and thus far my presumption is “they haven’t”. posts like the above reinforce that view
I often love your stuff, but this ain’t it.
Fuck, your lack of history is depressing sometimes. That Venn diagram is well-pointed, even among people who have met RMS, and the various factions do not get along with each other. For a taste, previously on Lobsters you can see an avowed FLOSS communist ripping the mask off of a Suckless cryptofascist in response to a video posted by a recently-banned alt-right debate-starter.
Education report calling for ethical AI use contains over 15 fake sources
womp, and wait for it, womp
The report claims it’s about ethical AI use, but all I see is evidence that AI is inherently unethical, and an argument for banning AI from education forever.
university where the professor physically threatened me and plagiarized my work called to ask if i was willing to teach a notoriously hard computer science class (that i have taught before to stellar evals as a phd student[1]). but they had to tell me that i was their last choice because they couldn’t find a full professor to teach it (since i didn’t finish my phd there because of said abusive professor). on top of that, they offered me a measly $6,000 usd for the entire semester with no benefits, and i would have to pay $500 for parking.
should i just be done with academia? enrollment deadlines for the spring are approaching and i’m wondering if i should just find a “regular job”, rather than finishing a PhD elsewhere, especially given the direction higher ed is going in the us.
evals are bullshit for measuring how well students actually learn anything, but are great for measuring the stupid shit business idiots love, like whether students will keep paying tuition. also they can be used to explain the pitfalls of using likert scales carelessly, as business idiots do. ↩︎
Every time I learn one single thing about how academia works in the USA I want to commit unspeakable acts of violence
imo academia in the us selects for self-serious mini-tyrants. people unable to sleaze their way to an equivalent position in the job market, so they go with a position where their subordinates have no agency or recourse for inappropriate behavior and are entirely dependent on them for their funding and their career progress[1]. the university system here seems to only care if a professor brings in grant money — if so, they can do whatever they want.
e.g. a year ago a student of mine with a documented, legally required accommodation was just straight denied it by a professor in my department. this poor student had to go through some equal opportunity claim just to have his accommodation met, and the professor is still there, having suffered no consequences.
maybe i should apply to a phd program outside of the us. i speak spanish (but slowly, y como un gringo) 🤷
😭
by standing up to my piece of shit advisor, i have left with just a master’s degree after five years, despite definitely having enough research completed under this advisor to leave with a phd. i chose to stand up for myself and try to end this cycle instead of adding an acronym to my title. arguably a mistake, idk, there is a robert frost poem about this. ↩︎
I got paid $2700 for teaching a semester. Adjuncting sucks. It doesn’t get better.
Even if you finished, all the postdocs I know had their NSF starter kit yanked.
sorry my friend, that is really brutal. wishing you the best. ❤
simon willison, the self-styled reasonable ai researcher, finds it hilarious and a good use of money to throw $14,000 at claude to create a useless programming language that doesn’t work.
good man simon willison!
I mean, it’s still just funny money, seeing as the creator works for some company that resells tokens from Claude, but very few people are stepping back to note the drastically reduced expectations of LLMs. A year ago, it would have been plausible to claim that a future LLM could design a language from scratch. Now we have a rancid mess of slop, and it’s an “art project”, and the fact that it’s even ersatz internally coherent is treated as a great success.
Willison should just have let this go, because it’s a ludicrous example of GenAI, but he just can’t help himself defending this crap.
That’s a good point that I’m not sure many people are talking about. There’s a shift happening where while I’m still seeing way too much “you’re just not prompting it right”, it has lessened. Now I’m seeing a lot more “well of course it can’t do that” even from the believers.
They’re still crying “the problem is that you’re using it wrong”, and blaming on the end user but it seems to be quietly shifting so they’re now calling people dumb for ever believing the hype that they were the ones pushing to begin with.
Good sneer from user andrewrk:
People are always saying things like, “surprisingly good” to describe LLM output, but that’s like when a 5 year old stops scribbling on the walls and draws a “surprisingly good” picture of the house, family, and dog standing outside on a sunny day on some construction paper. That’s great, kiddo, let’s put your programming language right here on the fridge.
Top-tier from Willison himself:
The learning isn’t in studying the finished product, it’s in watching how it gets there.
Mate, if that’s true, my years of Gentoo experience watching compiler commands fly past in the terminal means I’m a senior operating system architect.
which naturally leads us to: having to fix a portage overlay ~= “compiler engineer”
wonder what simonw’s total spend (direct and indirect) in this shit has been to date. maybe sunk cost fallacy is an unstated/un(der?)accounted part in his True Believer thing?
maybe sunk cost fallacy is an unstated/un(der?)accounted part in his True Believer thing?
Probably. Beyond throwing a shitload of cash into the LLM money pit, Willison’s completely wrapped his public image up in being an AI booster, having spent years advocating for AI and “learning” how to use it.
If he admits he’s wrong about LLMs, he has to admit the money and time he spent on AI was all for nothing.
if you call him an AI promoter he cites his carefully organised blog posts of concerns
meanwhile he was on the early access list for GPT-5
he’s claiming he takes no llm money with the exception of specific cases, but he does accept api credits and access to early releases, which aren’t payments only if you think of payments in the extremely narrow sense of real money being exchanged.
this would in no way stand if he were, say, a journalist.
Sigh. Love how he claims it’s worth it for “learning”…
We already have a thing for learning, it’s called “books”, and if you want to learn compiler basics, $14000 could buy you hundreds of copies of the dragon book.
$14,000 could probably still buy you a lesser Porsche in decent shape, but we should praise this brave pioneer for valuing experiences over things, especially at the all-important boundary of human/machine integration!
(no, I’m not bitter at missing the depreciation nadir for 996-era 911s, what are you talking about)
I’ve learned so much langdesign and stuff over the years simply by hanging around plt nerds, didn’t even need to spend for a single dragon book!
(although I probably have a samizdat copy of it somewhere)
“Engineering a Compiler” is a better read than the Dragon Book anyway
oi no spoilers ;p
Spoiler: there are no actual dragons in either book 😞
First there’s no actual wizards in my computer to install and setup software, and now this? If someone tells me that Roko doesn’t even have an actual basilisk I don’t know what I’m gonna do!
That the useless programming language is literally called “cursed” is oddly fitting, because the continued existence of LLMs is a curse upon all of humanity
OT: in happier news that’s now doomed to be totally overshadowed: https://www.cbc.ca/news/science/mars-potential-life-1.7630035
(I’m a massive astrobio/SETI nerd)
Can you recommend any resources on those topics?
I can drop some books when I get off work! The SETI subreddit is pretty good about posting the latest studies too.
https://www.amazon.ca/Elusive-Wow-Searching-Extraterrestrial-Intelligence/dp/0983958440 https://www.amazon.ca/Astrobiology-Understanding-Universe-Charles-Cockell/dp/1119550351 https://mitpress.mit.edu/9780262548649/extraterrestrial-languages/
These are all good. Elusive Wow might be harder to find, but it’s worth looking up the author’s career (it was his hobby).
I’ve noticed Rationalists otoh tend to repeat whatever Anders Sandberg says and are ideologically wedded to a dead universe view of things.
Re dead universe.
That fixes the great filter problem for them.
(massive bong rip) the aliens already came here and put us all in the Matrix, dude
exhales not sure this is weed dude.
These people are no fun at all.
Considering other life might also be unattractive as it could reduce the feeling of their own importance.
When it started in ’06, this blog was near the center of the origin of a “rationalist” movement, wherein idealistic youths tried to adapt rational styles and methods. While these habits did often impress, and bond this community together, they alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders. - Robin Hanson, 2025
I hear that even though Yud started blogging on his site, and even though George Mason University type economics is trendy with EA and LessWrong, Hanson never identified himself with EA or LessWrong as movements. So this is like Gabriele D’Annunzio insisting he is a nationalist not a fascist, not Nassim Nicholas Taleb denouncing phrenology.
He had me in the first half; I thought he was calling out the rationalists’ problems (even if dishonestly disassociating himself from them). But then his recommended solution was prediction markets (a concept which rationalists have in fact been trying to play around with, albeit at a toy-model level with fake money).
Also a concept that Scott Aaronson praised Hanson for.
(Crediting the “Great Filter” to Hanson, like Scott Computers there, sounds like some fuckin’ bullshit to me. In Cosmos, Carl Sagan wrote, “Why are they not here? There are many possible answers. Although it runs contrary to the heritage of Aristarchus and Copernicus, perhaps we are the first. Some technical civilization must be the first to emerge in the history of the Galaxy. Perhaps we are mistaken in our belief that at least occasional civilizations avoid self-destruction.” And in his discussion of abiogenesis: “Life had arisen almost immediately after the origin of the Earth, which suggests that life may be an inevitable chemical process on an Earth-like planet. But life did not evolve beyond blue-green algae for three billion years, which suggests that large lifeforms with specialized organs are hard to evolve, harder even than the origin of life. Perhaps there are many other planets that today have abundant microbes but no big beasts and vegetables.” Boom! There it is, in only the most successful pop-science book of the century.)
Most famously, Robin is […] also the inventor of futarchy
A futarchy, you say? Tell me more, Robin Hanson
I noticed that Hanson speculated that “most of the Great Filter is most likely to be explained by […] the steps in the biological evolution of life and intelligence”, and then lied by omission about Sagan’s position. He said that Sagan appealed to “social science” and believed that the winnowing effect is civilizations blowing themselves up with nukes. He cites an obscure paper from 1983, while ignoring the, again, most successful pop-science book of the century.
Honestly Hanson is so awful the rationalists almost make him look better by association.
He’s the one that used the phrase “silent gentle rape”? Yeah, he’s at least as bad as the worst evo-psych pseudoscience misogyny posted on lesswrong, with the added twist he has a position in academia to lend him more legitimacy.
I started reading his post with that title to refresh myself. Just to get your feet wet:
DEC 01, 2010
Added Oct ’13: <insert content warning here>
Man, what happened in the three years it took for a content warning?
Anyway I skimmed it, the rest of the post is a huge pile of shit that I don’t want to read any more of, I’m sure it’s been picked apart already. But JFC.
The Manifest networking event in Berkeley combines prediction markets, race cranks, EA, and LessWrong. Scott Alexander likes prediction markets, does Yud?
To add to blakestacey’s answer, his fictional worldbuilding concept, dath ilan (which he treats like rigorous academic work to the point of citing it in tweets), uses prediction markets in basically everything, from setting government policy to healthcare plans to deciding what restaurant to eat at.
“We predescribed our methodology in enough advance detail for Polymarket to run a real-money prediction market, and traders trusted us enough for the market to be liquid” would be overwhelmingly more credible than “we published our results in a big-name science journal”.
So Hanson is dissing one of the few movements that supports his pet contrarian policy? After the Defence Department lost interest the only people who like prediction markets seem to be LessWrongers / EAs / tech libertarians / crypto bros / worshippers of Friend Computer.
the only people who like prediction markets […]
Apparently Donald Trump Jr. has found his way into the payroll of a couple of the bigger prediction markets, so they seem to be doing their darndest to change that.
Every tweet in that thread is sneerable, whether from failing to understand the current scientific process, vastly overestimating how easily cutting-edge research can be turned into cleanly resolvable predictions, or assuming prediction markets are magic.
Pretty easy to look at actually-existing instances and note just how laughable “traders trusted us enough for the market to be liquid” is.
This is just another data point begging what I believe to be the most important question an American can ask themselves right now: why be a sucker?
assuming prediction markets are magic
Bet it’s more like assuming it will incentivize people with magical predicting genes to reproduce more so we can get a kwisatz haderach to fight AI down the line.
It’s always dumber than expected.
I deeply regret the posts I made, in the past, proclaiming LessWrong as amazing.
They do still have a decent article here and there, but that’s like digging for strawberries in a pile of shit. Even if you find one, it won’t be great.
We have some threads of Vaccinations in Book/Article Form which try to share good pop science and textbooks without the cult shit and Dunning-Kruger. People who think they know everything and are mysteriously underemployed tend to have the most time to post though.
It is pretty good as a source for science fiction ideas. I mean, lots of their ideas originate from science fiction, but their original ideas would make fun fantasy sci-fi concepts. Like, looking at their current front page… https://www.lesswrong.com/posts/WLFRkm3PhJ3Ty27QH/the-cats-are-on-to-something cats deliberately latching on to humans as the laziest way of advancing their own values across the future seems like a solid point of fantasy worldbuilding…
GoToSocial recently put up a code of conduct that openly barred AI-“assisted” changes and fascist/capitalist involvement, prompting some concern trolling on the red site.
Got a promptfondler trying to paint basic human decency as ridiculous, and a Concerned Individual™ who’s pissed at GoToSocial refusing to become a Nazi bar.
the funny thing is we’ve had that CoC for literally years https://codeberg.org/superseriousbusiness/gotosocial/pulls/2090
it only kicked off when we kicked a user off our repo for trying to submit LLM-generated code, prompting the icecubes client dev to throw a shit-fit over it.
Yet another example of how acceptance of GenAI is increasingly coded as right wing
It also led to this: https://mastodon.social/@dimillian/115151813462684558
New Loser Lanyard (ironically called the Friend) just dropped, a “chatbot-enabled” necklace which invades everyone’s privacy and provides Internet reply “commentary” in response. As if to underline its sheer shittiness, WIRED has reported that even other promptfondlers are repulsed by it, in a scathing review that accidentally sneers its techbro shithead inventor:
If you’re looking for some quick schadenfreude, here’s the quotes on Bluesky.
Should’ve called it The Adversary
Nah, call it the PvP Tag.
These things look dorky as fuck, wearing them is a moral failing, and people (rightfully) treat it as grounds to shit on you, might as well lean into the “shithead nerd who ruined everything” vibe with some gratuitous gaming terminology, too.
Damn, I was hoping someone would finally recognize the true value of the ASCII-art goatse that used to show up on Slashdot all the time, before this inevitably came to pass
Now that his new book is out, Big Yud is on the interview circuit. I hope everyone is prepared for a lot of annoying articles in the next few weeks.
Today he was on the Hard Fork podcast with Kevin Roose and Casey Newton (didn’t listen to it yet). There’s also a milquetoast profile in the NYT written by Kevin Roose, where Roose admits his P(doom) is between 5 and 10 percent.
Siskind did a review too, basically gives it the ‘their hearts in the right place but… [read AI2027 instead]’ treatment. Then they go at it a bit with Yud in the comments where Yud comes off as a bitter dick, but their actual disagreements are just filioque shit. Also they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior brained humans to fix our shit is the way to go.
https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154920454
https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154927504
Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it rather than being scared shitless of MAD, so AI non-proliferation by presumably appointing a rationalist Grand Inquisitor in charge of all human scientific progress is an obvious solution.
Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it
This is his claim about everything, including how we got gay rights. Real if all you have is a hammer stuff.
Also they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior brained humans to fix our shit is the way to go.
This century deserves a better class of thought-criminal
Yud: “That’s not going to asymptote to a great final answer if you just run them for longer.”
Asymptote is a noun, you git. I know in the grand scheme of things this is a trivial thing to be annoyed by, but what is it with Yud’s weird tendency to verbify nouns? Most rationalists seem to emulate him on this. It’s like a cult signifier.
It’s also inherently-begging-the-question-silly, like it assumes that the Ideal of Alignment™, can never be reached but only approached. (I verb nouns quite often so I have to be more picky at what I get annoyed at)
They think Yud is a world-historical intellect (I’ve seen claims on twitter that he has an IQ of 190 - yeah, really) and that by emulation a little of the old smartness can rub off on them.
The normal max of an IQ test is ~160, and from what I can tell nobody tests above that, basically because it is not relevant. (And I assume testing problems and variance become too big a statistical problem at that level.) Not even sure how rare a 190 IQ would be statistically, prob laughably rare.
I don’t think these people have a good handle on how stuff actually works.
For a snicker I looked it up: https://iqcomparisonsite.com/iqtable.aspx
One in 100 million. So he would be in the top 80 smartest people alive right now. Which includes third world, children, elderly etc.
190IQ is when you verb asymptote to avoid saying ‘almost’.
that’s in practical terms meaningless, but just looking at the statistics of it, an iq on the order of 190 would mean 1 in a billion (1E9), per ever reliable rationalwiki https://rationalwiki.org/wiki/High_IQ_society
It’s possible someone specifically picked the highest IQ that wouldn’t need a second planet earth to make the statistics work.
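For what it’s worth, the two figures upthread aren’t even in conflict: a back-of-envelope check under a plain normal model (mean 100; real tests don’t resolve anywhere near this far out) shows that “1 in 100 million” assumes a standard deviation of 16 and “1 in a billion” assumes 15. A quick sketch:

```python
# Tail probability of IQ >= 190 under a normal model with mean 100.
# math.erfc is used instead of 1 - cdf to avoid precision loss this far
# out in the tail.
import math

def tail(iq: float, sd: float) -> float:
    """P(IQ >= iq) for a normal distribution with mean 100 and the given SD."""
    z = (iq - 100) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# SD 16 (some older tests): z = 5.625, roughly 1 in 100 million
print(f"SD 16: 1 in {1 / tail(190, 16):,.0f}")
# SD 15 (most modern tests): z = 6.0, roughly 1 in a billion
print(f"SD 15: 1 in {1 / tail(190, 15):,.0f}")
```

So picking the SD-16 convention conveniently shaves an order of magnitude off the absurdity.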
assuming that nuclear nonproliferation is gonna hold up indefinitely for any reason is some real fukuyama’s end of history shit
let alone “because it’s Rational™ thing to do”, it’s only in rational interest of already-nuclear states to keep things this way. couple of states that could make a good point for having nuclear arsenal and having capability to manufacture it are effectively dissuaded from this by american diplomacy (mostly nuclear umbrella for allies and sanctions or fucking with their facilities for enemies). with demented pedo in chief and his idiot underlings trying their hardest to undo this all, i really wouldn’t be surprised if, say, south korea decides to get nuclear
Trying to figure out if that Siskind take comes from a) a lack of criticality and/or the ability to read subtext or b) some ideological agenda to erase the role of violence (threats of violence are also violence!) in change happening, or both
I mean, he has admitted to believing a bunch of eugenics-y things that would be inarguably terrible if implemented by force, and maintaining this weird fig leaf that we could do it all voluntarily feels less blatantly dystopian. He has also admitted to being dishonest about his actual beliefs in his writing in order to advance those ideas, presumably in the same way that hardcore neonazis viewed Alex Jones and the “globalists” crowd as useful stepping stones for getting people into the right conspiratorial mindset so that they can just shift the target from “globalists” to “the Jews”.
When you are running a con like crypto or chatbot companies, it helps to know someone who is utterly naive and can’t stop talking about whatever line you feed him. If this were the middle ages Kevin Roose would have an excellent collection of pigges bones and scraps of linen that the nice friar promised were relics of St Margaret of Antioch.