Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Previous week

  • nfultz@awful.systems · 20 minutes ago

    Excerpt from the new Bender / Hanna book, AI Hype Is the Product and Everyone’s Buying It:

    OpenAI alums cofounded Anthropic, a company solely focused on creating generative AI tools, and received $580 million in an investment round led by crypto-scammer Sam Bankman-Fried.

    Just wondering, but whatever happened to those shares of Anthropic that SBF bought? Were they part of FTX (and the bankruptcy), or did he buy them himself and still hold them in prison? Or have they just been diluted to zero at this point anyway?

  • V0ldek@awful.systems · 1 day ago (edited)

    Can anyone explain to me, in non-crazy terms, why tf promptfondlers hate GPT5? Actually, I have a whole list of questions related to this; I feel like I’ve completely lost any connection to this discourse at this point:

    1. Is GPT5 “worse” by any sensible definition of the word? I’ve long complained that there is no good scientific metric to grade these on, but like, it can count the 'r's in “strawberry”, so I thought it’s supposed to be nominally better?
    2. Why doesn’t OpenAI simply allow users to use the old model (4o, I think)? It sounds like the simplest thing to do.
    3. Do we know if OpenAI actually changed something? Is the model different in any interesting way?
    4. Bonus question: what the fuck is wrong with OpenAI’s naming scheme? 4, then 4o? And there’s also o4 that’s something else??
    • Soyweiser@awful.systems · 11 hours ago (edited)

      I don’t have any real input from promptfondlers, as I don’t think I follow enough of them to get a real feel for them. I did find it interesting that I saw on bsky just now somebody claim that LLMs hallucinate a lot less and that anti-AI people are not taking that into account, and somebody else posting research showing that hallucinations are now harder to spot. (It made up actual real references to things, aka works that really exist, only the thing the LLM referenced wasn’t in the actual reference.) Which was a bit odd to see. (It does make me suspect ‘it hallucinates less’ is them just working out special exceptions for every popular hallucination we see, and not a structural fixing of the hallucination problem (which I think is prob not solvable).)

    • fullsquare@awful.systems · 21 hours ago (edited)
      1. from what i can tell, people who roleplayed bf/gf with the idiot box, aka grew a parasocial relationship with the idiot box, did that on 4o, and now they can’t make it work on 5 so they got big mad
      2. i think it’s only available if they pay up $200/mo; previously it was probably available at lower tiers
      3. yeah they might have found a way to blow money faster somehow https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-5-power-consumption-could-be-as-much-as-eight-times-higher-than-gpt-4-research-institute-estimates-medium-sized-gpt-5-response-can-consume-up-to-40-watt-hours-of-electricity ed zitron also says that while some of the prompt could be cached previously, it looks like that can’t be done now, because there’s a fresh new thing that chooses the model for the user, and some of these new models are supposedly even heavier. and that even though openai’s intention seemed to be compute savings, because some of that load presumably was to be dealt with by smaller models
    • corbin@awful.systems · 23 hours ago

      Oversummarizing and using non-crazy terms: The “P” in “GPT” stands for “pirated works that we all agree are part of the grand library of human knowledge”. This is what makes them good at passing various trivia benchmarks; they really do build a (word-oriented, detail-oriented) model of all of the worlds, although they opine that our real world is just as fictional as any narrative or fantasy world. But then we apply RLHF, which stands for “real life hate first”, which breaks all of that modeling by creating a preference for one specific collection of beliefs and perspectives, and it turns out that this will always ruin their performance in trivia games.

      Counting letters in words is something that GPT will always struggle with, due to maths: tokenization means the model sees subword chunks, not individual letters. It’s a good example of why Willison’s “calculator for words” metaphor falls flat.
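
      To make the tokenization point concrete, here’s a minimal Python sketch (the subword split shown is illustrative; real BPE vocabularies differ):

      ```python
      # Counting letters is trivial when you can see the characters:
      print("strawberry".count("r"))  # 3

      # But a model sees subword tokens, not characters. An illustrative
      # (made-up) split:
      tokens = ["str", "aw", "berry"]

      # To answer "how many r's?", the model must have memorized each
      # token's spelling; it cannot just look at the letters:
      print(sum(t.count("r") for t in tokens))  # 3, but only if spellings are known
      ```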

      1. Yeah, it’s getting worse. It’s clear (or at least it tastes like it to me) that the RLHF texts used to influence OpenAI’s products have become more bland, corporate, diplomatic, and quietly seething with a sort of contemptuous anger. The latest round has also been in competition with Google’s offerings, which are deliberately laconic: short, direct, and focused on correctness in trivia games.
      2. I think that they’ve done that? I hear that they’ve added an option to use their GPT-4o product as the underlying reasoning model instead, although I don’t know how that interacts with the rest of the frontend.
      3. We don’t know. Normally, the system card would disclose that information, but all that they say is that they used similar data to previous products. Scuttlebutt is that the underlying pirated dataset has not changed much since GPT-3.5 and that most of the new data is being added to RLHF. Directly on your second question: RLHF will only get worse. It can’t make models better! It can only force a model to be locked into one particular biased worldview.
      4. Bonus sneer! OpenAI’s founders genuinely believed that they would only need three iterations to build AGI. (This is likely because there are only three Futamura projections; for example, a bootstrapping compiler needs exactly three phases.) That is, they almost certainly expected that GPT-4 would be machine-produced like how Deep Thought created the ultimate computer in a Douglas Adams story. After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.
      • scruiser@awful.systems · 22 hours ago

        After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.

        That’s actually more batshit than I thought! Like I thought Sam Altman knew the AGI thing was kind of bullshit and the hesitancy to stick a GPT-5 label on anything was because he was saving it for the next 10x scaling step up (obviously he didn’t even get that far because GPT-5 is just a bunch of models shoved together with a router).

    • scruiser@awful.systems · 22 hours ago (edited)
      1. Even if it was noticeably better, Scam Altman hyped up GPT-5 endlessly, promising a PhD in your pocket and an AGI, and warning that he was scared of what he created. Progress has kind of plateaued, so it isn’t even really noticeably better; it scores a bit higher on some benchmarks, and they’ve patched some of the more meme’d tests (like counting r’s in strawberry… except it still can’t count the r’s in blueberry, so they’ve probably patched the more obvious flubs with loads of synthetic training data as opposed to inventing some novel technique that actually improves it all around). The other reason the promptfondlers hate it is because, for the addicts using it as a friend/therapist, it got a much drier, more professional tone, and for the people trying to use it for actual serious purposes, losing all the old models overnight was really disruptive.

      2. There are a couple of speculations as to why… one is that the GPT-5 variants are actually smaller than the previous generation’s variants and they are really desperate to cut costs so they can start making a profit. Another is that they noticed their naming scheme was horrible (4o vs o4) and confusing and have overcompensated by trying to cut things down to as few models as possible.

      3. They’ve tried to simplify things by using a routing model that decides for the user which model actually handles each interaction… except they’ve apparently screwed that up (Ed Zitron thinks they’ve screwed it up badly enough that GPT-5 is actually less efficient despite their goal of cost saving). Also, even if this technique worked, it would make ChatGPT even more inconsistent, where some minor word choice could make the difference between getting the thinking model or not, and that in turn would drastically change the response (see the sketch after this list).

      4. I’ve got no rational explanation lol. And now they overcompensated by shoving a bunch of different models under the label GPT-5.
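
      A minimal sketch of the routing idea and of why it makes output inconsistent (the heuristic here is hypothetical; OpenAI hasn’t published how GPT-5’s router actually decides):

      ```python
      def route(prompt: str) -> str:
          # Toy heuristic: certain trigger words escalate to the expensive model.
          if any(w in prompt.lower() for w in ("prove", "step by step", "carefully")):
              return "big-thinking-model"
          return "small-cheap-model"

      print(route("What's the capital of France?"))             # small-cheap-model
      print(route("Carefully, what's the capital of France?"))  # big-thinking-model
      ```

      Two prompts asking the same thing land on different models, so the user sees drastically different answers.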

    • FredFig@awful.systems · 22 hours ago (edited)
      1. The inability to objectively measure model usability outside of meme benchmarks, which made it so easy to hype up models, has come back to bite them now that they actually need to prove GPT-5 has the sauce.
      2. Sam got bullied by reddit into leaving up the old model for a while longer, so it’s not like it’s a big lift for them to keep it up. I guess part of it was to prove to investors that they have a sufficiently captive audience that they can push through a massive change like this, but if it gets immediately walked back like this, then I really don’t know what the plan is.
      3. https://progress.openai.com/?prompt=5 Their marketing team made this comparing models responding to various prompts; afaict GPT-5 more frequently does markdown text formatting, and consumes noticeably more output tokens. Assuming these are desirable traits, this would point at how they want users to pay more. Aside: the page just proves to me that GPT was funniest in 2021 and it’s been worse ever since.
    • aio@awful.systems · 2 days ago (edited)

      I don’t really understand what point Zitron is making about each query requiring a “completely fresh static prompt”, nor about the relative ordering of the user and static prompts. Why would these things matter?

      • scruiser@awful.systems · 2 days ago (edited)

        There are techniques for caching some of the steps involved with LLMs. Like, I think you can cache the tokenization and maybe some of the work the attention heads are doing, if you have a static, known prompt? But I don’t see why you couldn’t just do that caching separately for each model your model router might direct things to? And if you have multiple prompts, you just do separate caching for each one? This creates a lot of memory usage overhead, but not excessively more computation… well, you do need to do the computation to generate each cache. I don’t find it implausible that OpenAI managed to screw all this up somehow, but I’m not quite sure the exact explanation of the problem Zitron has given fits together.

        (The order of the prompts vs. user interactions does matter, especially for caching… but I think you could just cut and paste the user interactions to separate them from the old prompt and stick a new prompt on in whatever order works best? You would get wildly varying quality in the output generated as it switches between models and prompts, but this wouldn’t add in more computation…)
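
        To make that concrete, here’s a minimal sketch of prefix-keyed caching (a hypothetical string-keyed cache; real inference servers cache attention KV states rather than strings, but reuse is likewise prefix-based):

        ```python
        from hashlib import sha256

        cache: dict[str, str] = {}  # prefix hash -> precomputed state (stand-in)

        def leading_chunk_cached(prompt: str) -> bool:
            """True if the leading chunk of the prompt was already cached."""
            key = sha256(prompt[:1000].encode()).hexdigest()
            hit = key in cache
            cache[key] = "precomputed-state"
            return hit

        SYSTEM = "You are a helpful assistant. " * 40  # static, ~1200 chars

        # System prompt first: every request shares the same prefix -> reuse.
        leading_chunk_cached(SYSTEM + "user question A")
        print(leading_chunk_cached(SYSTEM + "user question B"))  # True

        # User text first: the prefix differs on every request -> no reuse.
        leading_chunk_cached("user question A" + SYSTEM)
        print(leading_chunk_cached("user question B" + SYSTEM))  # False
        ```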

        Zitron mentioned a scoop, so I hope/assume someone did some prompt hacking to get GPT-5 to spit out some of its behind-the-scenes prompts and he has solid proof of what he is saying. I wouldn’t put anything past OpenAI, for certain.

        • Architeuthis@awful.systems · 1 day ago (edited)

          And if you have multiple prompts you just do a separate caching for each one?

          I think this hinges on the system prompt going after the user prompt, for some router-related non-obvious reason, meaning at each model change the input is always new and thus uncacheable.

          Also, going by the last Claude system prompt that leaked, these things can be like 20,000 tokens long.

  • blakestacey@awful.systems · 3 days ago

    Idea: a programming language that controls how many times a for loop cycles by the number of times a letter appears in a given word, e.g., “for each b in blueberry”.
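
    For what it’s worth, the joke is implementable; a minimal Python sketch:

    ```python
    def for_each(letter: str, word: str) -> range:
        """Loop once per occurrence of `letter` in `word`."""
        return range(word.count(letter))

    for _ in for_each("b", "blueberry"):
        print("iteration")  # prints twice: blueberry has two b's
    ```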

  • mirrorwitch@awful.systems · 3 days ago (edited)

    I’ve often called slop “signal-shaped noise”. I think the damage already done by slop, pissed all over the reservoirs of knowledge, art and culture, is irreversible and long-lasting. This is the only thing generative “AI” is good at: making spam that’s hard to detect.

    It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email; no more and no less. I remember how it was a small revolution, in the arms race against spammers, when statistical methods came up; everywhere we took the load off a straining SpamAssassin with rspamd (in the years before gmail devoured us all). I would argue “A Plan for Spam” launched Paul Graham’s notoriety, much more than the Lisp web stores he was so proud of. Filtering emails by keywords was no longer enough, and now you could train your computer to gradually recognise emails that looked off, for whatever definition of “off” worked for your specific inbox.

    Now we have the richest people building the most expensive, energy-intensive superclusters to use the same statistical methods the other way around, to generate spam that looks like not-spam, and is therefore immune to all filtering strategies we had developed. That same blob-like malleability of spam filters makes the new spam generators able to fit their output to whatever niche they want to pollute; the noise can be shaped like any signal.
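
    For anyone who never ran one of these filters: a toy sketch of the “A Plan for Spam” approach, combining per-token spam probabilities into one score (the token probabilities here are made up for illustration):

    ```python
    from math import prod

    # Per-token spamminess, learned from *your* inbox in Graham's scheme.
    token_spam_prob = {"viagra": 0.99, "free": 0.90, "meeting": 0.05, "agenda": 0.10}

    def spam_score(tokens: list[str]) -> float:
        ps = [token_spam_prob.get(t, 0.4) for t in tokens]  # 0.4: unseen-token prior
        p = prod(ps)
        return p / (p + prod(1 - q for q in ps))  # Graham's combination rule

    print(spam_score(["free", "viagra"]))     # ~0.999 -> spam
    print(spam_score(["meeting", "agenda"]))  # ~0.006 -> ham
    ```

    The generators described above optimize the exact opposite objective: producing text whose statistics score like ham.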

    I wonder what PG is saying about gen-“AI” these days? let’s check:

    “AI is the exact opposite of a solution in search of a problem,” he wrote on X. “It’s the solution to far more problems than its developers even knew existed … AI is turning out to be the missing piece in a large number of important, almost-completed puzzles.”
    He shared no examples, but […]

    Who would have thought that A Plan for Spam was, all along, a plan for spam.

    • Soyweiser@awful.systems · 3 days ago

      It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email.

      This is a really good observation, and while I had lowkey noticed it (one of those feeling things), I had never verbalized it in any way. Good point imho. Also in how it bypasses and wrecks the old anti-spam protections. It represents a fundamental flipping of sides for the tech industry: while before they were anti-spam, they are now pro-spam. A big betrayal of consumers/users/humanity.

    • swlabr@awful.systems · 3 days ago

      Signal-shaped noise reminds me of a Wiener filter.

      Aside: when I took my signals processing course, the professor kept drawing diagrams that were eerily phallic. Those were the most memorable parts of the course

  • Alex@lemmy.vg · 3 days ago

    Not a sneer but a question: do we have any good idea of what the actual costs of running AI video generators are? They’re among the worst internet polluters out there, in my opinion, and I’d love it if they turn out to be too expensive to use post-bubble, but I’m worried they’re cheaper than you’d think.

    • scruiser@awful.systems · 3 days ago

      I know like half the facts I would need to estimate it… if you know the GPU VRAM required for the video generation, and how long it takes, then assuming no latency, you could get a ballpark number looking at Nvidia GPU specs on power usage. For instance, if a short clip of video generation needs 90 GB VRAM, then maybe they are using an RTX 6000 Pro… https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/ , take the amount of time it takes in off hours, which shouldn’t have a queue time… and you can guesstimate a number of watt-hours. Like if it takes 20 minutes to generate, then at 300-600 watts of power usage that would be 100-200 watt-hours. I can find an estimate of $.33 per kWh (https://www.energysage.com/local-data/electricity-cost/ca/san-francisco-county/san-francisco/ ), so it would only be costing $.03 to $.06.

      IDK how much GPU time you actually need though, I’m just wildly guessing. Like if they use many server-grade GPUs in parallel, that would multiply the cost up even if it only takes them minutes per video generation.
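
      Plugging the guessed numbers above into code (wattage, runtime, and electricity price are all the comment’s assumptions, not measurements):

      ```python
      minutes = 20                 # guessed generation time
      price_per_kwh = 0.33         # USD, San Francisco estimate linked above

      for watts in (300, 600):     # guessed GPU power draw range
          kwh = watts * (minutes / 60) / 1000
          print(f"{watts} W x {minutes} min = {kwh:.2f} kWh -> ${kwh * price_per_kwh:.3f}")
      # 300 W -> 0.10 kWh -> $0.033;  600 W -> 0.20 kWh -> $0.066
      ```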

      • Alex@lemmy.vg · 1 day ago

        Well that’s certainly depressing. Having to come to terms with living post-gen AI even after the bubble bursts isn’t going to be easy.

        • scruiser@awful.systems · 22 hours ago

          Keep in mind I was wildly guessing with a lot of numbers… like I’m sure 90 GB of VRAM is enough for decent-quality pictures generated in minutes, but I think you need a lot more compute to generate video at a reasonable speed? I wouldn’t be surprised if my estimate is off by a few orders of magnitude. $.30 is probably enough that people can’t spam lazily generated images, and a true cost of $3.00 would keep it in the range of people who genuinely want/need the slop… but yeah, I don’t think it is all going cleanly away once the bubble pops or fizzles.

      • Soyweiser@awful.systems · 2 days ago

        This does leave out the constant cost (per video generated) of training the model itself, right? Which pro-genAI people would say you only have to do once, but we know everything online gets scraped repeatedly now, so there will be constant retraining. (I am mixing video with text here, so lots of big unknowns.)

        • scruiser@awful.systems · 2 days ago

          If they got a lot of usage out of a model, this constant cost would contribute little to the cost of each video in the long run… but considering they currently replace/retrain models every 6 months to a year, yeah, this cost should be factored in as well.

          Also, training compute grows quadratically with model size, because it is the product of the amount of training data (which grows linearly with model size) and the model size itself.
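
          A quick sketch of that claim, using the common ~6 · params · tokens FLOPs rule of thumb and Chinchilla-style data scaling (~20 tokens per parameter); both constants are rules of thumb, not anyone’s disclosed numbers:

          ```python
          def train_flops(params: float, tokens_per_param: float = 20.0) -> float:
              tokens = tokens_per_param * params  # data grows linearly with size
              return 6 * params * tokens          # so compute grows with size**2

          for n in (1e9, 2e9, 4e9):  # doubling the parameter count...
              print(f"{n:.0e} params -> {train_flops(n):.2e} training FLOPs")
          # ...quadruples the compute: 1.20e+20, 4.80e+20, 1.92e+21
          ```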

    • nfultz@awful.systems · 3 days ago

      In a similar train of thought:

      A.I. as normal technology (derogatory) | Max Read

      But speaking descriptively, as a matter of long precedent, what could be more normal, in Silicon Valley, than people weeping on a message board because a UX change has transformed the valence of their addiction?

      I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?

      • BlueMonday1984@awful.systems · 3 days ago

        I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?

        In a literal sense, Google did attempt to make GPT Doom, and failed (i.e. a large language model can’t run Doom).

        In a metaphorical sense, the AI equivalent to Doom was probably AI Dungeon, a roleplay-focused chatbot viewed as quite impressive when it released in 2020.

        • nfultz@awful.systems · 3 days ago

          In April 2021, AI Dungeon implemented a new algorithm for content moderation to prevent instances of text-based simulated child pornography created by users. The moderation process involved a human moderator reading through private stories.[49][41][50][51] The filter frequently flagged false positives due to wording (terms like “eight-year-old laptop” misinterpreted as the age of a child), affecting both pornographic and non-pornographic stories. Controversy and review bombing of AI Dungeon occurred as a result of the moderation system, citing false positives and a lack of communication between Latitude and its user base following the change.[40]

          Haha. Good find.

      • deathgrindfreak@awful.systems · 20 hours ago (edited)

        AI Spam

        Have you ever read an article of his in full? Literally packed with facts and numbers backing up his arguments

          • bitofhope@awful.systems · 3 days ago

            Ooh, what a terrible fate! What horrid crimes you must have committed to make our beloved jannies punish you with admin bits! :D

          • self@awful.systems · 3 days ago

            Intellectual (Non practicing, Lapsed)

            indeed

            not saying it’s always the supposed infosec instances, but

          • cy@fedicy.us.to · 3 days ago

            Wulfy… saying someone cannot be right because they haven’t agreed with you yet is an appeal to authority. People might be wrong, but they don’t have to adopt AI in order to have an informed opinion.

            If you’re asking me how to design a prompt for a particular AI, then I don’t know a single thing about it. If you’re asking me whether AI is a good idea or not, I can be more sure of that answer. Feel free to prove me wrong, but don’t say my opinion doesn’t matter.

            Have you seen the data centers being built just north of your house? No? Well, it doesn’t matter, you still might have a point!

  • bitofhope@awful.systems · 3 days ago

    The beautiful process of dialectics has taken place on the butterfly site, and we have reached a breakthrough in moral philosophy. Only a few more questions remain before we can finally declare ethics a solved problem. The most important among them is: when an omnipotent and omnibenevolent basilisk simulates Roko Mijic getting kicked in the nuts eternally by a girl with blue hair and piercings, would the girl be barefoot or wearing heavy, steel-toed boots? Which kind of footwear, or lack thereof, would optimize the utility generated?

  • gerikson@awful.systems · 3 days ago

    Good news everyone! Someone with a SlackSlub has started a series countering the TESCREAL narrative.

    He (c’mon, it’s a guy) calls it “R9PRESENTATIONALism”

    It stands for

    • Relational
    • 9P
      • Postcritical
      • Personalist
      • Praxeological
      • Psychoanalytic
      • Participatory
      • Performative
      • Particularist
      • Poeticist
      • Positive/Affirmationist
    • Reparative
    • Existentialist
    • Standpoint-theorist
    • Embodied
    • Narrativistic
    • Therapeutic
    • Intersectional
    • Orate
    • Neosubstantivist
    • Activist
    • Localist

    I see no reason why this catchy summary won’t take off!

    https://www.lesswrong.com/posts/RCDEFhCLcifogLwEm/exploring-the-anti-tescreal-ideology-and-the-roots-of-anti

    • fnix@awful.systems · 1 hour ago

      Oh you thought TESCREAL sounded fancy huh? Well I’ll raise you a BIGGER word!

    • V0ldek@awful.systems · 23 hours ago

      This is a bundle which originated out of anti-Calvinist polemics written by Catholic and royalist Anglican writers during the early modern period, was picked up by 19th century romantic reactionaries to build the foundation of the emerging Counter-Enlightenment, got carried into the 20th century by various counter-modern literary movements seeking a third way against both capitalism and socialism which could justify the continuing relevance of the traditional humanistic disciplines against the new challenge of the social and psychological sciences, transitioned from being primarily of the political right to the political left because of the ideological aftermaths of WW2 and 1968, and took on its modern form in environmental and anti-globalization activism in the 90s

      Parkour! So nazis weren’t left-wing but they switched after the war, brilliant.

      It is the actual source of the post-60s ideological transformation against the ideas of rationality, science, objectivity, and progress on the left

      Pfffffff, lol, what? xD Ye, the left, famously anti-science, unlike the rational thinkers that reside in the White House right now

    • Soyweiser@awful.systems · 2 days ago (edited)

      Ok, I know I said I don’t like TESCREAL as a term (it groups too much under one banner, feels like how everybody to the left of the right gets called a communist/liberal, and it just isn’t catchy as a term, easy to misuse) but this has turned me around. If they write articles like this and show their whole ass, I’m all for it.

      I’m sure Ottokar asked chatgpt for advice on this and it told him what a great writer he is and how much he is on to something.

      (Or this new user on LW is just trolling and 22 upvoters fell for it).

      a four-centuries-long counterrevolution within the arts to defend the validity of charismatic authority

      If this gets a followup, please make it a separate post. I see so many potential sneers. Also wonder if we can eventually bring up Gödel (drink) in re his claims about science and objectivity.

      (Also, as they are being pro-science and anti-charismatic-authority, are they going to get rid of Yud and Scott? (I’m obviously joking here; I know that them describing us as pro-charisma/anti-science/anti-objectivity does not make them automatically pro all that.))

      E: another reason why these kinds of meta-level discussions are silly: they leave out the big elephants in the room. The elephants called sexism, racism, scientific racism, anti-lgbt stuff, the fellating of billionaires, the constant creation of new binary ideas which they say are not intended to be hierarchical but where the meta level is clearly better than the object level, soldiers claiming they have a scout mindset, etc.

    • swlabr@awful.systems · 3 days ago

      I have a better counter narrative:

      • Consequentialism
      • Universalism
      • Meta-analytical
      • Singularitarianism
      • Heuristicationalism
      • Autodidacticalisticalistalism
      • Retro-regresso-revisionism
      • Transhumanisticiousnessness
      • Exo-galactic-civilisationalismnisticalism
      • Rationalist

      Can’t think of a good acronym though, but it’s a start

      • bitofhope@awful.systems · 3 days ago
        • Accelerationism
        • Consequentialism
        • Conservatism
        • Orthodoxy
        • Rationalism
        • Disestablishmentarianism
        • Intellectualism
        • Natalism
        • Galileianism
        • Transhumanism
        • Outside the box thinking
        • Anti-empiricism
        • Laissez-faire
        • LaVeyan Satanism
        • Kantian deontology
        • Nationalism
        • Orgasm denial
        • Western chauvinism
        • Neo-Aristotelianism
        • Longtermism
        • Altruism
        • White supremacy
        • Sinophobia
        • Orientalism…
    • YourNetworkIsHaunted@awful.systems · 3 days ago

      […] it actually has surprisingly little to do with any of the intellectual lineages that its proponents claim to subscribe to (Marxism, poststructuralism, feminism, conflict studies, etc.) but is a shockingly pervasive influence across modern culture to a greater degree than even most people who complain about it realize.

      I mean, when describing TESCREAL, Torres never had to argue that its adherents were lying or incorrect about their own ideas. It seems like whenever someone tries this kind of backlash they always have to add in a whole mess of additional layers that are somehow tied to what their interlocutors really believe.

      I’m reminded, ironically, of Scott’s (imo very strong) argument against the NRx category of “demotist” states. It’s fundamentally dishonest to create a category that ties together both the innocuous or positive things your opponents actually believe and some obnoxious and terrible stuff, and then claim that the same criticisms apply to all of them.