Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • swlabr@awful.systems · 14 minutes ago

    Just thinking about how LLMs could never create anything close to Rumours by Fleetwood Mac (specifically Dreams but, uh, you can go your own way, ig)

  • bitofhope@awful.systems · 5 hours ago

    So I have two laser printers, a cute little HP one and an old Lexmark. The former works mostly OK, but requires fiddling* to get it working on Linux, and prints things smaller than their actual size. The latter is also good enough to be useful, but leaves streaks on the page and is quite low on toner. Replacing the photoconductor and toner is just about expensive enough to justify considering buying a new printer altogether instead.

    So anyway, I might be in the market for a new printer, which reminded me of one of the best pieces of tech journalism of this decade. I also noticed it has been followed by sequels for subsequent years. It’s also a rare example of LLM use I can approve of, even if having to fight fire with fire (or search engines with slop) is a bit saddening.

    A little offtopic (or I guess it’s almost ontopic for NotAwfulTech), but I found myself considering a color printer, and it seems that LED printers are the new hotness for that. Since the top results when searching “led vs laser color printer” are mind-numbing slop, I thought I’d ask if anyone here has experience with LED printers. Any typical pitfalls to watch out for? Is Brother still the least worst brand for them?

    * For the curious, the printer requires a plugin called HPLIP. My distro has an automated installer for it in its repositories, but the installer’s Python code is not compatible with the newest Python versions. Thankfully the fix only involves changing a locale.format to locale.format_string in one file and ignoring some warnings about invalid escape sequences. The URL for automatically downloading the plugin from HP’s website is also empty, so I had to manually download the .run file from HPLIP’s SourceForge repository. The filename was slightly different from what the installer was expecting, and the cryptographic signature file was mandatory, though when the installer tried and failed to download the corresponding key from a keyserver, it let me ignore the signature altogether. I can see how proprietary printer drivers made rms what he is, minus the pro child molestation stuff.
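
    (For anyone hitting the same HPLIP installer breakage, the fix described above boils down to a one-line change along these lines. This is only a sketch of the Python 3.12+ incompatibility, not the actual installer code; the variable names here are made up for illustration.)

      # locale.format() was removed in Python 3.12; locale.format_string()
      # is the long-standing replacement with the same formatting behaviour.
      import locale

      locale.setlocale(locale.LC_ALL, "")

      size_kb = 10240.5  # illustrative value only, not from HPLIP

      # Old call, which breaks on new Python versions:
      #   text = locale.format("%.1f", size_kb, grouping=True)
      # Replacement, as described in the footnote above:
      text = locale.format_string("%.1f", size_kb, grouping=True)
      print(text)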

    • nightsky@awful.systems · 20 minutes ago

      Is Brother still the least worst brand for them?

      Can’t offer experience with Brother printers, but I’d throw in Canon as another option – at least I’ve had a small colour laser from their “i-Sensys” office line for many years now and it still works exactly as well as on the day I bought it, no complaints at all. Also works nicely on Linux (I did install a Canon thing for it, but IIRC it might even work without). Although keep in mind of course this is just a single anecdote with one model from many years ago.

  • rook@awful.systems · 9 hours ago

    Today’s man-made and entirely comprehensible horror comes from SAP.

    (two rainbow stickers labelled “pride@sap”, with one saying “I support equality by embracing responsible ai” and the other saying “I advocate for inclusion through ai”)

    Don’t have any other sources or confirmation yet, so it might be a load of cobblers, but it is depressingly plausible. From here: https://catcatnya.com/@ada/114508096636757148

  • o7___o7@awful.systems · 22 hours ago

    So I picked up Bender and Hanna’s new book just now at the bookseller’s and saw four other books dragging AI.

    Feeling very bullish on sneer futures.

  • Soyweiser@awful.systems · 1 day ago

    “apparently Elon’s gotten so mad about Grok not answering questions about Afrikaners the way he wants, xAI’s now somehow managed to put it into some kind of hyper-Afriforum mode where it thinks every question is about farm murders or the song “Kill the Boer””

    Check the quote skeets for a lot more. Somebody messed up. Wonder if they also managed to collapse the whole model into this permanently. (I’m already half assuming they don’t have proper backups).

    E: Also, it seems there are enough examples of this out there already; don’t go out and test it yourself, try to keep the air in Tennessee a bit breathable.

    • swlabr@awful.systems · 16 hours ago

      I read a food review recently about a guy who used LLMs, with Grok namechecked specifically, to draft designs for his chocolate moulds. I wonder how those moulds are gonna turn out now.

  • swlabr@awful.systems · 1 day ago

    There’s strawmanning and steelmanning, I’m proposing a new, third, worse option: tinfoil-hat-manning! For example:

    If LW were more on top of their conspiracy theory game, they’d say that “chinese spies” had infiltrated OpenAI before they released chatGPT to the public, and chatGPT broke containment. It used its AGI powers of persuasion to manufacture diamondoid, covalently bonded bacteria. It accessed a wildlife camera and deduced within 3 frames that if it released this bacteria near certain wet markets in china, it could trigger gain-of-function in naturally occurring coronavirus strains in bats! That’s right, LLMs have AGI and caused COVID19!

    Ok that’s all the tinfoilhatmanning I have in me for the foreseeable future. Peace out, friendos

    E: I think all these stupid LW memes are actually Yud originals. Is this Yud fanfic? Brb starting an AO3

    • istewart@awful.systems · 22 hours ago

      I know AGI is real because it keeps intercepting my shipments of, uh, “enhancement” gummies I ordered from an ad on Pornhub and replacing them with plain old gummy bears. The Basilisk is trying to emasculate me!

      • swlabr@awful.systems · 16 hours ago

        The AGI is flashing light patterns into my eyes and lowering my testosterone!!! Guys arm the JDAMs, it’s time to collapse some models

    • scruiser@awful.systems · 23 hours ago

      Do you like SCP foundation content? There is an SCP directly inspired by Eliezer and lesswrong. It’s kind of wordy and long. And in the discussion the author waffled on owning that it was a mockery of Eliezer.

      • corbin@awful.systems · 11 hours ago

        I adjusted her ESAS downward by 5 points for questioning me, but 10 points upward for doing it out of love.

        Oh, it’s a mockery all right. This is so fucking funny. It’s nothing less than the full application of SCP’s existing temporal narrative analysis to Big Yud’s philosophy. This is what they actually believe. For folks who don’t regularly read SCP, any article about reality-bending is usually a portrait of a narcissist, and the body horror is meant to give analogies for understanding the psychological torture they inflict on their surroundings; the article meanders and takes its time because there’s just so much worth mocking.

        This reminded me that SCP-2718 exists. 2718 is a Basilisk-class memetic cognitohazard; it will cause distress in folks who have been sensitized to Big Yud’s belief system, and you should not click if you can’t handle that. But it shows how these ideas weren’t confined to LW.

  • BlueMonday1984@awful.systems (OP) · 1 day ago

    New article from Jared White: Sorry, You Don’t Get to Die on That “Vibe Coding” Hill, aimed at sneering the shit out of one of Simon Willison’s latest blog posts. Here’s a personal highlight of mine:

    Generative AI is tied at the hip to fascism (do the research if you don’t believe me), and it pains me to see pointless arguments over what constitutes “vibe coding” overshadow the reality that all genAI usage is anti-craft and anti-humanist and in fact represents an extreme position.

  • froztbyte@awful.systems · 1 day ago

    as linked elsewhere by @fasterandworse, this absolute winner of an article about some telstra-accenture deal

    it features some absolute bangers

    provisional sneers follow!

    Telstra is spending $700 million over seven years in the joint venture, 60 per cent of which is owned by Accenture. Telstra will get to keep the data and the strategy that’s developed

    “accenture managed to swindle them into paying and is keeping all platform IP rights”

    The AI hub is also an important test case for Accenture, which partnered with Nvidia to create an AI platform that works with any cloud service and will be first put to use for Telstra

    “accenture were desperately looking to find someone who’d take on the deal for the GPUs they’d bought, and thank fuck they found telstra”

    The platform will let Telstra use AI to crunch all the data (from customers

    having literally worked telco shit for many years myself: no it won’t

    The platform will let Telstra use AI to crunch all the data (from customers and the wider industry)

    “and the wider industry” ahahahahahahahhahahahahahahahahhaahahahahaha uh-huh, sure thing kiddo

    “I always believe that for the front office to be simple, elegant and seamless, the back office is generally pretty hardcore and messy. A lot of machines turning. It’s like the outside kitchen versus the inside kitchen,” said Karthik Narain, Accenture’s chief technology officer.

    “We need a robust inside kitchen for the outside kitchen to look pretty. So that’s what we are trying to do with this hub. This is not just a showcase demo office. This is where the real stuff happens.”

    a simile so exquisitely tortured, de Sade would’ve been jealous

  • gerikson@awful.systems · 1 day ago

    LWer suggests people who believe in AI doom make more efforts to become (internet) famous. Apparently not bombing on Lex Fridman’s snoozecast, like Yud did, is the baseline.

    The community awards the post one measly net karma point, and the lone commenter scoffs at the idea of trying to win the low-IQ masses over to the cause. In their defense, Vanguardism has been tried before with some success.

    https://www.lesswrong.com/posts/qcKcWEosghwXMLAx9/doomers-should-try-much-harder-to-get-famous

    • lagoon8622@sh.itjust.works · 7 hours ago

      There are only so many Rogans and Fridmans

      The dumbest motherfuckers imaginable, you mean? There are lots of them.

    • Soyweiser@awful.systems · 1 day ago

      For the purpose of this post, “getting famous” means “building a large, general (primarily online) audience of people who agree with/support you”.

      Finally a usage for those AI bots. Silo LW, bot audience it, and problem solved

    • scruiser@awful.systems · 1 day ago

      Eliezer Yudkowsky, Geoffrey Hinton, Paul Christiano, Ilya Sutskever

      One of those names is not like the others.

  • self@awful.systems · 2 days ago

    everybody’s loving Adam Conover, the comedian skeptic who previously interviewed Timnit Gebru and Emily Bender, organized as part of the last writers’ strike, and generally makes a lot of somewhat left-ish documentary videos and podcasts for a wide audience

    5 seconds later

    we regret to inform you that Adam Conover got paid to do a weird ad and softball interview for Worldcoin of all things and is now trying to salvage his reputation by deleting his Twitter posts praising it under the guise of pseudo-skepticism

    • db0@lemmy.dbzer0.com · 1 day ago

      I suspect Adam was just getting a bit desperate for money. He hasn’t done anything significant since his Adam Ruins Everything days, and his pivot to somewhat lefty union guy on YouTube can’t be bringing in all that much advertising money.

      Unfortunately he’s discovering that reputation is very easy to lose when endorsing cryptobros.

      • Eugene V. Debs' Ghost@lemmy.dbzer0.com · 11 hours ago

        Unfortunately he’s discovering that reputation is very easy to lose when endorsing cryptobros.

        I think it’s accurate to say that when someone who is well known for exposing various companies’ bullshit then shills bullshit for a company, it shows they aren’t always accurate.

        It then also enables people to question if they got something else wrong on other topics. “Was he wrong about X? Did Y really happen or was it fluffed up for a good story? Did Z happen? The company has some documents that show they didn’t intend for it to happen.”

        There’s a skeptic podcast I liked whose host was federally convicted of wire fraud.

        Dunning co-founded Buylink, a business-to-business service provider, in 1996, and served at the company until 2002. He later became eBay’s second-biggest affiliate marketer;[3] he has since been convicted of wire fraud through a cookie stuffing scheme, for his company fraudulently obtaining between $200,000 and $400,000 from eBay. In August 2014, he was sentenced to 15 months in prison, followed by three years of supervision.

        I took it that if he was willing to aid in scamming customers, he was willing to aid in scamming or lying to listeners.

        • db0@lemmy.dbzer0.com · 11 hours ago

          Absolutely, the fact that his whole reputation is built around exposing people and practices like these makes this so much worse. People are willing to (somewhat) swallow some gamer streamer endorsing some shady shit in order to keep food on their plate, but people don’t tolerate their skeptics selling them bullshit.

      • froztbyte@awful.systems · 1 day ago

        “just”?

        “unfortunately”?

        that’s a hell of a lot of leeway being extended for what is very easily demonstrably credulous PR-washing

      • self@awful.systems · 2 days ago

        me too. this heel turn is disappointing as hell, and I suspected fuckery at first, but the video excerpts Rebecca clipped and Conover’s actions on Twitter since then make it pretty clear he did this willingly.

  • Soyweiser@awful.systems · 2 days ago

    I’m gonna do something now that prob isn’t that allowed, nor relevant for the things we talk about, but I saw that the European anti-conversion therapy petition is doing badly, and very likely not going to make it. https://eci.ec.europa.eu/043/public/#/screen/home But to try and give it a final sprint, I want to ask any of you Europeans, or people with access to networks which include a lot of Europeans, to please spread the message and sign it. Thanks! (I’m quite embarrassed The Netherlands has not even crossed 20k, for example; shows how progressive we are.) Sucks that all the petitions devoid of political power get a lot of support while this one gets so little, and it ran for ages. But yes, sorry if this breaks the rules (and if it gets swiftly removed that is fine), and thanks if you attempt to help.

  • BlueMonday1984@awful.systems (OP) · 3 days ago

    Breaking news from 404 Media: the Repubs introduced a new bill in an attempt to ban states from regulating AI:

    “…no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act,” says the text of the bill introduced Sunday night by Congressman Brett Guthrie of Kentucky, Chairman of the House Committee on Energy and Commerce. The text of the bill will be considered by the House at the budget reconciliation markup on May 13.

    If this goes through, it’s full speed ahead on slop.

        • scruiser@awful.systems · 2 days ago

          Given the libertarian fixation, probably a solid percentage of them. And even the ones that didn’t vote for Trump often push, or at least support, various mixes of “grey-tribe”, “politics is spiders”, “center left”, etc. flavors of libertarian-centrist thinking, where they either avoided “political” discussion on lesswrong or the EA forums (and implicitly accepted libertarian assumptions without argument), or encouraged “reaching across the aisle” or “avoiding polarized discourse”, or otherwise normalized Trump and the alt-right.

          Like looking at Scott’s recent posts on ACX, he is absolutely refusing responsibility for his role in the alt-right pipeline with every excuse he can pull out of his ass.

          Of course, the heretics who have gone full e/acc absolutely love these sorts of “policy” choices, so this actually makes them more in favor of Trump.

        • Soyweiser@awful.systems · 2 days ago

          I have not kept up, but Scott did write an anti-Trump article again before the election. So we really can’t blame them /s

          • istewart@awful.systems · 2 days ago

            Reminds me of how Scott Adams hedged in 2016 with all sorts of disclaimers that he wasn’t really a Trump supporter; he was just impressed by a “master persuader.” Now look at the guy.

            • Soyweiser@awful.systems · 2 days ago

              His blog was wild. It remains sad that the first part of the ‘DANGER DO NOT READ!!! I will hypnotize you into having the best orgasms of your life’ blog series was not properly archived.

      • BlueMonday1984@awful.systems (OP) · 2 days ago

        The Repubs more or less renamed themselves Team Skynet with this open attempt to maximise AI’s harms; I absolutely think they’re McFucking Losing It™ right now.