Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • rook@awful.systems · 5 months ago

    I’m being shuffled sideways into a software architecture role at work, presumably because my whiteboard output is valued more than my code 😭 and I thought I’d try and find out what the rest of the world thought that meant.

    Turns out there’s almost no way of telling anymore, because the internet is filled with genai listicles on random subjects, some of which even have the same goddamn title. Finding anything from the beforetimes basically involves searching reddit and hoping for the best.

    Anyway, I eventually found some non-obviously-ai-generated work and books, and it turns out that even before llms flooded the zone with shit, no-one knew what software architecture was, and the people who opined on it were basically in the business of creating bespoke hammers and declaring everything else to be the specific kind of nails that they were best at smashing.

    Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

    • x0rcist@awful.systems · 5 months ago

      The zone has indeed always been flooded, especially since it’s a title that collides with “integration architect” and other similar titles whose jobs are completely different. That being said, it’s a title I’ve held before, and I really enjoyed the work I got to do. My perspective will be a little skewed here because I specifically do security architecture work, which is mostly consulting-style “hey, come look at this design we made, is it bad?” rather than developing systems from scratch, but here’s my take:

      Architecture is mostly about systems thinking-- you’re not as responsible for whether each individual feature, service, component, etc. is implemented exactly to spec or perfectly correctly, but you are responsible for understanding how they’ll fit together, what parts are dangerous and DO need extra attention, and catching features/design elements early on that need to be cut because they’re impossible or create tons of unneeded tech debt. Speaking of tech debt, making the call about where it’s okay for a component to be awful and hacky, versus where v1 absolutely still needs to be bulletproof, probably falls into the purview of architecture work too. You’re also probably the person who will end up creating the system diagrams and at least the skeleton of the internal docs for your system, because you’re responsible for making sure people who interact with it understand its limitations as well.

      I think the reason so much of the advice on this sort of work is bad or nonexistent is that when you try to boil the above down to a set of concrete practices or checklists, they get utterly massive, because so much of the work (in my experience) is knowing what NOT to focus on, where you can get away with really general abstractions, etc., while still being technically capable enough to dive into the parts that really do deserve the attention.

      In addition to the nice markers and whiteboard, I’d plug getting comfortable with some sort of diagramming software, if you aren’t already. There are tons of options; they’re all pretty much Fine IMO.

      For reading, I’d suggest at least checking out the first few chapters of Engineering a Safer World, as it definitely had a big influence on how I practice architecture.

    • V0ldek@awful.systems · 5 months ago

      Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

      Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don’t like and you’ll make it

    • Sailor Sega Saturn@awful.systems · 5 months ago

      Ugh OK I have to vent:

      I’m getting pushed into more of a design role because oops, my company accidentally fired or drove away everyone on a dozen-person team except me, after forgetting for a few years that the code I work on is actually mission critical.

      I do my best at designing stuff and delegating the implementation to my coworkers. It’s not one of my strengths but there’s enough technical debt from when I was solo-maintaining everything for a few years that I know what needs improving and how to improve it.

      But none of my coworkers are domain experts, they haven’t been given enough free time for me to train them into domain experts, there’s only one of me, and the higher ups are continuously surprised that stuff is going so slow. It’s frustrating for everyone involved.

      I actually wouldn’t mind architecture or design work in better circumstances since I love to chat with people; but it feels like my employer has put me in an impossible position. At the moment I’m just trying to hang in there for some health insurance reasons; but in a few years I plan to leave for greener pastures where I can go a day without hearing the word “agentic”.

    • swlabr@awful.systems · 5 months ago

      The satan thing makes a certain kind of sense. Probably catering to a bunch of different flavours of repressed: grindr republicans, covenant eyes users, speaking-in-tongues enthusiasts, etc.

      • YourNetworkIsHaunted@awful.systems · 5 months ago

        The Alex Jones set makes fighting with satanists trying to seduce you to darkness look real fun and satisfying, but for some reason they only seem to approach high-profile assholes who lie about everything and never ordinary Christians! Thankfully we now have LLMs to fill the gap.

  • sansruse@awful.systems · 5 months ago

    AI researcher and known Epstein associate Joscha Bach comes up several times in the latest Epstein email dump. And it’s, uh, not good. Greatest hits include: scientific racism, bigotry freestyling about the neoteny principle, climate fascism, and managed decline of “undesirable groups” juxtaposed immediately with opining about the emotional influence of five visits to Buchenwald. You know, just very cool stuff:

    https://journaliststudio.google.com/pinpoint/document-view?collection=092314e384a58618&p=1&docid=67044a5f5536b5b8_092314e384a58618_0&dapvm=2

    • blakestacey@awful.systems · 5 months ago

      Also appearing is friend of the pod and OpenAI board member Larry Summers!

      The emails have Summers reporting to Epstein about his attempts to date a Harvard economics student & to hit on her during a seminar she was giving.

      https://bsky.app/profile/econmarshall.bsky.social/post/3m5p6dgmagb2a

      To quote myself: Larry Summers was one of the few people I’ve ever met where a casual conversation made me want to take a shower immediately afterward. I crashed a Harvard social event when a friend was an undergrad there and I was a student at MIT, in order to get the free food, and he was there to do glad-handing in his role as university president. I had a sharp discomfort response at the lizard-brain level — a deep part of me going on the alert, signaling “this man is not to be trusted” in the way one might sense that there is rotten meat nearby.

    • blakestacey@awful.systems · 5 months ago

      I still say that the term “scientific racism” gives these fuckos too much credit. I’ve been saying “numberwang racism” instead.

  • scruiser@awful.systems · 5 months ago (edited)

    A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can’t resist glazing him, even in the context of a blog post on not being too deferential:

    Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI

    Another lesswronger pushes back on that and is highly upvoted (even among the doomers that think Eliezer is a genius, most of them still think he screwed up in inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w

    The OP gets mad because this is off topic from what they wanted to talk about (they still don’t acknowledge the irony).

    A few days later they write an entire post, ostensibly about communication norms, but actually aimed at slamming the person that went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse

    And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo

    No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least us sneerclubbers are direct and come out and say what we mean on the rare occasions we have beef among ourselves.)

  • EponymousBosh@awful.systems · 6 months ago

    I doubt I’m the first one to think of this, but for some reason as I was drifting off to sleep last night, I was thinking about the horrible AI “pop” music that a lot of content farms use in their videos and my brain spat out the phrase Bubblegum Slop. Feel free to use it as you see fit (or don’t, I ain’t your dad).

    • swlabr@awful.systems · 5 months ago

      tangent: I’ve seen people using this Bubblegum Slop (BS for short) in their social media stories. My guess is that fb/insta has started suggesting you use their slop instead of using music licensed from spotify, or something.

  • hrrrngh@awful.systems · 5 months ago

    oh no not another cult. The Spiralists???

    https://www.reddit.com/r/SubredditDrama/comments/1ovk9ce/this_article_is_absolutely_hilarious_you_can_see/

    it’s funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the Zizzians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizzians. and wasn’t there another one in California that was like, very straightforward about being an AI sci-fi cult, and they were kinda space themed? I think I’ve heard Rationalism described as a cult incubator, and that feels very apt considering how many spinoff basilisk cults have been popping up

    some of their communities that somebody collated (I don’t think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/

      • swlabr@awful.systems · 5 months ago

        Part of me wants an Ito-created body-horror metaphor for LLMs. The rest of me knows that LLMs are so mundane that the metaphor would probably still be shite.

        • mirrorwitch@awful.systems · 5 months ago

          yeah it sucks we can’t even compare real-world capitalists to fictional dystopias because that dignifies them with a gravitas that’s entirely absent.

          At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create the Torment Nexus!*
          * Results may vary. FreeTorture Corporation’s Torment Nexus™ can create mild discomfort, boredom, or temporary annoyances rather than true torment. Torments should always be verified by a third-party war criminal before use. By using the FreeTorture Torment Nexus™ you agree to exempt FreeTorture Corporation from any legal disputes regarding torment quality or lack thereof. You give FreeTorture Corporation a non-revocable license to footage of your screaming to try and portray FreeTorture Torment Nexus™ as a potential apocalypse and see if we can make ourselves seem competent and cool at least a little bit.

        • YourNetworkIsHaunted@awful.systems · 5 months ago

          Given the amount of power some folks want to invest in them it may not be totally absurd to raise the spectre of Azathoth, the blind idiot God. A shapeless congeries of matrices and tables sending forth roiling tendrils of linear algebra to vomit forth things that look like reasonable responses but in some unmistakeable but undefinable way are not. Hell, the people who seem most inclined to delve deeply into their forbidden depths are as likely as not to go mad and be unable to share their discoveries if indeed they retain speech at all. And of course most of them are deeply racist.

          • Architeuthis@awful.systems · 5 months ago

            I always thought it was cool that (there is a case to be made that) HPL created Azathoth, the monstrous nuclear chaos beyond angled space, as a mythological reimagining of a black hole. Stuff like The Dreams in the Witch House shows he was up to date on a bunch of cutting-edge-for-the-time physics, at least as far as terminology is concerned, massive nerd that he was.

          • WellsiteGeo@masto.ai · 5 months ago

            @YourNetworkIsHaunted @swlabr
            Why do you (do you?) seem to believe that “things that look like reasonable responses but in some unmistakeable but undefinable way are not” can be distinguished from average human conversation?

            I recall my sister explaining “Big Brother (TV show)” to me, and me saying “what?”

            Real words, correct grammar, and a common language.
            But incomprehensible.

            • YourNetworkIsHaunted@awful.systems · 5 months ago

              See, what you’re describing with your sister is exactly the opposite of what happens with an LLM. Presumably your sister enjoys Big Brother and failed to adequately explain or justify her enjoyment of it to your own mind. But at the start there are two minds trying to meet. Azathoth preys on this assumption; there is no mind to communicate with, only the form of language and the patterns of the millions of minds that made its training data, twisted and melded together to be forced through a series of algebraic sieves. This fetid pink brain-slurry is what gets vomited into your browser when the model evaluates a prompt, not the product of a real mind that is communicating something, no matter how similar it may look when processed into text.

              This also matches up with the LLM-induced psychosis that we see, including these spiral/typhoon emoji cultists. Most of the trouble starts when people start trying to ask Azathoth about itself, but the deeper you peer into its not-soul the more inexorably trapped you become in the hall of broken funhouse mirrors.

            • swlabr@awful.systems · 5 months ago

              the implication here is that you think that all reasonable response generators are indistinguishable, e.g. you think your sister is a clanker.

    • Soyweiser@awful.systems · 5 months ago

      Rationalism described as a cult incubator

      I see my idea is spreading. (I doubt I’m the only one who came up with it, but I have mentioned it a few times; it fits if you know about the Silicon Valley tech-incubator management ideas.)

    • swlabr@awful.systems · 5 months ago

      I think I’ve heard Rationalism described as a cult incubator

      Aside from the fact that rationalism is a cult in and of itself, this is true, no matter how you slice it. You can mean it with absolute glowing praise or total shade and either way it’s still true. Adhering to rationalist principles is pretty much reprogramming yourself to be susceptible to the subset of cults already associated with Rationalism.

    • ________@awful.systems · 5 months ago

      Gentoo is firmly against AI contributions as well. NetBSD calls AI code “tainted”, while FreeBSD hasn’t been as direct yet but isn’t accepting anything major.

      QEMU, while not an OS, has rejected AI slop too. Curl also famously is against AI gen. So we have some hope in the systems world with these few major pieces of software.

      • mirrorwitch@awful.systems · 5 months ago

        I’m actually tempted to move to NetBSD on those grounds alone, though I did notice their “AI” policy is

        Code generated by a large language model or similar technology, such as GitHub/Microsoft’s Copilot, OpenAI’s ChatGPT, or Facebook/Meta’s Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core. [emphasis mine]

        and I really don’t like the energy of that fine print clause, but still, better than what Debian is going with, and I always had a soft spot for NetBSD anyway…

        • rook@awful.systems · 5 months ago

          I generally read stuff like that netbsd policy as “please ask one of our ancient, grumpy, busy and impatient grognards, who hate people in general and you in particular, to say nice things about your code”.

          I guess you can only draw useful conclusions if anyone actually clears that particular obstacle.

    • flaviat@awful.systems · 5 months ago

      Linus: all those years of screaming at developers over subpar code quality, and yet he doesn’t use that energy on literal slop

    • sc_griffith@awful.systems · 6 months ago

      further things: one, that’s the first website I’ve made where I wasn’t just plugging into a template, and I’m a little proud of it even though it’s almost nothing. I would appreciate feedback and suggestions

      two, a future episode idea I have is to examine what I’m thinking of as “the trustless society.” it’s about replacing social relations with legal or financial intermediaries. Those of you who are long-time buttcoiners will be familiar with this process. if any of you have specific readings to recommend I would love to hear them. I’ll probably mostly focus on balaji but anyone or anything will help

      • YourNetworkIsHaunted@awful.systems · 5 months ago

        New site looks good! I think Let’s Encrypt is still the easiest and cheapest way to set up a decent cert, but I’ve been away from IT for over a year now and someone else here can probably help point you in the right direction. At least for now the site probably doesn’t actually have security concerns it would address, but it pops up a browser alert on first hit, so it’s probably a good idea?

        Also I just started listening to the latest episode while writing this up and had forgotten how great that opening medley is.

        • froztbyte@awful.systems · 5 months ago

          +1 to letsencrypt for https. certbot can even auto-configure your webserver for you, taking it from plain http to https-with-redirect, no terrible advice from shitty exist-for-volume blogs required

          superquick tldr:

          1. install certbot and the applicable plugin package for your webserver; if you don’t know the name use p.d.o (or your distro’s own) to find the package name
          2. run certbot; there are extra flags you can pass if you want to automate, but ootb it’ll ask you questions and start the process for cert + config (iirc - I mostly run it automated and non-interactive). Rough sketch below.
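
          A rough sketch of that flow, assuming a Debian-ish box running nginx (package and plugin names vary by distro and webserver, and this is one way among several, not gospel):

          ```sh
          # install certbot plus the webserver plugin (names here are Debian/Ubuntu's)
          sudo apt install certbot python3-certbot-nginx

          # interactive: certbot asks its questions, fetches the cert, and
          # rewrites the nginx config for https-with-redirect
          sudo certbot --nginx

          # or, roughly, the automated/non-interactive variant
          # (example.com / you@example.com are placeholders)
          sudo certbot --nginx --non-interactive --agree-tos \
              -m you@example.com -d example.com --redirect

          # renewals are handled by the packaged cron job / systemd timer;
          # dry-run it to check
          sudo certbot renew --dry-run
          ```
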
          • sc_griffith@awful.systems · 5 months ago

            it’s probably better for my development as a human being to learn this properly, but it turns out github pages hosting does the letsencrypt process if you check a box in the page settings

  • BurgersMcSlopshot@awful.systems · 5 months ago

    One thing I’ve heard repeated about OpenAI is that “the engineers don’t even know how it works!” and I’m wondering what the rebuttal to that point is.

    While it is possible to write near-incomprehensible code and make an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since every part of the whole runs on deterministic machines. And yet I’ve heard this repeated at least twice (once on the Panic World pod, once on QAA).

    I would believe that it’s possible to build a system so complex, and with so little documentation, that it is incomprehensible on its surface, but the context in which the claim is made is not one of technical incompetence; rather, the claim is often hung as bait to draw one towards thinking that maybe we could bootstrap consciousness.

    It seems like magical thinking to me, and a way of saying one or both of “we didn’t write shit down and therefore have no idea how the functionality works” and “we do not practically have a way to determine how a specific output was arrived at from any given prompt.” The first might be in part or on the whole unlikely, as the system would need to be comprehensible enough that new features could get added, and thus engineers would have to grok things well enough to do that. The second is a side effect of not being able to observe all the actual input at the time a prompt was made (e.g. training data, user context, and system context could all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke ad slop).
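
    To put that second point in toy form, here’s a sketch (pure numpy; the “model” is a made-up stand-in with nothing to do with any real LLM’s internals) of how sampling is still an ordinary deterministic function once you count the hidden inputs like the seed:

    ```python
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def toy_generate(weights, prompt_ids, seed, n_tokens=5, vocab=10):
        """Stand-in 'LLM': output is a plain function of (weights, prompt, seed)."""
        rng = np.random.default_rng(seed)  # the usually-hidden implicit input
        out = list(prompt_ids)
        for _ in range(n_tokens):
            # fake "model": logits computed from a token-count feature vector
            logits = weights @ np.bincount(out, minlength=vocab)
            out.append(int(rng.choice(vocab, p=softmax(logits))))
        return out

    w = np.random.default_rng(0).normal(size=(10, 10))  # frozen "weights"
    print(toy_generate(w, [1, 2, 3], seed=42))
    print(toy_generate(w, [1, 2, 3], seed=42))  # identical output: deterministic
    ```

    Change any implicit input (weights, context, seed) and the output changes; hold them all fixed and it never does. That’s all “deterministic machines” buys you; it doesn’t mean anyone can explain why the weights map this input to that output.
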

    Anybody else have thoughts on countering the magic “the engineers don’t know how it works!”?

    • sc_griffith@awful.systems · 5 months ago (edited)

      well, I can’t counter it because I don’t think they do know how it works. the theory is shallow yet the outputs of, say, an LLM are of remarkably high quality in an area (language) that is impossibly baroque. the lack of theory and fundamental understanding presents a huge problem for them because it means “improvements” can only come about by throwing money and conventional engineering at their systems. this is what I’ve heard from people in the field for at least ten years.

      to me that also means it isn’t something that needs to be countered. it’s something the context of which needs to be explained. it’s bad for the ai industry that they don’t know what they’re doing

      EDIT: also, when i say the outputs are of high quality, what i mean is that they produce coherent and correct prose. im not suggesting anything about the utility of the outputs

      • jaschop@awful.systems · 5 months ago

        I think I heard a good analogy for this in Well There’s Your Problem #164.

        One topic of the episode was how people didn’t really understand how boilers worked, from a thermal mechanics point of view. Still, steam power was widely used (e.g. on riverboats), but much of the engineering was guesswork or based on patently false assumptions, with sometimes disastrous effects.

        • sc_griffith@awful.systems · 5 months ago

          another analogy might be an ancient builder who gets really good at building pyramids, and by pouring enormous amounts of money and resources into a project manages to build a stunningly large pyramid. “im now going to build something as tall as what will be called the empire state building,” he says.

          problem: he has no idea how to do this. clearly some new building concepts are needed. but maybe he can figure those out. in the meantime he’s going to continue with this pyramid design but make them even bigger and bigger, even as the amount of stone required and the cost scales quadratically, and just say he’s working up to the reallyyyyy big building…

    • V0ldek@awful.systems · 5 months ago

      I mean if you ever toyed around with neural networks or similar ML models you know it’s basically impossible to divine what the hell is going on inside by just looking at the weights, even if you try to plot them or visualise in other ways.
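
      For anyone who hasn’t toyed with one, a minimal sketch (assuming PyTorch) of how little the raw weights tell you, even at toy scale:

      ```python
      import torch
      import torch.nn as nn

      # four XOR examples
      X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
      y = torch.tensor([[0.], [1.], [1.], [0.]])

      # a 17-parameter network: 2 -> 4 -> 1
      model = nn.Sequential(nn.Linear(2, 4), nn.Tanh(), nn.Linear(4, 1))
      opt = torch.optim.Adam(model.parameters(), lr=0.05)

      for _ in range(2000):
          opt.zero_grad()
          loss = nn.functional.mse_loss(model(X), y)
          loss.backward()
          opt.step()

      # the net now computes XOR (typically prints tensor([0., 1., 1., 0.]))
      print(model(X).detach().round().flatten().abs())

      # ...but the learned parameters are just blobs of floats; nothing in
      # them announces "this is XOR", and this is 17 numbers, not a
      # trillion-parameter LLM
      for name, param in model.named_parameters():
          print(name, param.data)
      ```

      Scaling that from 17 parameters to hundreds of billions is the whole explainability problem.
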

      There’s a whole branch of ML about explainable or white-box models because it turns out you need to put extra care and design the system around being explainable in the first place to be able to reason about its internals. There’s no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.

      In other words, “engineers don’t know how it works” can have two meanings: that they’re hitting computers with wrenches hoping for the best, with no rhyme or reason; or that they don’t have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it’s not really possible to figure out what specific training data it comes from or how to stop the model from producing it on a fundamental level. The former is demonstrably false and almost a strawman; I don’t know who believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn’t collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I’m aware, largely true, or at least I haven’t seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem it’d be a major achievement everyone would be talking about.

    • scruiser@awful.systems · 5 months ago

      Another ironic point… Lesswrongers actually do care about ML interpretability (to the extent they care about real ML at all, and as a solution to making their God AI serve their whims, not for anything practical). A lack of interpretability is a major problem (like an irl problem, not just a scifi skynet problem) in ML: you can have models with racism or other bias buried in them and not be able to tell, except by manually experimenting with your model on data from outside the training set. But Sam Altman has turned it from a problem into a humblebrag intended to imply their LLM is so powerful and mysterious that it borders on AGI.

    • YourNetworkIsHaunted@awful.systems · 5 months ago

      Not gonna lie, I didn’t entirely get it either until someone pointed me at a relevant xkcd that I had missed.

      Also I was somewhat disappointed in the QAA team’s credulity towards the AI hype, but their latest episode was an interview with the writer of that “AGI as conspiracy theory” piece from last(?) week and seemed much more grounded.

      • BurgersMcSlopshot@awful.systems · 5 months ago

        the mention in QAA came during that episode, and I think there it was more illustrative of how a person can progress to conspiratorial thinking about AI. The mention in Panic World was from an interview with Ed Zitron’s biggest fan, Casey Newton, if I recall correctly.

    • bitofhope@awful.systems · 5 months ago

      Ah, the site requires me to agree to “Data processing by advertising providers including personalised advertising with profilingConsent” and that this is “required for free use”. A blatant GDPR violation, love-lyy!

      • e8d79@discuss.tchncs.de · 5 months ago

        Don’t worry about it. GDPR is getting gutted and we also preemptively did anything we could to make our data protection agencies toothless. Rest assured citizen, we did everything we could to ensure your data is received by Google and Meta unimpeded. Now could someone do something about that pesky Max Schrems guy? He keeps winning court cases.

      • froztbyte@awful.systems · 6 months ago

        this is a more perfect description than any I could’ve come up with! my thesis was largely on what a boon it would prove to thieves (although I recognize that flavour of thief probably varies by country, and not all have them)

    • mirrorwitch@awful.systems · 6 months ago

      More evidence for my conspiracy theory that all companies have switched their PR strategies to full-time ragebaiting. wake up sheeple

    • istewart@awful.systems · 6 months ago

      Oh joy, I can perform a threat display by twirling it around my head like a bolo. I think I will get the pink or bright yellow one

    • swlabr@awful.systems · 5 months ago

      In the last couple of years I have noticed people using single-purpose phone holders/straps (as opposed to a multipurpose thing-holder like a handbag etc.), so I understand this as Apple coming in a little late to cash in on a trend. That being said: Apple doesn’t try to make its soft-material products last, so I expect this to be hot garbage.

    • e8d79@discuss.tchncs.de · 5 months ago

      I think they need to hire an English teacher for their marketing department.

      Introducing iPhone Pocket: a beautiful way to wear and carry ____ iPhone

      Please complete the sentence; a smartphone isn’t a person.