• 21 Posts
  • 295 Comments
Joined 2 年前
cake
Cake day: 2023年7月19日

help-circle


  • Thanks, this was an awful skim. It feels like she doesn’t understand why we expect gravity to propagate like a wave at the speed of light; it’s not just an assumption of Einstein but has its own independent measurement and corroboration. Also, the focus on geometry feels anachronistic; a century ago she could have proposed a geometric explanation for why nuclei stay bound together and completely overlooked gluons. To be fair, she also cites GRW but I guess she doesn’t know that GRW can’t be made relativistic. Maybe she chose GRW because it’s not yet falsified rather than for its potential to explain (relativistic) gravity. The point at which I get off the train is a meme that sounds like a Weinstein whistle:

    What I am assuming here is then that in the to-be-found underlying theory, geometry carries the same information as the particles because they are the same. Gravity is in this sense fundamentally different from the other interactions: The electromagnetic interaction, for example, does not carry any information about the mass of the particles. … Concretely, I will take this idea to imply that we have a fundamental quantum theory in which particles and their geometry are one and the same quantum state.

    To channel dril a bit: there’s no inherent geometry to spacetime, you fool. You trusted your eyeballs too much. Your brain evolved to map 2D and 3D so you stuck yourself into a little Euclidean video game like Decartes reading his own books. We observe experimental data that agrees with the presumption of 3D space. We already know that time is perceptual and that experimentally both SR and GR are required to navigate spacetime; why should space not be perceptual? On these grounds, even fucking MOND has a better basis than Geometric Unity, because MOND won’t flip out if reality is not 3D but 3.0000000000009095…D while Weinstein can’t explain anything that isn’t based on a Rubik’s-cube symmetry metaphor.

    She doesn’t even mention dark matter. What a sad pile of slop. At least I learned the word for goldstinos while grabbing bluelinks.



  • On a theoretical basis, this family of text-smuggling attacks can’t be prevented. Indeed, the writeup for the Copilot version, which Microsoft appears to have mitigated, suggested that some filtering of forbidden Unicode would be much easier than some fundamental fix. The underlying confusable deputy is still there and core to the product as advertised. On one hand, Google is right; it’s only exploitable via social engineering or capability misuse. On the other hand, social engineering and capability misuse are big problems!

    This sort of confused-deputy attack is really common in distributed applications whenever an automatic process is doing something on behalf of a human. The delegation of any capability to a chatbot is always going to lead to possible misuse because of one of the central maxims of capability security: the ability to invoke a capability is equivalent to the permission to invoke it. Also, in terms of linguistics and narremes, it is well-known that merely mentioning that a capability exists will greatly raise the probability that the chatbot chooses to invoke it, not unlike how a point-and-click game might provoke a player into trying every item at every opportunity. I’ll close with a quote from that Copilot writeup:

    Automatic Tool Invocation is problematic as long as there are no fixes for prompt injection as an adversary can invoke tools that way and (1) bring sensitive information into the prompt context and (2) probably also invoke actions.





  • I guess I’m the local bertologist today; look up Dr. Bender for a similar take.

    When we say that LLMs only have words, we mean that they only manipulate syntax with first-order rules; the LLM doesn’t have a sense of meaning, only an autoregressive mapping which associates some syntax (“context”, “prompt”) to other syntax (“completion”). We’ve previously examined the path-based view and bag-of-words view. Bender or a category theorist might say that syntax and semantics are different categories of objects and that a mapping from syntax to semantics isn’t present in an LLM; I’d personally say that an LLM only operates with System 3 — associative memetic concepts — and is lacking not only a body but also any kind of deliberation. (Going further in that direction, the “T” in “GPT-4” is for Transformers; unlike e.g. Mamba, a Transformer doesn’t have System 2 deliberation or rumination, and Hofstadter suggests that this alone disqualifies Transformers from being conscious.)

    If you made a perfect copy of me, a ‘model’, I think it would have consciousness. I would want the clone treated well even if some of the copied traits weren’t perfect.

    I think that this collection of misunderstandings is the heart of the issue. A model isn’t a perfect copy. Indeed, the reason that LLMs must hallucinate is that they are relatively small compared to their training data and therefore must be lossy compressions, or blurry JPEGs as Ted Chiang puts it. Additionally, no humans are cloned in the training of a model, even at the conceptual level; a model doesn’t learn to be a human, but to simulate what humans might write. So when you say:

    Spinal injuries are terrible. I don’t think ‘text-only-human’ should fail the consciousness test.

    I completely agree! LLMs aren’t text-only humans, though. An LLM corresponds to a portion of the left hemisphere, particularly Broca’s area, except that it drives a tokenizer instead; chain-of-thought “thinking” corresponds to rationalizations produced by the left-brain interpreter. Humans are clearly much more than that! For example, an LLM cannot feel hungry because it does not have a stomach which emits a specific hormone that is interpreted by a nervous system; in this sense, LLMs don’t have feelings. Rather, what should be surprising to you is the ELIZA effect: a bag of words that can only communicate by mechanically associating memes to inputs is capable of passing a Turing test.

    Also, from one philosopher to another: try not to get hung up on questions of consciousness. What we care about is whether we’re allowed to mistreat robots, not whether robots are conscious; the only reason to ask the latter question is to have presumed that we may not mistreat the conscious, a hypocrisy that doesn’t withstand scrutiny. Can matrix multiplication be conscious? Probably not, but the shape of the question (“chat is this abstractum aware of itself, me, or anything in its environment”) is kind of suspicious! For another fun example, IIT is probably bogus not because thermostats are likely not conscious but because “chat is this thermostat aware of itself” is not a lucid line of thought.


  • I think it’s the other way around. The memes are incredibly good at left vs right because left- and right-leaning people presume underlying facts and the memes reassure people that those facts are true and good (or false and bad, etc.) without doing any fact-finding.

    When we say “the right can’t meme” what we mean is that the right’s memes are about projecting bigotry. It’s like saying that the right has no comedians; of course they have people that stand up in front of an audience and emit words according to memes, tropes, and narremes, such that the audience laughs. Indeed, stand-up was invented by Frank Fay, an open fascist. (His Behind the Bastards episodes are quite interesting.) What we’re saying is that the stand-up routine is bigoted. If this seems unrelated, please consider: the Haitians-eating-pets joke is part of a stand-up routine that a clown tells in order to get his circus elected.


  • My name is Schmidt F. I’m 27 years old. My house is in the Mennonite region of Dutch Pennsylvania, where all the farms are, and I am trad-married. I work as the manager for the Single Sushi matchmaking service, and I get home every day by sunset at the latest. I don’t smoke, but I occasionally drink. I’m in bed by two candles and make sure I sleep until sunrise, no matter what. After having a glass of warm unpasteurized milk and doing about twenty minutes of prayer before going to bed, I usually have no problems sleeping until morning. Just like a real Mennonite, I wake up without any fatigue or stress in the morning. I was told there were no issues at my last one-on-one with my pastor. I’m trying to explain that I’m a person who wishes to live a very quiet life, as long as I have Internet access. I take care not to trouble myself with any enemies, like JavaScript and Python, that would cause me to lose sleep at night. That is how I deal with society, and I think that is what brings me happiness. Although, if I were to write code I wouldn’t lose to anyone.




  • The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:

    If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

    1. If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
    2. There’s no way that AI with an IQ of 300 will arrive within the next few decades.
    3. We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.

    Ignoring that IQ doesn’t really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn’t stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erroring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.

    Frankly I wish that they’d understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.


  • Jeff “Coding Horror” Atwood is sneering — at us! On Mastodon:

    bad news “AI bubble doomers”. I’ve found the LLMs to be incredibly useful … Is it overhyped? FUCK Yes. … But this is NOTHING like the moronic Segway (I am still bitter about that crap), Cryptocurrency, … and the first dot-com bubble … If you find this uncomfortable, I’m sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.

    T. chatbot booster rhetoric. So what are those examples, buddy? Very specifically? He replies:

    a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.

    Um hello‽ Maybe Jeff doesn’t have a spare room or room to sublet, but surely he can spare a couch or a mailbox? Let your friend use your mailing address. Store some of their stuff in your garage. To use the jargon of hackers, Jeff should be a better neighbor. This is a common issue for unhoused folks and they cannot climb back up the ladder into society without some help. Jeff’s reinvented the Hulk tacos meme but they can’t even eat it because printer paper tastes awful.




  • I love how this particular sci-fi plot gets rewritten every few years. We ought to make it a creative-writing exercise for undergraduates. I was struck by this utterly unhinged and somewhat offensive response on the orange site which starts with the single word “stirrups” and goes places:

    Despite speaking as if he’s doing his utmost to have a love affair with the Cambridge dictionary (and sounding like a twat at the same time) he’s not wrong in so far as not giving a shit is going to screw him over when the ability to push buttons in front of a television no longer matters. What happens when the guys hanging around doing meth on the sidewalk become the engineers that end up becoming the super biologist supermen that cure cancer make us able to hear what dogs hear and see extra colors? It’s unlikely, but it’s even less likely that everyone who is a middle class engineer will be so tomorrow. There is no moat in any profession outside of entrenched wealth or guns at the moment. There just isn’t - we’re in a permanent state of future shock along with the singularity. In large part because that’s what people decided that they wanted.