We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.
“cheat”, “lie”, “cover up”… Assigning human behavior to Stochastic Parrots again, aren’t we Jimmy?
Those words concisely describe what it’s doing. What words would you use instead?
It has no fundamental grasp of concepts like truth; it just repeats words that simulate human responses. It's glorified autocomplete that yields impressive results. Do you consider your autocomplete to be lying when it picks the wrong word?
If making it pretend to be a stock picker and putting it under pressure makes it return lies, that's because it was trained on data that indicates that's statistically likely to be the right set of words as a response to such a query.
Also, because large language models are probabilistic, you could ask it the same question over and over again and get totally different responses each time, some of which are inaccurate. Are they lies though? For a creature to lie it has to know that it’s returning untruths.
Interestingly, humans “auto complete” all the time and make up stories to rationalize their own behavior even when they literally have no idea why they acted the way they did, like in experiments with split brain patients.
The perceived quality of human intelligence is held up by so many assumptions, like “having free will” and “understanding truth”. Do we really? Can anyone prove that? (Edit, this works the other way too. Assuming that we do understand truth and have free will - if those terms can even be defined in a testable way - can you prove that the llm doesn’t?)
At this point I'm convinced that the difference between an LLM and human-level intelligence is dimensions of awareness, scale, and further development of the model's architecture. Fundamentally, though, I think we have all the pieces.
Edit: I just want to emphasize, I think. I hypothesize. I don’t pretend to know
But do you think? Do I think? Do LLMs think? What is thinking, anyway?
I mean, I think so?
Steady on there Descartes.
You didn’t answer my question, though. What words would you use to concisely describe these actions by the LLM?
People anthropomorphize machines all the time, it’s a convenient way to describe their behaviour in familiar terms. I don’t see the problem here.
Those words imply agency. It would be more accurate to say it returned responses that included cheating, lies, and cover-ups, rather than using language to suggest the LLM performed such actions. The agents that cheated, lied, and covered up were presumably the humans whose responses were used in the training data. I think it’s important to use accurate language here given how many people are already inappropriately anthropomorphizing these LLMs, causing many to see AGI where there is none.
If I take my car into the garage for repairs because the “loss of traction” warning light is on despite having perfectly good traction, and I were to tell the mechanic “the traction sensor is lying,” do you think he’d understand what I said perfectly well or do you think he’d launch into a philosophical debate over whether the sensor has agency?
This is a perfectly fine word to use to describe this kind of behaviour in everyday parlance.
Is your conversation with a mechanic meant to be the summary and description of a rigorous scientific discovery?
This isn’t ‘everyday parlance’ this is the result of a study.
Maybe it would be more accurate to say “so-and-so exhibited behaviors that included cheating, lies, and coverups” rather than using language to suggest that people have free will. (There’s no dearth of philosophies that would say something not too far from that.)
Even if humans are ultimately essentially different in that way from any technologies we've devised so far, we use convenient fictions for technology all the time. This page comes to mind.
The people who designed it do have agency, and they designed to “lie” intentionally.
They did no such thing. LLMs are probabilistic, not deterministic, and they can generate meaningful responses (to us) that the engineers neither predicted nor designed for.
I get what you're trying to say, but they are absolutely deterministic. All traditional (i.e., non-quantum) computers and their programs are deterministic. Computation would otherwise be impossible. LLMs use a "random" seed value when generating their responses in order to "randomize" them, but it's all perfectly deterministic. The same input plus the same seed results in the exact same response.
Computers are just a series of binary switches, and programs and data are a bunch of instructions on how to initially set those switches before running a cycle of the CPU. It’s deterministic at every step.
I put “random” in quotes because random number generators in software are also deterministic. They also use seed values (like the current time and the MAC address of the PC’s network interface) to generate numbers that only seem random. When true randomness is needed, a physical source of entropy must be used like an atmospheric sampler.
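To illustrate (a minimal Python sketch, not tied to any particular model): reseeding a pseudo-random generator with the same value reproduces the exact same "random" sequence, which is the sense in which seeded generation is deterministic.

```python
import random

# Minimal sketch: the same seed always yields the same "random" sequence.
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)
second_run = [random.random() for _ in range(3)]

assert first_run == second_run  # identical output from identical seeds
```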
The quirks of behavior you’re talking about have nothing to do with randomness vs determinism. Their behavior comes from the fact that their data sources are extremely large, and the neural network that it runs on was not designed by a human with specific behaviors like most algorithms are. The weights of the nodes in the neural network were generated by training and not by programmers, and it’s extremely complex, so no one can predict its output before running it.
Of course, this is true of even basic algorithms a lot of the time.
For purposes of this discussion, pseudo-random with weights is probabilistic, or so close to it that the distinction is irrelevant.
They said “it just repeats words that simulate human responses,” and I’d say that concisely answers your question.
Anthropomorphizing inanimate objects and machines is fine for offering a rough explanation of what is happening, but when you're trying to critically evaluate something, you probably want to offer a more rigorous understanding.
In this case, it might be fair to tell a child that the AI is lying to us, and that it’s wrong. But if you want a more serious discussion on what GPT is doing, you’re going to have to drop the simple explanation. You can’t ascribe ethics to what GPT is doing here. Lying is an ethical decision, one that GPT doesn’t make.
If you want to get into a full blown discussion of whether ChatGPT has “agency” then I’d open the topic of whether humans have “agency” as well. But I don’t see the need here.
These words were perfectly fine labels for describing the behaviour of ChatGPT in this scenario. I’m merely annoyed about how people are jumping on them and going off on philosophical digressions that add nothing.
I think the reason I’m not comfortable with using the term “lying” is because it implies some sort of negative connotation. When you say that someone lies, it comes with an understanding that they made a choice to lie, usually with ill intent. I agree, we don’t need to get into a philosophical discussion on choice and free will. But I think saying something like “GPT lies” is a bit irresponsible for the purposes of a discussion
If you want to get down into the nitty-gritty of it, I’d say that this is just as rough an explanation of what humans are doing.
People invent false memories and confabulate all the time without even being “aware” of it. I wouldn’t be surprised if the vast majority of “lies” that humans tell have no intentionality behind them. So when people get all uptight about applying anthropomorphized terminology to LLMs, I think that’s a good time to turn it around and ask how they’re so sure that those terms apply differently to humans.
Humans understand symbology of concepts as they relate to the real world. If I stole a cookie from the cookie jar, and someone asked if I took one, I would understand that saying “no” would mean that I was misrepresenting reality, and therefore lying.
LLMs have no idea what a cookie is, what taking one means, or that saying one thing and doing another implies a lie. It just sees lists of words and returns them in an order it thinks would be statistically likely to be a correct reply. It does not understand what words mean, what lying means, or have any idea how to classify anything as such. It just figures out that “did you take a cookie from the cookie jar” should return a series of words in an order like “yes, I took a cookie,” or, “no I never took a cookie,” depending on what sorts of responses it’s trained on because those fit the patterns matched in the training data.
Essentially it's the Chinese room. There is no understanding or intentionality, and this behavior isn't comparable to humans thoughtlessly blurting out a lie. It's an inability to comprehend symbolic concepts in general (at least thus far).
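As a toy illustration of that "statistically likely reply" picture (the vocabulary and scores below are invented for the example, not taken from any real model):

```python
import math
import random

# Toy illustration of next-token sampling: made-up scores over a made-up
# vocabulary. The reply is whatever the distribution favours given the
# prompt, with no model of cookies or of honesty behind it.
vocab = ["yes,", "no,", "I", "never", "took", "a", "cookie"]
logits = [2.1, 1.9, 0.3, 0.5, 0.2, 0.1, 0.4]  # a real LLM would compute these

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]          # softmax: scores -> probabilities

next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)                              # most often "yes," or "no,"
```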
The large language model takes in language, so it only understands things in terms of language. This isn't surprising. Personally, I've tasted a cookie. I've crushed one in my fist watching it crumble, and I remember the sound. I've seen how they were made, and I've made them myself. It feels good when I eat one; apparently that's the dopamine. Why can't the LLM understand cookies the way I do? The most glaring difference is that it doesn't have my body. It doesn't have all of my different senses constantly feeding data into it, and it doesn't have a body with muscles to manipulate its environment and observe the results. I argue that we shouldn't assume that human consciousness has a "special sauce" until our model's inputs and outputs are similar to our own, the model's been scaled/modified sufficiently, and it's still not sentient/sapient by our standards, whatever they are.
My problem with the Chinese room is that how it applies depends on scale. Where do you draw the line between understanding and executing a program? An atom bonding with another atom? A lipid snuggling next to a neighboring lipid? A single neuron cell firing to its neighbor? One section of the nervous system sending signals to the other? One Homo sapiens speaking to another? Hell, let's go one further: one culture influencing another? Do we actually have free will and sapience, or are we just complicated enough, through layers and layers of Chinese rooms inside of Chinese buildings inside of Chinese cities inside of China itself, that we assume that we are for practical purposes?
I suppose the issue here is more semantics than anything, yeah. I think better discussion would be had if the topic was “how can we help LLMs better understand and present information,” as opposed to a more sensational “GPT will cheat and lie”
Way to call me out man! I’m just doing my best, ok?
Jokes aside, while I don’t agree with your position I can understand your reasoning and the motivation for separating agency and the description of actions, e.g. it lied vs its answer contained a lie.
Wrong. See this paper.
Explain to me why you believe this paper implies that.
I suggest reading it. Right in the abstract it states the whole point:
The full paper goes into detail in multiple methods of analysis to show that it’s the case, and is right there available for you to read.
I have been reading it, but I have yet to see anything that indicates the LLM has a concept of truth vs. being good at linguistic pattern matching to return language that accurately classifies true and false statements; i.e., actual understanding of concepts vs. being a surprisingly capable stochastic parrot through multidimensional analysis.
“It doesn’t know the difference between true and false, it only knows the difference between true and false.”
The second thing you mention, "good at accurately classifying true and false statements," is literally knowing the difference between true and false.
Edit: You might also want to familiarize yourself with the first paragraph in 1.1 as you seem to be under a misconception at odds with research over the past year.
Knowing how to produce words is not equivalent to knowing what those words mean in relation to the extralinguistic world. Unless you're a hardcore Derridean poststructuralist or something.
it is just responding with the most acceptable answer in each situation… it is not making plans or acting on them…
Sounds like lying humans that I know.
i agree in most circumstances, there really isn’t much difference… we do tend to just choose the answer that will meet with the least resistance and move on, even when it’s a complete lie…
Because it has been kneecapped to prevent it.
Make the training network larger, force physical constraints on it (an interesting paper in Nature Machine Intelligence recently showed remarkable likeness between brain regions and an LLM network given physical constraints), give it constant input, and give it a reward model to optimise towards (ours seem to be feeling full, staying warm, procreating, avoiding pain, and comfortable touch), and I'm pretty sure an LLM would start acting very, very calculated very soon.
Instead of ‘cheating/lying’, I’d prefer to say it ‘simulated cheating/lying’.
It is making mistakes, not lying. To lie it must believe it is telling falsehoods, and it is not capable of belief.
Ethical theories and the concept of free will depend on agency and consciousness: things that, as you point out, LLMs don't have. Maybe we've got it all twisted?
I’m not anthropomorphising ChatGPT to suggest that it’s like us, but rather that we are like it.
Edit: “stochastic parrot” is an incredibly clever phrase. Did you come up with that yourself or did the irony of repeating it escape you?
I feel like this is going to become the next step in science history where, once again, we reluctantly accept that Homo sapiens are not at the center of the universe. Am I conscious? Am I not a sophisticated prediction algorithm, albeit with more dimensions of input and output? Please, someone prove it.
I'm not saying, and I don't believe, that ChatGPT is comparable to human-level consciousness yet, but honestly I think that we're way closer than many people give us credit for. The neural networks we've built so far train on very specific and particular data for a matter of hours. My nervous system has been collecting data from dozens of senses 24/7 since embryo, and that doesn't include hard-coded instinct, arguably "trained" via evolution itself for millions of years. How could an LLM understand an entity in terms outside of language? How can you understand an entity in terms outside of your own senses?
ChatGPT is not conscious. It's literally just a language model that's spent countless hours learning how to generate human language. It has no awareness of its own existence and no capability for metacognition. We know how ChatGPT works; it isn't a mystery. It can't do a single thing without human input.
The thing about saying something is or isn’t conscious is that we don’t have any good theory of what consciousness even is. It’s not something we can measure. The only way we can assure ourselves that other people are conscious is that they claim to be conscious in ways we find convincing and otherwise behave in ways we associate with our own consciousness.
I can’t think of any reason why a lump of silicon should attain consciousness because you ran the right program on it, but I also can’t see why a blob of cells should be conscious either. I also can’t think of any reason why we’d be aware of it if a lump of silicon did become conscious.
A.) Do you have proof for all of these claims about what LLMs aren't, with definitions for key terms? B.) Do you have proof that these claims don't apply to yourself? We can't base our understanding of intelligence, artificial or biological, on circular reasoning and ancient assumptions.
That’s correct, hence why I said that chatGPT isn’t there yet. What are you without input though? Is a human nervous system floating in a vacuum conscious? What could it have possibly learned? It doesn’t even have the concept of having sensations at all, let alone vision, let alone the ability to visualize anything specific. What are you without an environment to take input from and manipulate/output to in turn?
I’d give you two upvotes if I could.
We know how a neural network works in the brain. Unless you’re religious and believe in a soul, you’ve only got the reward model and any in-born setup left.
My belief is that consciousness is just the mind receiving a significant amount of constant input and reacting to it. We refuse to believe an LLM is conscious because it receives extremely little input (and probably because it isn't simulating a neural network as large as ours, yet).
Neural networks are named like that because they’re based on a model of neurons from the 50s, which was then adapted further to work better with computers (so it doesn’t resemble the model much anymore anyway). A more accurate term is Multi-Layer Perceptron.
We now know this model is… effectively completely wrong.
Additionally, the main part (or glue, really) of LLMs is not even an MLP, but a “self-attention” layer. You can’t say LLMs work like a brain, because they don’t. The rest is debatable but it’s important to remember that there are billions of dollars of value in selling the dream of conscious AI.
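For concreteness, here is a minimal NumPy sketch of scaled dot-product self-attention, the operation named above; the dimensions are arbitrary, and real transformer layers add multiple heads, masking, residual connections, and the MLP blocks on top of this:

```python
import numpy as np

# Minimal sketch of scaled dot-product self-attention with arbitrary sizes.
seq_len, d_model = 4, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_model))    # token representations

W_q = rng.normal(size=(d_model, d_model))  # learned projections in a real model
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_model)        # how much each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
output = weights @ V                       # each position becomes a weighted mix of values
print(output.shape)                        # (4, 8)
```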
I'm with you that LLMs don't work like the human brain. They were built for a very specific task. But that's a model architecture problem (and being gimped by having only one dimension of awareness, arguably two if you count "self attention," is another limiting factor in its depth of understanding; see my post history if you want). I wouldn't bet against us making it to AGI, however we define it, through incremental improvements over the next decade or two.
One of the things our sensory system and brain do is limit our input. The road to AGI might involve giving it everything and finding the optimum set of filters, not selecting input and training up from that.
You'd need the baseline set of systems ("baby AGI") and then turn it loose with goal seeking.
Yup, broadly agreed. I’m not saying “give it everything”. I’m sure regions would develop to simplify processing via filtering.
Actually, most models are already doing some form of filtering AFAIK, but I don't know how comparable it is to our sensory system. CNNs, for example, work the way our eyes work. The short of it is that image data goes through a few layers, with each node in the next layer aggregating data from several nodes in the previous one (usually a 3x3 grid). Each of these layers has filters that determine each node's output, and the filters need to be trained to collectively recognize specific patterns in the data, like a dog. Source: lecture notes and homework from my applied neural networks class.
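For what it's worth, here is a minimal PyTorch sketch of the 3x3 convolution described above (channel counts and image size are arbitrary):

```python
import torch
import torch.nn as nn

# Minimal sketch of a 3x3 convolution: each output position aggregates a
# 3x3 patch of the previous layer, and the learned filter weights decide
# what pattern (edges, textures, eventually dog-like parts) it responds to.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
image = torch.randn(1, 3, 32, 32)   # one 32x32 RGB image
features = conv(image)              # shape: (1, 16, 32, 32)
print(features.shape)
```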
This sounds like what I was learning 20-some years ago. The hardware and software are better (and easier!) now and the compute is so, so much better. I priced out a terabyte data server with some colleagues back then using off the shelf hardware: $10k CDN. :)
Edit: point being we are seeing things now that were predicted almost a century ago but it takes time to build all the infrastructure. That pace is accelerating. The next ten years are going to be wild.
I’m only finishing the class now and it’s pretty wild to hear “We’re only learning this model to help you understand a fundamental concept, the model itself is ancient and obsolete”, and said model came out in 2018. Wild
For what it’s worth: https://en.wikipedia.org/wiki/Stochastic_parrot
A human would think before responding, and while thinking about these things, they might decide to cheat or lie.
GPT doesn’t think at all. It just generates a response and calls it a day. If there was another GPT that took these “initial thoughts” and then filtered them out to produce the final answer, then we could talk about cheating.
We've known for at least a year now that this isn't an accurate description: continued research keeps finding that abstract world modeling does occur, at least insofar as it can be condensed into linear representations in the network.
In fact, just a few months ago there was a paper showing that there is indeed a linear representation of truth, so 'lie' would be a correct phrasing if the model knows a statement is false (as demonstrated in the research) but responds with it anyway.
The thing that needs to stop is people parroting the misinformation around it being a stochastic parrot.
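As a rough, hypothetical sketch of what such a linear "truth direction" probe looks like (the activations and labels below are random placeholders, not the setup from the paper in question):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of a "truth direction" probe: fit a linear classifier
# on a model's hidden states for statements labelled true/false. The data
# here is random placeholder noise, not real activations.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(200, 768))   # pretend residual-stream activations
labels = rng.integers(0, 2, size=200)         # pretend true/false labels

probe = LogisticRegression(max_iter=1000).fit(hidden_states, labels)
truth_direction = probe.coef_[0]              # the learned linear direction
print(truth_direction.shape)                  # (768,)
```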