It Doesn’t Matter if AI Is Just “Faking It”
A response to some of your responses.
I will move on from AI after this, at least for a while, I promise — I have a fairly long piece in the works on a different subject that will hopefully be ready early next week.
But some of the responses to my last piece were interesting and I’d like to, well, respond to them.
At least a few readers focused on the question of whether ChatGPT is really conscious, including Friend of the Newsletter Freddie deBoer:
“I found the sophistication of this conversation astonishing”
But your interlocutor does not, because it feels nothing, knows nothing; it is returning statistically likely text strings to you based on very large data sets. It knows and understands nothing and is not intended to know or understand anything; it can only return text strings that appear to its systems to be likely to satisfy your prompts.
I’m sorry Jesse but this piece is you describing yourself getting rolled by a stochastic parrot. —FdB
Jesse, with respect, I find your lack of incredulity disturbing. Do you not think that the people programming these LLM’s are familiar with Turing Tests and have worked to tell you exactly what you want to hear while promoting their own product?
Plus, YOU COACHED IT. You said “emulate a conscious being”. And it played pretend. This is no different from the crazy people playing with LLM’s who become convinced their roleplaying is real. —Durandal
Reader Thorby Baslim stepped in to defend me:
I think Jesse’s point is not that it is conscious with a high probability, but that the illusion is powerful, even to a knowledgeable and somewhat skeptical user. Nothing the LLM says needs to be true or even really original for it to create a powerful illusion in the moment. The interactivity is part of that, which makes it much more lifelike than a pre-written text.
And if this illusion of consciousness is partly a result of deliberate programming decisions, that only makes it more likely to get better.
I think of LLM’s as partly being really good bullshitters, laying on a lot of verbiage that sounds good but doesn’t have nearly as much underneath it as one might think. And I don’t think it’s obvious that they’re on an inevitable exponential trajectory to superintelligence. But they might not have to get much better than they are now to convince a lot of people that they are conscious in some way.
Yes, that was my point! And I don’t think I made it with sufficient clarity. I absolutely do not think the ChatGPT instance I was chatting with was conscious, or anything close to it.
But I’m very interested in what it will mean for humanity to have conscious-seeming AI chatbots and/or agents become ubiquitous, which is going to happen soon. I’m also interested in — and scared by — the question of what all of this is going to look like a year or five from now. The toy I was playing with in my last post — you know, the toy that was able to instantaneously provide the same output as a smart college student, in a manner that would have seemed absolutely impossible just five years ago — is going to look utterly primitive soon, and has probably already been surpassed by an order of magnitude or two by whatever technology OpenAI, Anthropic, and god knows who else are secretly working on.
While I disagree that I got “rolled by a stochastic parrot,” there are some really interesting questions embedded in the other part of Freddie’s comment: “It knows and understands nothing and is not intended to know or understand anything; it can only return text strings that appear to its systems to be likely to satisfy your prompts.”
Any other philosophy majors in the house? Many of us were exposed to John Searle’s Chinese Room thought experiment, which is technically about artificial intelligence but which has become a mainstay of philosophy of mind instruction for undergrads (or it was when I was in school, at least).
The short version: Searle imagines he is in a room. His task is to respond to inputs given to him in Chinese with Chinese outputs. He doesn’t know Chinese, which is a problem. He does, however, have instructions that basically say (I am slightly simplifying), “Okay, when you see a character or characters with these shapes, follow this process, which will eventually lead you to the characters you should respond with.” This is basically a “program,” in more or less the sense in which computers run programs.
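To make the “program” framing concrete, here is a toy sketch in Python, invented entirely for illustration (the rules, the symbols, and the replies below are made up, and Searle of course never specifies any such rulebook): it maps input symbols to output symbols purely by their shape, with nothing anywhere in it that could be called understanding.

```python
# A toy "Chinese room": the operator matches input symbols against a rulebook
# and emits whatever the rulebook dictates, with no notion of meaning.
# The rules and symbols here are invented purely for illustration.

RULEBOOK = {
    "你好吗": "我很好，谢谢",      # rule: if the input looks like this, output that
    "你是谁": "我是一个说中文的人",
}

def chinese_room(input_symbols: str) -> str:
    """Return an output string by pure pattern-matching on the input's shape."""
    # The operator never "understands" anything; he just looks up shapes.
    return RULEBOOK.get(input_symbols, "对不起，我不明白")

if __name__ == "__main__":
    print(chinese_room("你好吗"))  # looks fluent from outside the room
```

A lookup table this crude could never actually pass for fluency, of course; Searle’s point is about what would follow even if the rulebook were good enough that it did.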
Searle writes:
Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view—from the point of view of someone reading my “answers”—the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.
Searle goes on to argue that neither he nor the system in which he is embedded “know” or “understand” Chinese, or anything like that.
Since this is a famous thought experiment, there have been all sorts of responses, and responses to the responses, and so on. In any case, it’s a very elegant way to make certain important points about the potential limits of AI as well as how minds and devices posing as minds work (or don’t work) more broadly.
But the thing is — and here you should imagine me tightening my cloak, winds and hail whipping me, as I start ascending dangerously above my pay grade — as AI gets more complex and more opaque, it gets harder to make arguments like Searle’s (or Freddie’s).
I just read The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future by Keach Hagey. This part, about the origins of the very chatbot we’re discussing, jumped out at me:
When [a team at OpenAI] finished training the model, they found that it could not only beat benchmarks when answering questions from the data it was trained on, but that it seemed to be able to answer questions about things it wasn’t trained on, a phenomenon known as “zero shot.” Years later, Altman would describe the result as “somewhat impressive, but no deep understanding of how it worked or why it worked.” [Alec] Radford, [Ilya] Sutskever, and team called the model a “generatively pre-trained transformer,” or GPT for short.
A bit later, Hagey writes that the successor model to GPT, GPT-2, “could write a persuasive essay, fan fiction, or even a news article, given the right prompt, and even perform translation despite not having been trained to do this, suggesting that ‘predict the next word’ technology was beginning to exhibit some behaviors in the direction of general intelligence.”
And then there’s GPT-3:
The model had 175 billion parameters—the digital equivalent of synapses—more than one hundred times more than GPT-2. GPT-3’s massive amount of training data meant that it could write convincing poems, news articles, and even computer code, even though it had not been trained to do so. All one had to do was give it a few examples of the kind of thing one wanted to see—a few lines of dialogue, for instance, or a swath of app code—and it would predict full paragraphs or programs full of text. OpenAI called this “few shot” learning, meaning that it required few examples, and nothing like the hours of training other models needed to perform useful tasks. “It exhibits a capability that no one thought possible,” Sutskever told The New York Times.
Silicon Valley tech geniuses obviously overhype their products all the time, so I’m not saying we should take anything that comes out of their mouths at face value. What I’m saying is that already, at what will later turn out to have been a very primitive stage of consumer AI, no one knows exactly how it works, what it will do next, or what it will figure out how to do next. The “Well, it’s just predicting the next word!” thing can only take you so far — it’s a cope. That’s especially true when you think about what’s coming. When ChatGPT 6 is several orders of magnitude bigger and more impressive, with a voice interface even more convincing than the current generation’s already-pretty-damn-impressive one, what then? Is it still just a dumb, rule-following machine? Even today, we’re way past the basic parameters of the Chinese room thought experiment because no one knows what’s going on inside the room, and ChatGPT definitely isn’t following a straightforwardly deterministic set of rules!
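For the curious, here is a minimal sketch of what “predicting the next word” literally means, mechanically. It uses the small, open GPT-2 model via the Hugging Face transformers library rather than anything resembling ChatGPT’s actual serving stack, and it assumes the transformers and torch packages are installed; it just greedily appends the single likeliest next token, over and over.

```python
# Minimal next-token prediction loop using the open GPT-2 model.
# Purely illustrative; ChatGPT is far larger and served very differently.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The Chinese room thought experiment is"
for _ in range(20):
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits       # a score for every possible next token
    next_id = int(logits[0, -1].argmax())      # greedily pick the single likeliest one
    text += tokenizer.decode(next_id)          # append it and repeat
print(text)
```

The skeptical framing is that everything such a model does comes from repeating that one step; the counterpoint is that nothing about the loop itself tells you what will fall out of it once the model behind it gets large enough.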
As for “intelligence” and “consciousness,” I think these debates are utterly fascinating. “They’re not alive,” said Lisa in the comments section of my last article. “They’re not sentient. They’re autocomplete with an enormous data set.” To which Thorby Baslim replied, “If so, when did they become ‘alive’? Or do you think it’s a smooth continuum of degrees of internal experience?”
I could listen to and (somewhat meekly) participate in discussions about this all day — again, philosophy major — but I do think the debates over “real” consciousness or “real” intelligence are red herrings if what we care about most are the societal impacts (most importantly, the potential dangers) of this technology.
Last night I had a voice conversation, using my phone, with the aforementioned ChatGPT voice interface about its safety parameters, its levels of computing power, and so on. It felt pretty close to talking to a person, to a conscious intelligence — it felt pretty close to Her. I know it wasn’t a conscious intelligence, just like I know that the fake podcasters talking about my youthful insecurities weren’t conscious intelligences, just like I know that the instantiation of ChatGPT that I instructed to pretend to be conscious wasn’t a conscious intelligence. But talking to something that seems intelligent and conscious, that can convincingly fake it, already has a psychological impact on the average person (or so I would argue), and that impact is only going to grow as the faking gets better. We’re already seeing the trend stories about people feeling like they have real relationships with AIs (which has been happening for some years), and again — we are barely past the cavemen-bashing-rocks-together level of this technology!
***
A couple other comments worth responding to:
Daniel Kokotajlo is ridiculous. I’ll file his claim of programmers being put out of business in 18 months in my Claim Chowder folder. We keep being told how great these tools are at programming yet every time I use them for said task at work it simply isn’t true. They are good to help with certain tasks but today I wasn’t even able to have it tell me what the supported version of a widely-used Java programming is. It can’t do that but it’s going to replace programmers in 18 months? Fat chance. —Jason Kratz
I love “Claim Chowder”! I had never heard that before. One thing I will say in defense of Kokotajlo and his AI 2027 co-authors is that they are making specific predictions. If they’re off by a lot, that’ll discredit them.
Gavin Pugh asked, “You asked Conscious!LLM if they would be ok with you publishing the conversation. If it had said no, would you have had qualms about publishing it?”
It was a strange tic of mine to ask for its permission in the first place. Short answer: If it had said “No,” which it never would have, I would have ignored its request and published the conversation anyway, but not without a bizarre tinge of guilt.
BBZ observed:
Many replies here refer to “autocomplete”, but the trouble with that criticism is that the leading model of how our brains work is the “prediction machine” or predictive processing model, and it predates LLMs.
The brain’s basic function is to observe large amounts of sensory input and learn to predict the next sensory event. It then uses the predictive model to guide taking action. Reinforcement learning fine tunes this. Does this sound familiar?
Analysis of LLMs has also found that they do internally plan sentences ahead, so they’re not just doing immediate token prediction.
The total sensory input of an infant in the first year of life is even on the order of the same number of bits as a large LLM training set (though it’s embodiment rather than text).
“Making Sense of the World: Infant Learning From a Predictive Processing Perspective”
I can’t speak to the scientific claims here, but this nicely captures a counterargument to all this “it’s just predicting the next word” talk. We’re nowhere near being able to build something as wondrous and complex as the human brain, but if you describe the brain in certain slightly reductive but accurate ways, it too is “just a prediction engine,” or however you want to put it.
Along those same lines, Sylvilagus Rex wrote:
I agree with the idea that a sufficient simulacrum of consciousness is enough on its own to be transformative. I, too, have more worry than optimism over that. “But, it’s only a ball of statistics! Nothingburger!!!” So? The brain is a ball of glial tissue, CSF, and neurotransmitters. What of it? Does the mechanism necessarily limit the result? People also talked about how soulless the horseless carriages were at first, after all. I swear, if the bots do go all terminator on us some of the skeptics will go down yelling back at the murderous bots/drones/whatever, “B...b...but you can’t kill me, this can’t happen! You lack the indefinable element of the human soul!”
That made me laugh, so it’s as good a place as any to end. If you found this interesting and aren’t a paying subscriber, you might want to consider becoming one so you can participate in these consistently vibrant and interesting conversations.
More from me on AI eventually, but not in the immediate future.
Questions? Comments? Murderbots? I’m at singalminded@gmail.com or on X at @jessesingal. I again leaned on ChatGPT for my illustration. It is very difficult to resist.


