I want to be upfront about something: I don't think I actually believe what I'm about to write. But I can't stop thinking about it, and sometimes the ideas that won't leave you alone are the ones worth putting on paper.
The Dismissal
For a long time, I've been firmly in the camp that says Large Language Models have nothing meaningful in common with human intelligence. Yes, they produce amazingly human-like output. Yes, they can carry on conversations, write code, and generate creative work that feels eerily real. But at their core, they're little more than very advanced predictive language generators — typeahead gone wild.
The argument I've always made, and still largely make, is straightforward: any true approximation of human intelligence will require a fundamentally different approach. Pattern recognition at scale is impressive, but it's going to bottom out somewhere short of actual intelligence. There's a ceiling, and no amount of parameter scaling or training data is going to break through it. The architecture just isn't right for what we'd genuinely call intelligence.
I still think that's probably true. Probably.
The Thought That Won't Leave
Here's the problem. We don't actually know how human intelligence works.
We have models and theories. We can map regions of the brain to functions. We understand neurons and synapses and neurotransmitters at a biological level. But the actual mechanism by which electrochemical signals become consciousness, creativity, and understanding? We're not there yet. Not even close.
And if I'm being honest — really honest — there's some pretty compelling evidence that pattern recognition and emulation make up at least a meaningful part of what we call human intelligence.
We're All Running on Pattern Recognition
Think about how much of human behavior is fundamentally imitative. We don't arrive at most of our decisions, beliefs, or behaviors through first-principles reasoning. We observe, we absorb, and we replicate.
What is "keeping up with the Joneses" if not training on your neighbors' behavior and generating similar output? Accents are pure pattern recognition — you speak the way the people around you speak, and you do it without conscious effort. Slang propagates through social groups like weights updating across a network. Fashion trends emerge not from individual creative expression but from collective pattern matching.
It goes deeper than consumer behavior, too. There's a well-documented phenomenon where creative types working independently will often converge on the same ideas at roughly the same time. We like to call it "the zeitgeist" or "ideas whose time has come," but stripped of the romanticism, it looks a lot like multiple systems trained on similar inputs arriving at similar outputs. Mob mentality, groupthink, social conformity — these aren't bugs in human cognition. They're features of a system that is, at least in part, built on pattern recognition and prediction.
What Is Creativity, Really?
This is the part that really bothers me. If so much of human behavior can be explained by pattern recognition and emulation, then what exactly is creativity? What makes us different?
Maybe creativity is what happens when someone manages to break free from the predictive thought path. When instead of generating the next most likely token — or in human terms, the next expected idea — something misfires. A connection gets made between two patterns that weren't supposed to intersect. A prediction goes sideways in a way that turns out to be interesting rather than wrong.
If that's even partially true, then the gap between what an LLM does and what a human brain does starts to look less like a canyon and more like a spectrum. Not the same thing, certainly. But maybe not as categorically different as I've been insisting.
Why I Still Don't Believe It
Let me be clear about the limits of this thought experiment. Human intelligence involves embodied experience, emotional reasoning, survival instincts shaped by millions of years of evolution, and a relationship with the physical world that no language model has. Understanding isn't just prediction. Knowing what a word means isn't the same as knowing what the thing the word represents actually is.
An LLM can write beautifully about grief without ever having lost anything. It can describe the taste of coffee without having a mouth. That gap matters, and I don't think it's one that scales away with more parameters.
Why I Can't Stop Thinking About It
But here's the thing. Every time I think I've put this idea to bed, some new piece of evidence pulls me back. I watch people mindlessly repeat talking points they absorbed from social media and think, "that's just a language model with legs." I see entire organizations converge on the same mediocre ideas through groupthink and wonder how different that really is from a model optimizing for the most probable output.
I'm not saying humans are just biological LLMs. I'm saying that maybe the part of human intelligence we understand least — the part that actually makes us intelligent — might share more architectural DNA with these systems than we're comfortable admitting. And until we actually crack the code on how human consciousness works, we can't definitively say otherwise.
Like I said, I don't think I believe this. I just can't make myself stop thinking about it. And in my experience, the ideas that haunt you tend to be the ones worth paying attention to — even when they're wrong.