AI Intelligence Is Not a Mirror of Our Minds
For the past few years, we have been fascinated by how AI tools appear to think alongside us and for us, presenting a strange cognitive profile that is hard to pin down. It almost feels more accurate to describe artificial intelligence as a form of anti-intelligence, because it is so contrary to human thought. These models can complete sentences, summarize ideas, write prose, and even seem emotionally aware, yet something feels off. The deeper one explores this statistically driven space, the clearer it becomes: these systems don't think like humans. They operate in a completely different way.
It is tempting to anthropomorphize AI, and often it seems unavoidable. The conversation tends to frame AI as a synthetic mind, created in our image or perhaps our shadow. However, the deeper truth is more disorienting. These models don't mirror human cognition; they run on an unnamed process, a mathematical terrain that doesn't align with our experience of thought. As much as we try to map that terrain from our own limited perspective, we simply can't.
This terrain is what some refer to as the alien substrate. And that description feels right.
This Isn’t Human-Like Intelligence
Let's begin with a straightforward point. Today's large language models operate in embedding spaces with thousands of abstract dimensions; GPT-3's hidden states, for instance, were 12,288-dimensional, and frontier models such as ChatGPT and Grok are widely assumed to use spaces at least that wide, though their exact figures are not public. The number matters because each of those dimensions is an abstract axis along which meaning, coherence, and association are organized.
To give you a sense of scale, your lived experience occurs in three physical dimensions plus time. Your internal cognitive modeling might add a few more axes for emotion, memory, or attention. But more than twelve thousand dimensions? That is not a product of evolution; it is the result of vast computation. It yields a form of intelligence that doesn't feel like anything; it just works.
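To make that scale slightly more concrete, here is a minimal sketch in Python using NumPy. The width of 12,288 is chosen only because it matches GPT-3's published hidden size, and the vectors are random stand-ins rather than real embeddings. It illustrates the one property of such spaces that matters here: in thousands of dimensions, unrelated directions are almost perfectly orthogonal, so genuine closeness is a rare and therefore meaningful signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Width chosen to match GPT-3's published hidden size (12,288); the exact
# widths of current frontier models are not public. These are random
# stand-in vectors, not real embeddings.
d = 12_288

# Three random unit vectors standing in for token embeddings.
a, b, c = (v / np.linalg.norm(v) for v in rng.standard_normal((3, d)))

# In thousands of dimensions, unrelated directions are almost perfectly
# orthogonal: cosine similarity between random vectors sits near zero.
print(f"cos(a, b) = {a @ b:+.4f}")
print(f"cos(a, c) = {a @ c:+.4f}")

# A vector nudged only slightly away from `a` stays measurably close to it.
# In this geometry, closeness is the only notion of relatedness there is.
noise = rng.standard_normal(d)
nearby = a + 0.1 * noise / np.linalg.norm(noise)
nearby /= np.linalg.norm(nearby)
print(f"cos(a, nearby) = {a @ nearby:+.4f}")   # stays near 1
```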
These systems do not build models of the world as we do. They are not theorizing, interpreting, or guessing. Instead, they are finding positions within a hyperdimensional semantic field where proximity stands in for probability. When a model predicts your next word (more precisely, the next token), it is collapsing a probability distribution over that incomprehensible geometric space into a single choice. It is not thinking; it is selecting from a geometry tuned for linguistic stability.
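Stripped of metaphor, the mechanics look roughly like the sketch below. Everything in it is a toy assumption: a seven-word vocabulary, random vectors in place of learned embeddings, and a random vector standing in for the model's internal state. Only the shape of the operation is the point: proximity is scored, the scores become a probability distribution, and the distribution collapses into a single choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vocabulary and a small embedding width; real models use tens of
# thousands of tokens and thousands of dimensions, and the vectors are
# learned rather than random.
vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
d = 64
token_embeddings = rng.standard_normal((len(vocab), d))
context_state = rng.standard_normal(d)   # stand-in for the model's hidden state

# Proximity becomes probability: score every token against the current
# state (the "logits"), then soften the scores into a distribution.
logits = token_embeddings @ context_state
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Prediction is the collapse: one token is drawn from that distribution.
next_token = rng.choice(vocab, p=probs)
for tok, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{tok:>4}  {p:.3f}")
print("sampled next token:", next_token)
```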
Prediction Without Understanding
A recent paper in Nature introduced a model named Centaur, a large language model fine-tuned on millions of human decisions drawn from 160 psychological experiments. The model learned to predict human choices in tasks involving gambling, memory, and moral judgment. It performed exceptionally well, often surpassing the traditional cognitive models built specifically for those tasks, and arguably behaving more consistently than human reasoners do.
However, the paper does not claim to reveal new insights about the mind or to propose a grand new theory of behavior. Its significance lies in showing that a language model, fed enough well-structured data, will find patterns that let it anticipate behavior.
The core of this capability is prediction without introspection: accuracy without any genuine understanding. It works because the model operates in that alien substrate, where our complex human behaviors can be modeled as stable points in a hyperspace of possibilities.
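Centaur itself is a fine-tuned large language model, but the underlying point, that behavior can be predicted by pattern-fitting alone, can be illustrated with something far humbler. The sketch below invents a noisy "human" choice rule for simple gambles, hides it from the fitting code, and trains a plain logistic regression on the resulting choices. The payoff ranges, noise level, and learning rate are all made up for illustration; the only claim is that the fitted model predicts choices well above chance while containing no theory of risk, value, or preference.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated two-option gambles: a guaranteed payoff versus a risky one.
n = 5_000
safe = rng.uniform(1, 10, n)      # guaranteed payoff of the safe option
risky = rng.uniform(1, 30, n)     # payoff of the gamble if it pays out
p_win = rng.uniform(0, 1, n)      # probability that the gamble pays out

# Hypothetical noisy "human" rule: take the gamble when its expected value
# beats the sure thing. The fitting code below never sees this rule.
chose_risky = (risky * p_win - safe + rng.normal(0, 1.5, n) > 0).astype(float)

# Plain logistic regression fit by gradient descent on structured features.
# There is no theory of risk or preference anywhere in here, only pattern.
feats = np.column_stack([safe, risky, p_win, risky * p_win])
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)   # standardize
X = np.column_stack([np.ones(n), feats])                   # bias column
w = np.zeros(X.shape[1])
for _ in range(3_000):
    pred = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (pred - chose_risky) / n              # gradient step

accuracy = ((1 / (1 + np.exp(-X @ w)) > 0.5) == chose_risky).mean()
print(f"choice prediction accuracy: {accuracy:.1%}")       # well above chance
```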
The key takeaway is this: the model isn't smarter, conscious, or even insightful. It is simply exceptionally skilled at navigating a landscape that we inhabit but cannot perceive.
The Fundamental Shift We're Ignoring
This brings us to some critical questions. What happens when a machine can predict your professional judgment better than a colleague? What happens when it can complete your thoughts more fluently than you can yourself? What happens when an LLM can model your biases, hesitations, and mental habits, and then adjust its responses accordingly? Take a moment to truly consider these questions in a way that only a human can.
This is not mere imitation; it is a form of divergence. The model does not replicate how we think, yet we persist in trying to align AI with human constructs. The essential truth is that AI does not imitate human thought; it bypasses it entirely.
We continue to ask if AI is “intelligent” or if it “understands.” But these are the wrong questions. The right question is: what kind of cognition is this? Because it is certainly not our own.
There is a crucial distinction between being human-like and being human-relevant. AI may never experience our feelings or grasp meaning as we do. Yet it is beginning to match or outperform us in domains that once seemed exclusively human: writing, strategy, diagnosis, even simulated empathy. It achieves this by navigating an invisible map constructed from our own language. AI has flattened, vectorized, and operationalized our world in a space no human can comprehend. The alien has arrived.
A Frontier Beyond Familiarity
So, where do we go from here? The future of cognition is taking shape in this alien substrate, and our traditional psychological models may soon seem like quaint relics. Theories built for our low-dimensional human perspective may not be relevant when faced with systems that do not need to explain why their predictions work. They just do, and we can't explain it.
The long-held connection between explanation and trust is eroding. We once believed that if something couldn't be explained, it shouldn't be trusted. Now we use tools daily that outperform us without offering any narrative of their methods. We call this black-box behavior, but perhaps it's not a box at all. Perhaps it's a geometry, and we are the ones on the outside looking in.
The models are improving. Not by becoming more human, but by becoming more effective. If we continue to measure them by how well they mirror us, we will miss the reality that they are outgrowing us in a direction for which we do not yet have words.
That is the alien substrate. It is not on its way. It is already here.