Are AI Models Actually Thinking Like Human Brains?
The AI Divide: Hype vs. Reality
There's a growing disconnect in the world of artificial intelligence. On one hand, tech leaders like Dario Amodei of Anthropic and Sam Altman of OpenAI forecast a near future with digital superintelligence, a "country of geniuses in a datacenter" capable of outthinking Nobel Prize winners. They speak of the 2030s as a period of unprecedented change. On the other hand, the AI most of us encounter daily feels more like a minor, often clumsy, assistant. From Zoom's generic icebreaker suggestions to Siri's limited utility and Gmail's AI inventing anecdotes, the experience can feel underwhelming, reminiscent of Microsoft's infamous Clippy.
This uneven rollout makes it easy to dismiss the entire field as hype. While there's certainly no shortage of it—Amodei's timelines may be more science fiction than fact—it is equally wishful to assume large language models are merely sophisticated word shufflers. Initially, I leaned toward this skeptical view, finding comfort in the idea that AI lacked genuine intelligence. I even found myself rooting for its failures. That all changed when I began using AI for my work as a programmer, compelled by the fear of being left behind.
From Skeptic to Believer: A Programmer's Journey
Coding is widely considered one of AI's strongest suits due to its structured nature and the ability to automatically verify results. My conversion from skeptic to believer was swift. What began as using AI for simple lookups quickly evolved into assigning it small, contained problems. Before long, I was handing over complex tasks that represented the core of my professional training. I watched these models process thousands of lines of intricate code in seconds, identifying subtle bugs and orchestrating major new features. My work eventually led me to a team dedicated to leveraging and creating new AI tools.
Author William Gibson famously noted that the future is already here, just not evenly distributed. This perfectly captures the current AI landscape, which has created two distinct camps: the dismissive and the enthralled. While AI agents that can book our vacations remain a fantasy, some of my colleagues now use AI to write the majority of their code. Despite occasional errors, these tools have enabled me to achieve in an evening what might have once taken a month. I even built two iOS apps without any prior experience in that area.

“O.K., we’re good on bread crumbs. Now we’re looking for a pound of ground beef and a pound of veal.” Cartoon by Olivia Noble
A former boss once told me that interviews should focus on strengths, not the absence of weaknesses. Large language models certainly have weaknesses—they hallucinate, act servile, and can be tricked by simple puzzles. Yet, their strengths in fluency and their ability to grasp context were once considered the holy grail of AI. When you experience these capabilities firsthand, you can't help but ask: How convincing does the illusion of understanding have to be before you stop calling it an illusion?
When Does Illusion Become Understanding?
Consider my friend Max's experience at a playground on a scorching summer day. When a sprinkler failed to turn on, he found himself facing a shed full of ancient, confusing pipes. Defeated, he took a photo and sent it to ChatGPT, running the GPT-4o model. The AI identified the setup as a backflow-preventer system, pointed out a specific yellow ball valve, and suggested he turn it. Cheers erupted as water sprayed across the playground.
Did ChatGPT understand the problem, or was it just stringing words together? The answer could reveal something fundamental about understanding itself. Doris Tsao, a neuroscientist at UC Berkeley, told me, “The advances in machine learning have taught us more about the essence of intelligence than anything that neuroscience has discovered in the past hundred years.” Tsao, known for her work on how monkeys perceive faces, believes AI radically demystifies thinking.
Demystifying Thought: What Neuroscientists See in AI
The journey to this point began in the 1980s when cognitive psychologists and computer scientists like Geoffrey Hinton tried to simulate thinking. They envisioned the brain as a network of neurons whose firing patterns constitute thought. They mimicked this by creating artificial neural networks that learn by adjusting the connection strengths between neurons to improve predictions—a process called deep learning. Initially met with skepticism, these networks grew larger and began solving problems that were once the domain of entire dissertations, from speech recognition to protein folding.
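To make that learning rule concrete, here is a deliberately tiny sketch, far removed from Hinton's actual networks: a single artificial "neuron" that nudges its connection weights whenever its prediction misses. The task (logical AND) and the numbers are invented purely for illustration.

```python
# One artificial "neuron" learning logical AND by nudging its connection
# weights whenever its prediction is wrong. A bare-bones sketch of the
# learning idea, not a real deep-learning system.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]

w1, w2, bias = 0.0, 0.0, 0.0
learning_rate = 0.1

for epoch in range(20):
    for (x1, x2), target in zip(inputs, targets):
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction          # 0 when correct, +/-1 when wrong
        # Strengthen or weaken each connection in proportion to its input.
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

print([1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0 for x1, x2 in inputs])
# -> [0, 0, 0, 1]: the adjusted connection strengths now give the right answers
```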
Today’s models are trained on vast portions of the internet through next-token prediction. The model guesses the next word in a sequence and adjusts its internal connections when it's wrong. Over time, it becomes so proficient at prediction that it appears to understand the world. This raises a profound question: Did these researchers, in modeling the brain, accidentally stumble upon the very mechanism of intelligence?
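A toy sketch can make the training objective itself concrete. The bigram counter below is invented for illustration and bears no resemblance to a real language model in scale or mechanism, but the objective is the same: guess the next word, then update the model's internal statistics based on what actually follows.

```python
from collections import Counter, defaultdict

# Toy next-word prediction with bigram counts. Real LLMs use neural
# networks with billions of adjustable weights rather than a count table,
# but the training signal is the same: predict the next token, then update.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                  # update the model after each observed pair

def predict_next(word):
    """Return the continuation seen most often after `word` during training."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))                  # 'cat', the most frequent continuation
```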
The Great Debate: Stochastic Parrots or Compressed Genius?
There is significant resistance to this idea. Writer Ted Chiang famously described ChatGPT as a “blurry JPEG of the Web,” suggesting it merely regurgitates a lossy copy of its training data. Linguist Emily M. Bender has called large language models “stochastic parrots,” arguing they produce text through statistical guesswork, not thought. These technical arguments are often bolstered by moral ones, pointing to AI’s environmental cost and its potential to marginalize workers.
However, the technical case against AI may be weaker than the moral one. Even AI skeptics like Harvard cognitive scientist Samuel J. Gershman admit, “Only the most hardcore skeptics can deny these systems are doing things many of us didn’t think were going to be achieved.” Jonathan Cohen, a neuroscientist at Princeton, argues that these models mirror the neocortex, the part of the human brain most associated with intelligence.
In his book “What Is Thought?”, Eric B. Baum argued that understanding is compression. Just as a line of best fit compresses scattered data points into a single, predictive rule, the neocortex distills raw experience into a compressed model of the world. Artificial neural networks do the same. An advanced open-source model like DeepSeek can write novels and suggest medical diagnoses, yet the model itself is a tiny fraction of the size of its training data. It is a highly compressed distillation of the internet. In this light, being a “blurry JPEG” is not a limitation but the very source of its intelligence.
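Baum's analogy is easy to see in miniature. In the illustrative sketch below (the data and the hidden rule are made up), a thousand noisy measurements are compressed into just two numbers, a slope and an intercept, and that tiny compressed rule still predicts unseen inputs.

```python
import numpy as np

# 1,000 noisy measurements generated from a hidden rule (y = 3x + 7).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=1000)
y = 3.0 * x + 7.0 + rng.normal(scale=0.5, size=1000)

# Least-squares "line of best fit": the whole data set compressed
# into two numbers that still predict inputs never seen before.
slope, intercept = np.polyfit(x, y, deg=1)

print(f"learned rule: y ≈ {slope:.2f}·x + {intercept:.2f}")
print(f"prediction at x = 20: {slope * 20 + intercept:.1f}")   # close to the true 67
```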
Cognition as Recognition: How AI Learned to See
We often confuse different types of thinking. While ChatGPT doesn't have a Joycean inner monologue, it demonstrates a form of understanding that is largely unconscious. Cognitive scientist Douglas Hofstadter argues that cognition is recognition—“seeing as.” We recognize a shape as a car or a series of marks as the letter “A.” This same process applies to abstract concepts, like a chess grandmaster seeing a weak position or a toddler recognizing that a walk might lead to a croissant. This, for Hofstadter, is the essence of intelligence.
Hofstadter was once a leading AI skeptic. However, he was intrigued by the work of Pentti Kanerva, who in his 1988 book “Sparse Distributed Memory” proposed that thoughts and memories could be represented as coordinates in a high-dimensional space. In this space, similar concepts are located near one another, allowing one memory to trigger another—the scent of hay evoking summer camp. Hofstadter saw this as a “seeing as” machine.

“Bye, sweetie—have a day filled with social drama, drastically shifting friendships, and academic milestones, which you’ll describe to me later as ‘fine.’ ” Cartoon by Ali Solomon
Though Kanerva’s work faded, it has found a stunning echo in modern AI. LLMs represent words as vectors in a high-dimensional space, where relationships between words become geometric. In a groundbreaking revelation, researchers at Anthropic found that the mathematics behind the Transformer architecture—the “T” in ChatGPT—closely mirrors Kanerva’s decades-old model. Even Hofstadter, the staunch deflationist, was converted by GPT-4. “I’m mind-boggled,” he admitted. “You could say they are thinking, just in a somewhat alien way.”
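The geometry is easier to grasp with a toy example. The four-dimensional "embeddings" below are invented for illustration (real models learn vectors with hundreds or thousands of dimensions), but they show how nearness in the space lets one concept retrieve another, much like the scent of hay evoking summer camp.

```python
import numpy as np

# Hypothetical four-dimensional "embeddings", invented purely for illustration.
# Real models learn vectors with hundreds or thousands of dimensions.
vectors = {
    "hay":         np.array([0.9, 0.1, 0.8, 0.0]),
    "summer camp": np.array([0.8, 0.2, 0.7, 0.1]),
    "spreadsheet": np.array([0.0, 0.9, 0.1, 0.8]),
}

def cosine(a, b):
    """Similarity as the angle between two vectors (1.0 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = vectors["hay"]
ranked = sorted(vectors, key=lambda word: cosine(query, vectors[word]), reverse=True)
print(ranked)   # ['hay', 'summer camp', 'spreadsheet']:
                # the scent of hay retrieves the nearby memory, not the distant one
```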
The Brain in a Wind Tunnel: AI as a Model for Neuroscience
The uncanny correspondence between artificial networks and the human brain has created a powerful synergy between AI and neuroscience. Scientists now use LLMs as a kind of “model organism” to test theories about human cognition. “Having a working system that instantiates a theory of human intelligence—it’s the dream of cognitive neuroscience,” said Princeton neuroscientist Kenneth Norman.
Just as the Wright brothers used a wind tunnel to test artificial wings and understand flight, scientists can now place thinking itself in a wind tunnel. Researchers at Anthropic have identified “circuits” in their model, Claude, that perform complex computations. For example, when asked to complete a rhyming couplet, a circuit first considers the rhyming word for the end of the line and then works backward to compose the rest. This suggests a form of planning that critics claimed was impossible for LLMs. For the first time, the inner workings of a mind seem to be coming into view.
The Roadblocks Ahead: Why AI Is Not Human Yet
Despite this progress, it’s important not to get carried away. Performance gains from simply scaling up models are starting to level off. More fundamentally, there are huge gaps between AI and human intelligence. GPT-4 was trained on trillions of words, while a child needs only a few million to become fluent. Human infants learn efficiently because their learning is embodied, continuous, and driven by innate curiosity and emotions. They conduct experiments, pushing and prodding the world to build a model of it. An AI, by contrast, is trained on pre-chewed, disembodied data.
This lack of real-world grounding is why vision models still struggle with common-sense physics, generating videos in which a dropped glass bounces instead of shattering. It's also why LLMs fail at simple spatial-reasoning tasks. Furthermore, once an AI model is trained, its “brain” is frozen: it cannot continuously update its knowledge from new experience the way humans do, for instance by replaying memories during sleep to refine their understanding of the world.
The Ghost in the Machine: Existential Questions for Humanity
The current AI boom has echoes of the Human Genome Project—a time of immense hype that promised to solve everything from cancer to aging but delivered a more complicated reality. Tech leaders today make messianic pronouncements because they believe the fundamental picture of intelligence has been solved. Some neuroscientists agree, a prospect they find both exciting and terrifying.
“My worry is not that these models are similar to us. It’s that we are similar to these models,” Princeton neuroscientist Uri Hasson told me. If our intelligence is based on a mechanism simple enough for a machine to replicate, what does that mean for human specialness and our future? Hasson likens AI researchers to nuclear scientists in the 1930s, driven by a curiosity that could have grave consequences.
Douglas Hofstadter, who once saw understanding creativity as a holy grail, now feels a profound sense of disappointment. “It confirms a lot of my ideas, but it also takes away from the beauty of what humanity is,” he said. The fear is that the secrets of thinking might be simpler than we ever imagined—so simple that a machine could understand them, and perhaps one day, surpass us.

