
Can AI Ever Truly Feel Blue or See Red?

2025-07-13 · Jordan Joseph · 4 minute read
Artificial Intelligence
Cognitive Science
Language

We use colorful phrases like “feeling blue” for sadness or “seeing red” for anger without a second thought. But where does this understanding come from? Is it learned by seeing colors in the world, or simply by how we hear them used in language?

A fascinating new study from the University of Southern California and Google DeepMind investigated this very question. Researchers compared color-seeing adults, colorblind adults, professional painters, and ChatGPT to determine what truly shapes our grasp of these metaphors: physical vision or linguistic exposure.


Lisa Aziz-Zadeh, a cognitive neuroscientist at USC’s Dornsife Brain and Creativity Institute, led the project. She questioned whether the massive statistical knowledge of an AI like ChatGPT could replicate the rich, firsthand understanding humans gain through their senses.

The Experiment: Humans vs. AI

For the study, participants completed online surveys where they matched abstract concepts like “friendship” or “physics” to colors on a digital palette. They also evaluated both common metaphors, such as being “on red alert,” and more unusual ones, like calling a celebration “a very pink party.”

The results for both color-seeing and colorblind adults were nearly identical. This suggests that a lifetime of language exposure can effectively compensate for missing visual data from the retinas.

Why Painters Did Even Better

Interestingly, the painters in the study performed best, correctly identifying tricky and unfamiliar metaphors 14 percent more often than non-painters. The researchers believe this is because artists have a deeper, hands-on connection to color. Their daily work mixing pigments creates a rich mental map that connects hue, lightness, and mood.

This aligns with previous work showing that our emotional links to colors are shaped by both universal patterns and cultural influences. The artists' advantage highlights how tactile experience can sharpen our conceptual understanding, much like how drawing or sculpting helps students learn better than just reading.

Experience Trumps AI's Statistics

The study's findings strongly support the grounded-cognition model, which proposes that our understanding of concepts is tied to the sensory traces of how we learned them. Direct experience with pigments proved more powerful than pure statistical analysis when faced with novelty.

While ChatGPT could handle common idioms, it struggled with unusual phrases like a “burgundy meeting.” When asked to justify its choices, the AI defaulted to cultural associations, stating, “Pink is often associated with happiness, love, and kindness.” This reveals a critical gap: large language models rely on patterns in text, not felt experience. They miss the nuances of meaning that are rarely written down.
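To make that gap concrete, here is a minimal sketch of how one might probe a language model with an unfamiliar color metaphor using the OpenAI Python SDK. The model name, prompt, and metaphor below are our own illustration, not the study's actual protocol.

    # Minimal sketch: asking a language model to interpret a novel color metaphor.
    # Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Someone described a meeting as 'a burgundy meeting.' "
        "What mood or quality do you think they meant, and why?"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not the one tested in the study
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

Responses to prompts like this tend to lean on whatever associations appear in written text, which is exactly the pattern the researchers describe.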

This gap is more than just academic; it has real-world safety implications. An AI assistant that misinterprets a color-coded warning label could lead to dangerous outcomes. The solution may involve developing multimodal systems like CLIP that can connect language with camera input or haptic feedback.
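As a rough illustration of what such grounding looks like in practice, here is a short sketch that uses the open-source CLIP model via Hugging Face's transformers library to score an image against a few color-coded descriptions. The image path and candidate labels are hypothetical; this is only a sketch of the general approach, not a production safety system.

    # Sketch: scoring an image against color-coded descriptions with a CLIP-style model.
    # Requires: pip install transformers torch pillow
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("warning_label.png")  # hypothetical photo of a color-coded label
    labels = [
        "a red danger warning",
        "a yellow caution notice",
        "a green all-clear indicator",
    ]

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)
    probs = outputs.logits_per_image.softmax(dim=-1)  # one probability per label

    for label, p in zip(labels, probs[0].tolist()):
        print(f"{label}: {p:.2f}")

Because such a model is trained on paired images and captions, its scores reflect a learned link between color words and pixels rather than text statistics alone.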

Grounding AI in the Senses

One of the most surprising findings was that adults born with red-green colorblindness still associated red with anger, a connection built through language rather than sight. This reinforces the idea that some color associations are nearly universal.

Researchers are already working on models that link words to pixels, training AI to identify a crimson apple or sketch a turquoise nebula. This process inches AI closer to how human toddlers learn, grounding vocabulary in the physical world. However, this approach comes with challenges, including the need for massive datasets and significant privacy concerns as cameras become more integrated into our lives.

As Lisa Aziz-Zadeh noted, “There’s still a difference between mimicking semantic patterns and the spectrum of human capacity for drawing upon embodied, hands-on experiences in our reasoning.”

The path forward involves not only technical innovation but also strong ethical guidelines. For now, if a chatbot gives you a strange color-based judgment, take it with a grain of salt. Until AI develops something akin to human senses, its understanding of our colorful world will remain a well-spoken but secondhand story.

The full study can be found in the journal Cognitive Science.
