
How AI Amplifies The Mandela Effect And False Memories

2025-07-30 · Sarah Wells · 5 minute read
Artificial Intelligence
Psychology
Misinformation

Darth Vader never said “Luke, I am your father.” The beloved children’s books were called the Berenstain Bears, not the Berenstein Bears. And the friendly cow on Laughing Cow cheese packaging has never sported a nose ring.

These are just a few famous examples of the Mandela effect—a strange phenomenon where a large group of people collectively misremembers a detail, event, or phrase. While these quirks of memory are mostly harmless, emerging technologies like generative artificial intelligence could create similar confusion on a much larger and more consequential scale. Experts in human memory and AI are now trying to understand what role AI will play in shaping our future memories.

What Is the Mandela Effect?

The Mandela effect describes a type of shared false memory where many people recall the exact same incorrect information about something. Wilma Bainbridge, an assistant professor of psychology at the University of Chicago who has studied the phenomenon, notes its uniqueness. “When we think of false memories, we usually think of them in an individual way,” she says. “What's really striking about the Mandela effect is that it is a form of false memory that occurs across people.”

The term was first used in 2009 by paranormal researcher Fiona Broome. She discovered that she and many others shared a vivid, but incorrect, memory of South African President Nelson Mandela dying in prison during the 1980s. In reality, Mandela was released from prison in 1990 and died in 2013.

With the help of social media, countless other examples have been uncovered, often related to millennial childhoods. Although these shared misrememberings are usually trivial, they can make us question the reliability of our own minds.

The Science Behind Our Malleable Memories

While the Mandela effect is a relatively new area of study, the science of false memories has been explored for decades. Aileen Oeberst, a professor of social psychology at the University of Potsdam, explains that our memories are highly fallible. A key reason is that the brain's hippocampus is used for both imagination and memory storage.

“We know from research that if people imagine something repeatedly, they tend to believe at some point that they actually experienced it,” Oeberst states. When you recall a memory, your brain reconstructs it rather than playing it back like a video. This reconstruction process leaves it open to errors. We might fill in gaps with details we expect to be there or color a memory with our emotions.

However, Bainbridge’s research shows that the Mandela effect doesn’t always follow these rules. In a 2022 study, she and co-author Deepasri Prasad found that these collective false memories can even form in opposition to common stereotypes.

Putting False Memories to the Test

To better understand how these collective errors happen, Bainbridge and Prasad studied people's memories of popular icons like Curious George, the Monopoly Man, and the Volkswagen logo. A classic example they explored is the Fruit of the Loom logo.

Many people falsely remember the logo—a cluster of fruit—emerging from a cornucopia. “The common false memory is that there's a giant cornucopia around the fruit,” Bainbridge says. “But we see fruit so often in our daily lives and when do we ever see a cornucopia?”

Interestingly, when presented with the real logo, the false cornucopia version, and another manipulated version with the fruit on a plate, participants consistently chose the cornucopia. The research suggests that what people misremember is remarkably consistent and that repetition strengthens these false memories. This very principle is what makes AI-generated misinformation so concerning.

How AI Is Poised to Supercharge False Memories

If the Berenstain Bears are a classic Mandela effect, the viral AI-generated image of Pope Francis in a giant puffer jacket is its modern counterpart. “The pope in a fluffy coat was one of the first [generative AI images] that went viral,” says Jen Golbeck, a professor at the University of Maryland who studies online trust. “There's probably people who saw that image and didn't realize that it was [AI] generated.”

Golbeck notes a perfect storm for misinformation: the rise of "fake news" sites, eroding institutional trust, and increasingly realistic AI content. Even when people know they should be suspicious, it's becoming harder to spot fakes. A major risk, according to Oeberst, is that our brains are wired to forget the source of information faster than the information itself. We might remember the image of the Pope in the jacket but forget it was an AI fake.

Protecting Our Reality in the Age of AI

Because this technology's influence is so new, researchers are just beginning to study its long-term effects on memory. They are eager to explore whether false AI images are more easily believed if they confirm our existing biases and whether AI can be used to intentionally reinforce false memories.

So what can we do now to protect our memories from being corrupted? Golbeck emphasizes the importance of community and verification.

“One important step is to really establish a cohort of people that you do trust,” she advises. “Like journalists, scientists, politicians, who you've really evaluated and are going to tell you correct information, even if it's not what you want to hear. I think that's critical.”
