
How AI Like ChatGPT Really Affects Your Brain

2025-07-14 · Julien Hernandez · 4 min read
Artificial Intelligence
Cognitive Science
Education

Recent discussions around artificial intelligence, particularly Large Language Models (LLMs) like OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude, are stirring up significant debate. While many of these conversations are legitimate, some verge on moral panic. A prime example is a recent column in Les Échos titled "How ChatGPT fries our brain," which attributes to a scientific study posted on arXiv conclusions that the paper simply does not draw.

The Misinformation Spark

According to Albert Moukheiber, a doctor of neuroscience and clinical psychologist, the confusion stems from a Twitter thread. "This misinformation about supposed brain atrophy, in my opinion, comes from a thread published by a tech entrepreneur who likely fell into a trap set by the researchers... by having an LLM summarize the article," he explains. Indeed, the scientists had embedded instructions for language models in their preprint specifically to prevent this kind of misinterpretation.

"Gaspard Koenig's column largely repeats what is said in this Twitter thread, which leads me to believe he did not read the study. For anyone familiar with neuroscience, it's obvious at a glance: it's simply impossible to observe brain atrophy with an electroencephalogram (EEG). That requires structural MRI measurements. We must stop making scientific articles say what they don't," Moukheiber laments.

What the Science Actually Reveals

Fortunately, several outlets, such as France Info and Le Monde, have covered the study with more rigor. So, what does the research actually tell us? The study compared the cognitive and cerebral impact of different tools during an essay-writing task under three conditions:

  1. Using only a Large Language Model (LLM) like ChatGPT.
  2. Searching for information via a classic web browser.
  3. Relying solely on one's own knowledge, with no technological aid.

The researchers measured both psychological parameters (content recall, satisfaction, sense of ownership) and brain activity via EEG. The study found that using these tools decreases cognitive load, the mental effort a task requires:

  • The use of a search engine leads to a decrease in internal cognitive load (related to working memory and personal elaboration).
  • The use of an LLM leads to a more global decrease in cognitive load.

These results suggest that LLMs like ChatGPT help us reallocate our mental resources. They lighten certain tasks, freeing up our minds for more complex executive functions such as planning and structuring arguments. This is an expected outcome, reflecting cognitive offloading similar to using a calculator or a GPS. In short, ChatGPT doesn't "fry" our brain; it reduces our cognitive load.

AI in Education: A Double-Edged Sword

Instead of fueling alarm, the study opens a more productive debate: what is the proper place for these tools in education? The researchers found that the order of exposure to technology matters. Participants who first used an LLM and then had to write without it showed weaker neural connectivity compared to those who started without technology and then moved to an LLM. This suggests that starting with a tool like ChatGPT could alter how we cognitively engage when later required to work unaided.

A complementary study explored our relationship with these models and how it shapes our critical thinking. It suggests that the more we trust LLMs, the less we tend to engage our own critical faculties.

Dr. Moukheiber explains, "Historically, we have learned to trust our tools: I don't doubt my calculator when it gives me a result. We apply this lack of epistemic vigilance to LLMs as well." The crucial difference is that an LLM is not infallible. It can hallucinate, make mistakes, and present false information convincingly. "The problem isn't so much the use of AIs but the trust relationship we build with them," Moukheiber notes. "On Twitter, you see many people using AI as a fact-checker... but who fact-checks the AI?"

Beyond the Individual User: The Bigger Picture

These are not just theoretical concerns; they have real-world implications for educators and students. Rather than banning the technology, Moukheiber argues for guidance. "It seems much wiser to regulate the use of these tools, explaining how they work and in which contexts they can be useful."

However, this educational approach must avoid two major pitfalls.

First, it must not worsen existing inequalities. As noted in the Pedagogy and AI special issue of Cahiers pédagogiques, these tools often benefit students who are already skilled, while discouraging those with less cultural capital.

Second, we must be wary of the tech industry's narrative. An investigation by Marie Dupin for France Culture shows how big tech companies are trying to influence the educational agenda. LLMs are not neutral tools; they are shaped by economic and political forces.

The ultimate question is not just how we individually use AI, but how we, as a society, decide to use it together.
