How AI Is Changing the Way We Speak
Have you noticed a subtle shift in how people talk? It's a phenomenon that has experts worried: we are beginning to adopt the linguistic quirks of AI. Phrases and patterns once unique to large language models (LLMs) like ChatGPT are now seeping into our daily conversations, quietly reshaping human communication.
Linguist Adam Aleksic, author of Algospeak: How Social Media Is Shaping the Future of Language, recently raised this concern in an article for The Washington Post. He warns that people are starting to sound more and more like the AI they interact with.
How Machines Shape Our Words
To understand this shift, it's important to know that chatbots like ChatGPT, Claude, or Gemini don't think in language the way we do. They convert words into numerical representations, points in a vast mathematical space, and then predict the most probable next word based on statistical patterns in enormous amounts of training data. This technical process, while powerful, creates distinct linguistic biases.
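To make that mechanism concrete, here is a deliberately tiny sketch in Python: a bigram model that, like an LLM in miniature, counts which word tends to follow which and then predicts the most probable continuation. The miniature corpus and the word choices are invented for illustration; real models use learned vector embeddings and far longer context, but the core idea, next-word prediction driven by frequency patterns in training data, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of words an LLM is trained on.
# Note that "delve" appears more often here than it would in ordinary speech,
# mirroring the skew described in the article.
corpus = (
    "let us delve into the data . "
    "we delve into the details . "
    "we look into the data . "
    "let us look at the results ."
).split()

# Count how often each word follows each preceding word: the simplest
# possible version of "predict the next most probable word".
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total

print(predict_next("delve"))  # -> ('into', 1.0)
print(predict_next("the"))    # -> ('data', 0.5)
```

The point of the sketch is that the model has no notion of style or preference; it simply reproduces the frequencies it was trained on, which is exactly why any skew in that training data shows up in its output.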
For example, researchers at the University of Florida discovered that ChatGPT uses the word "delve" with unusually high frequency. This is likely because the human evaluators who helped fine-tune the AI, many from countries where "delve" is more common, reinforced its usage.
From Academia to Everyday Chat
This AI-driven overuse is no longer confined to chatbot windows. "Overuse has spread into global culture," Aleksic points out. Since ChatGPT's launch in late 2022, the appearance of "delve" in academic publications has surged tenfold, as researchers using AI for writing assistance inadvertently absorb its style.
The trend isn't limited to academics. A recent study highlighted in Scientific American revealed that people are now using "delve" more often in casual, spontaneous conversation. This indicates that machine-generated language patterns are filtering into our everyday speech.
The Feedback Loop and Hidden Biases
Psycholinguistics teaches us that the more we are exposed to a word, the more likely we are to use it. When AI systems repeatedly serve up certain words or phrases, we begin to internalize them, and they become a natural part of our vocabulary. This creates a feedback loop: people start sounding more like machines, and in turn, future AI models are trained on text that is already influenced by previous AI.
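One way to see why this loop amplifies quirks rather than damping them is a toy simulation. All the numbers below are illustrative assumptions, not measurements from any study: each "generation", the model is trained on current human text and overuses a quirk word by some factor, and human usage then drifts part of the way toward the model's rate.

```python
# Toy model of the human-AI feedback loop described above.
# Every parameter here is an invented assumption for illustration.
usage = 0.01          # fraction of human text using a quirk word ("delve")
model_bias = 3.0      # hypothetical factor by which the model overuses it
adoption = 0.3        # how strongly human usage drifts toward AI output

for generation in range(1, 6):
    model_rate = min(1.0, usage * model_bias)  # model trained on current text
    usage += adoption * (model_rate - usage)   # humans drift toward the model
    print(f"gen {generation}: human usage = {usage:.3f}")
```

Even with these modest made-up numbers, usage grows by roughly 60 percent per generation, from 1 percent to over 10 percent in five rounds, because each model is trained on text its predecessors already nudged. That compounding is the amplification Aleksic describes.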
While a new word becoming popular isn't necessarily harmful, Aleksic warns that the implications go deeper than just vocabulary. "AI models are not neutral," he states. Along with linguistic quirks, they also perpetuate gender, racial, and political biases from their training data. These biases are harder to spot than an overused word but are being absorbed into our culture all the same, blurring the line between human and artificial communication.