
Are We All Starting to Talk Like AI?

2025-09-10 · Sydney Lake · 3 minute read
Artificial Intelligence
Communication
Technology

A strange new phenomenon is seeping into our online conversations, making them feel increasingly artificial. The observation comes from an unlikely source: OpenAI CEO Sam Altman, who noted that the line between human and AI-generated text is becoming unnervingly blurry as people adopt the linguistic quirks of large language models.

Altman's 'Strangest Experience' with AI-Speak

It started with what Sam Altman described as the "strangest experience" while reading a Reddit thread about Codex, OpenAI's tool for developers. The thread was so overwhelmingly positive that his initial reaction was to dismiss it as fake or bot-generated, even though he knew the product's growth was genuinely strong.

In a post on X, Altman listed several factors contributing to this sense of artificiality. He pointed out that "real people have picked up quirks of LLM-speak," and that this is compounded by the echo-chamber effects of online communities, extreme hype cycles, and social media platforms optimizing for engagement. His sensitivity is also heightened by the fact that other companies have used astroturfing tactics. The result, he concluded, is that "AI Twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago."

Scientific Research Backs Up the Anecdote

Altman's observation isn't just a gut feeling; it's backed by emerging research. Hiromu Yakura, a postdoctoral researcher at the Max Planck Institute for Human Development, noticed his own vocabulary changing about a year after ChatGPT's debut. This led to a formal study that analyzed millions of texts, videos, and podcasts.

The researchers found a significant increase in classic ChatGPT words—such as "delve," "examine," and "explore"—in human-generated content following the AI's release. Levin Brinkmann, a co-author of the study, explained to Scientific American that "the patterns that are stored in AI technology seem to be transmitting back to the human mind."
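The kind of shift the researchers measured can be approximated with a simple before-and-after frequency comparison. A minimal sketch of the idea — the marker-word list comes from the article, but the toy corpora and the `marker_rate` helper are illustrative placeholders, not the study's actual data or method:

```python
from collections import Counter
import re

# Words the study flagged as characteristic of ChatGPT's style
LLM_MARKERS = {"delve", "examine", "explore"}

def marker_rate(text: str) -> float:
    """Frequency of marker words per 1,000 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in LLM_MARKERS)
    return 1000 * hits / len(tokens)

# Toy sentences standing in for pre- and post-ChatGPT human writing
pre_2022 = "We look at the data and discuss what it shows."
post_2023 = "We delve into the data and explore what it shows."

print(marker_rate(pre_2022))   # no marker words
print(marker_rate(post_2023))  # marker words appear
```

The real study worked over millions of texts, videos, and podcasts rather than two sentences, but the underlying signal — marker-word frequency rising after ChatGPT's release — is the same.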

Further evidence comes from a UC Berkeley study, which found that ChatGPT's responses tend to reinforce dialect discrimination by favoring Standard American English. This preference for a single dialect helps create a standardized "AI-speak" that can influence users regardless of their native dialect.

The Human Response: Resisting and Re-engineering

While some are unconsciously adopting AI's tone, others are actively working to control it. In a LinkedIn article, nerve surgeon Vaikunthan Rajaratnam described how he has been reverse-engineering ChatGPT to sound more like him. He acknowledges the pitfalls, such as "diminishing authenticity" and the loss of personal voice, but has found a way to mitigate them.

"Through carefully crafted prompts and iterative refinement, I’ve tuned it to reflect my tone, my vocabulary, my way of thinking," Rajaratnam wrote. By doing so, the AI's output now sounds "eerily familiar… because it sounds like me."

AI's Fundamental Weakness: A Human Advantage

As the conversation around AI's influence grows, other leaders are pointing out its inherent limitations. Entrepreneur Mark Cuban offered some solace to the "Anti AI crowd" in a recent post on Bluesky, highlighting a key difference that will always favor humans.

"The greatest weakness of AI is its inability to say ‘I don’t know’," Cuban wrote. He argued that this lack of humility is a critical flaw. "Our ability to admit what we don’t know will always give humans an advantage," he added, reminding us that true intelligence isn't just about having answers, but also about recognizing the limits of one's knowledge.
