How AI Is Changing The Language Of Science
The New Normal: AI's Influence on Scientific Language
A groundbreaking new paper reveals that ChatGPT has had an "unprecedented" impact on scientific writing, leading to a noticeable increase in what researchers are calling "flowery" language. This shift in academic tone was uncovered by a joint effort from researchers at the University of Tübingen and Northwestern University, who sought to measure the true extent of AI's adoption in the research community.
Unpacking the Evidence
To quantify the change, the team analyzed a massive dataset of more than 15 million biomedical abstracts from the PubMed library, comparing the language used before and after ChatGPT's launch in November 2022. The results were stark: stylistic words such as “delves”, “underscores”, and “showcasing” appeared far more frequently than in previous years.
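The kind of before-and-after comparison described above can be sketched in a few lines. This is an illustrative toy, not the study's actual pipeline: the marker words and the two tiny corpora below are stand-ins chosen for the example, and the real analysis derived its word list from the data itself.

```python
from collections import Counter
import re

# Illustrative marker words drawn from the article; the study's list was data-derived.
MARKER_WORDS = {"delves", "underscores", "showcasing"}

def word_frequencies(abstracts):
    """Return each marker word's rate per 10,000 words across a corpus of abstracts."""
    counts = Counter()
    total = 0
    for text in abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        total += len(tokens)
        counts.update(t for t in tokens if t in MARKER_WORDS)
    return {w: counts[w] / total * 10_000 for w in MARKER_WORDS} if total else {}

# Toy corpora standing in for pre- and post-ChatGPT abstracts.
pre = ["We measured respiratory outcomes in 40 patients over six months."]
post = ["This study delves into outcomes, showcasing a trend that underscores risk."]

before, after = word_frequencies(pre), word_frequencies(post)
for w in sorted(MARKER_WORDS):
    print(f"{w}: {before[w]:.1f} -> {after[w]:.1f} per 10k words")
```

Scaled up to millions of abstracts and grouped by publication year, the same per-word rates are what make a sudden post-2022 jump in words like “delves” visible.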
This change is distinct from past linguistic trends. During the Covid-19 pandemic, for example, the vocabulary shift was dominated by nouns directly related to the research, like “respiratory” or “remdesivir”. The current trend, however, is purely stylistic. A 2023 study highlighted in the analysis exemplifies this ornate style: “By meticulously delving into the intricate web connecting [...] and [...], this comprehensive chapter takes a deep dive into their involvement as significant risk factors for [...].”
The findings, published in Science Advances, suggest these changes are widespread: at least 13.5% of abstracts published in 2023, roughly 200,000 papers, were likely polished or partly written using LLMs.
Broader Implications and Future Risks
“We show that LLMs have had an unprecedented impact on scientific writing in biomedical research, surpassing the effect of major world events such as the Covid pandemic,” stated Ágnes Horvát, a study co-author and professor at Northwestern’s School of Communication.
The study, which the authors explicitly noted did not use LLMs for its own writing or editing, points to a more complex future for academic publishing. While many researchers use tools like ChatGPT to improve grammar and readability, this convenience comes with significant risks.
The paper warns: “LLMs are infamous for making up references, providing inaccurate summaries, and making false claims that sound authoritative and convincing.” An author might correct an AI's factual error about their own work, but it is much harder to spot mistakes in an AI-generated literature review.
A key concern is the risk of homogenisation. If a large segment of the scientific community relies on the same AI tools, it could lead to less diverse and novel writing styles, potentially degrading the overall quality of scientific communication. In light of these findings, the researchers strongly advocate for a reassessment of current policies and regulations surrounding the use of LLMs in science.