
Academia's Silent Crisis Over AI and PhD Plagiarism

2025-08-27 · E.M. Wolkovich · 5 minute read

Academia · Artificial Intelligence · Plagiarism

I recently walked out of my university's graduate studies building into the bright summer sun in a state of mild horror. It struck me that nearly three years have passed since ChatGPT's arrival, and that as instructors and academic leaders, we have squandered this time.

I admit, I have personally failed. I never initiated a structured discussion about generative AI with the trainees in my lab, and I never formed my own clear guidelines. In doing so, I have let my students down for years.

A Wake-Up Call in a PhD Defense

This realization was triggered by my first time chairing a PhD defense where the thesis was partly written with generative AI. I only knew because the external examiner’s report flagged it. This led me to a paragraph in the thesis preface I would have otherwise missed. It described how this “original work” was “improved” in its clarity and structure by several AI models.

As the chair, my role is to ensure the examiner's concerns are addressed and university policies are followed. I found myself repeatedly reading the policies on AI in doctoral theses. Students are told they can generally use AI for editorial tasks like grammar and flow. However, using it "to get started with drafting" requires approval from the supervisor. Simply put, direct AI outputs cannot be included without citation, as they would be considered plagiarism. This applies at any stage, not just the final draft.

Defining a New Kind of Plagiarism

The idea of unquoted ChatGPT text as plagiarism made perfect sense. Yet it was the first time I had seen it explicitly stated. We are trained to guard against plagiarism, the use of someone else's words. But this is different; it's not another person's voice, it's something else's. It is an amalgamation of countless voices, including our own from our academic writing, processed by an algorithm.

A colleague’s comment from the spring echoed in my mind: “It sounds like me.” And I suspect it does. The line between a summary and a direct rip-off of unquoted text from ChatGPT on a niche topic is incredibly thin, if it exists at all.

For years, students could have been plagiarizing from ChatGPT in their theses and academic work, and we have been silent. I had never heard a colleague equate unquoted ChatGPT use with plagiarism, but that is what it is. And I don't think we ever told our students.

A Sector in Denial

I started asking around, and the responses have been astounding. Some colleagues find it acceptable to paraphrase ChatGPT skillfully. Others, responding to a recent blog post on AI use, argue that prohibiting it is arbitrary. There's also a recurring sentiment that I'm missing the point—that AI is the key to reducing the burden of teaching writing. One colleague whispered that it was “a relief to not have to edit my students’ writing,” as if describing a new secret drug.

Is everyone using this magic drug of work-free writing? Perhaps, but they are missing the joy of science. As statistician Andrew Gelman wrote, writing is how he explores and organizes his thoughts. It’s a form of reasoning. If we don't train our students in this fundamental skill, what are we even doing?

Taking a Stand on AI in Academic Writing

It seems we have focused our worry on undergraduates using AI while simultaneously promoting seminars on how to "integrate AI" into our own work. We never explained to our future researchers what they lose by using generative AI for writing, or that it constitutes academic misconduct. At least, it does to me now.

After that defense, I established new guidelines for my lab. I enjoy writing and teaching it, so I want my trainees to develop their own skills without using generative AI for their text. I'm extending this rule further: I will only chair defenses where the student confirms they did not use generative AI in their writing, a position I recently explained.

Charting a Path Forward

I understand this choice could disadvantage non-native English speakers, and I want to avoid that. We need a serious conversation about how AI can level the playing field. I would prefer a non-native speaker to write a draft in their native language and then use AI for translation. They could then review and tweak it to ensure their original meaning is preserved, submitting the full workflow for transparency. This is a conversation worth having.

A conversation I don't want to have is with a graduate student who doesn't understand why they must disclose that their chapter drafts were written with ChatGPT. When I pointed this out to one student, they looked at me as if I were crazy. Why would they need to disclose that?

The simple reason is that their university could deem it academic misconduct. But no one—until me, at their PhD defense—had ever told them. And that is the unacceptable position that we, as an academic community, have silently allowed to happen.

Elizabeth M. Wolkovich is an associate professor at the University of British Columbia.
