
Keeping Journalism Human in the Age of AI

2025-09-25 · Madeleine Kaptein · 5 minute read
Artificial Intelligence
Journalism
Student Media

An Inauthentic Introduction to AI

My first encounter with ChatGPT was in the spring of 2023, during my sophomore year. In a Writing and Rhetoric class with Professor Hector Vila, we experimented with the novel software. We fed our essays into it, asking for an "improved" version, and then compared our original work to the robot's output. I watched as my personal essay about my family in the Netherlands was stripped of its authenticity, transformed into something generic. I quickly closed the tab, confident that this technology could never truly replicate the nuance of good human writing.

From Ignoring AI to Confronting Its Reality

That same semester, I was on the executive team at The Campus when we established our newsroom's original ChatGPT policy. The rules were clear: writers could use AI for research and reporting tasks, but only if they notified their editors. Using it to generate or edit the actual text of an article was strictly forbidden. We still uphold these standards, but two years later, it's clear we need to acknowledge their limitations and be more transparent about how we're navigating this new landscape.

For a long time, I tried to ignore generative AI. I brushed off concerns about my future career in journalism and turned a blind eye to its growing use among my peers. The thought that hours of my hard work could be replicated in seconds was terrifying. But last fall, when I became a managing editor, I could no longer look away. I was now responsible for the quality, accuracy, and humanness of about 20 articles every week.

Is a Student Newsroom Safe From AI?

I used to believe The Campus was relatively insulated from the pitfalls of generative AI—its factual errors, unoriginal storytelling, and potential biases. We are a small, community-focused publication. It would be difficult for an AI to gather the specific information needed for our articles, many of which rely on direct quotes from interviews. Besides, our contributors are volunteers. What would be their motivation to have a machine do the work for them?

I was also once confident in my ability to spot AI-generated text. I'm not so sure anymore. The technology has become so sophisticated and widely adopted that detection is a real challenge, and that weighs on me.

Since our policy was set, not a single writer or editor has informed us of their AI use. Yet a recent study by Middlebury professors found that over 80 percent of students use AI for their classwork, which makes me skeptical that our newsroom has remained untouched. We've just welcomed the class of 2029, a generation that has had access to these tools since high school. They are more adept at using them than I am, and while I'm not accusing anyone of anything, the likelihood that AI has found its way into our work is undeniably higher.

Our Human-Centered Editorial Process

Fortunately, our editorial process has built-in safeguards that limit AI's potential influence. We work in Google Docs, which allows us to see a writer's version history. Our editorial teams meet in person, working through articles in "Suggestion Mode." These time-stamped comments and suggestions ensure that changes are made thoughtfully, not just pasted from another source. The executive team then reviews these suggestions in "Editing Mode."

Our Opinion section has an even more interactive process. We require writers to personally accept or reject every edit and respond to all comments, fostering a necessary dialogue to finalize their piece. Anyone who has written for us knows our editing is thorough; it’s a core part of how we produce high-quality work and train our contributors.

A Commitment to Transparency and Lingering Concerns

While our policy against using AI for writing remains firm, I'll admit the thought of adding an editor's note acknowledging an AI contribution to a story pains me. After trying to ignore its existence for so long, it would feel like a surrender. But if transparency requires it, we will not hesitate.

I also worry about the hidden influence of AI. Could our writers be unknowingly citing information that was AI-generated? Could a source answer an email interview with a response drafted by a machine? To mitigate this, we strongly discourage email interviews, using them only as a last resort for critical, time-sensitive information. We always urge our sources to make time for a real conversation.

Why Human-Driven Journalism Still Matters

I know these measures have their limits and that the changes AI will bring are largely out of our control. I also know that AI can be a valuable tool when used correctly.

But it’s important for students to remember that The Campus is a tool, too, one for practicing and honing their research and writing skills. And for our readers, our purpose is to be an authentic, reliable resource for the Middlebury community, built from hours of thoughtful, human work each week.


Madeleine Kaptein

Madeleine Kaptein '25.5 (she/her) is the Editor in Chief.

Madeleine previously served as a managing editor, local editor, staff writer and copy editor. She is a Comparative Literature major with a focus on German and English literatures and was a culture journalism intern at Seven Days for the summer of 2025.
