AI on Campus: The New Professor-Student Divide
A New Conflict on Campus: Professors Under Fire for AI Use
The debate over artificial intelligence in higher education has taken a new turn. While faculty were once primarily concerned with students using AI to cheat, the tables have turned: students are now expressing frustration over their professors' increasing reliance on the same technology. On platforms like Rate My Professors, complaints are mounting about instructors overusing AI in ways that diminish the educational experience.
With average yearly tuition at a four-year institution in the U.S. exceeding $17,000, students argue that paying high fees for an education delivered by AI instead of human experts devalues their investment. The issue isn't just about value but also fairness: students face penalties for AI use, while professors often operate without scrutiny. This tension culminated in one Northeastern University student filing a formal complaint and requesting a tuition refund after discovering a professor was using AI to generate class notes.
College professors acknowledge that AI use for class preparation and grading has become "pervasive." The core problem, they suggest, isn't the technology itself but the lack of transparency about how and why it's being used.
The Controversy of Automated Grading
One of the most contentious applications of AI is in grading student work. Rob Anthony, a faculty member at Hult International Business School, notes that automated grading is becoming widespread. "Nobody really likes to grade. There’s a lot of it. It takes a long time. You’re not rewarded for it," he explains. This sentiment, coupled with minimal oversight, pushes faculty to find faster methods.
However, this efficiency comes at a cost. Anthony worries about a future of homogenized feedback where AI-driven grading provides generic, non-tailored comments to every student. "I’m seeing a lot of automated grading where every student is essentially getting the same feedback," he warns.
An anonymous teaching assistant, who is also a full-time student, shared their experience of using ChatGPT to grade nearly 90 papers during a period of overwhelming workload. Although they reviewed the AI's output before returning grades, the process still felt morally questionable to them.
“When I’m feeling overworked and underslept…I’m just going to use artificial intelligence grading so I don’t read through 90 papers,” they admitted. “But after the fact, I did feel a little bad about it…it still had this sort of icky feeling.”
The TA felt particularly uneasy about an algorithm making decisions that could affect a student's future, especially without fully understanding how the AI reached its conclusions.
Are Bots Talking to Bots?
Some professors justify their AI use as a direct response to students doing the same. Rob Anthony describes the sentiment: "The voice that’s going through your head is a faculty member that says: ‘If they’re using it to write it, I’m not going to waste my time reading.’ I’ve seen a lot of just bots talking to bots."
Recent data supports the idea of widespread student adoption. A 2025 survey from the U.K.’s Higher Education Policy Institute found that 92% of students now use AI in some capacity, a significant jump from 66% in 2024. While many colleges now encourage appropriate AI use, students often seem unclear on the boundaries.
The anonymous TA estimated that 20–30% of their students blatantly used AI for papers. Ironically, when they tested the system by submitting an obviously AI-written paper to ChatGPT for grading, the bot graded it "really, really well."
The Case for Transparency and Ethical AI Use
For many educators, the solution lies in transparency. Ron Martinez, an assistant professor of English at the Federal University of Paraná, argues for open communication. "I think it’s really important for professors to have an honest conversation with students at the very beginning," he says. He tells his students exactly how he uses AI, such as for generating slide images, while assuring them that the core ideas are his own.
Martinez also uses AI as a "double marker" to check his own grading. By feeding his grading criteria into a large language model, he gets a second opinion on each paper. This process has occasionally flagged work he had graded too low, forcing him to confront his own unconscious biases. "I noticed that one student who never talks about their ideas in class…I hadn't given the student their due credit, simply because I was biased," he reflects. This AI-assisted reflection led him to adjust several grades, usually in the students' favor.
A Shift Towards Cautious Optimism
Despite the challenges, some educators are beginning to see AI's potential benefits. Rob Anthony, who initially feared AI would "ruin education," now believes it is, on balance, "helping more than hurting." He sees students using it not just to save time but also to better express themselves and develop more interesting ideas.
"There’s still a temptation [to cheat]…but I think these students might realize that they really need the skills we’re teaching for later life," he concludes.