Professors' AI Use Sparks Debate Amid Academic Crisis
Recent discussions have brought AI's role in education to the forefront. A New York Magazine story titled "Everyone Is Cheating Their Way Through College" detailed how undergraduates are misusing ChatGPT. Adding to this, The New York Times published its own shocking revelation about AI malfeasance in the classroom: it turns out that professors are abusing generative AI chatbots too.
The Student's Dilemma: Tuition Costs and AI Content
The Times piece centered on a complaint from a senior at Northeastern University. This student, whose initiative I respect, found that one of her instructors was using ChatGPT to supplement course materials. This raised two valid concerns for her. First, the professor's syllabus explicitly prohibited "the unauthorized use of artificial intelligence or chatbots." Second, with tuition so high, why should a student pay around $8,000 for a college class that is partly generated by a program accessible to anyone, scholar or not?
Higher Education's Crisis of Confidence
These revelations about professors reportedly engaging in questionable practices are emerging as faith in American higher education plummets to its lowest point in decades. They also coincide with the Trump administration's unprecedented attempts to penalize ideologically noncompliant schools by withholding federal funds. Narratives about professors using AI to create lectures or, distressingly, to grade students' work certainly do not help improve public perception.
Unpacking the Pressures on Professors Today
However, it is easy to criticize professors without understanding the complexities of their current labor. Interestingly, AI programs themselves seem fond of words like "intricacies" and often use em dashes. So let's delve (another AI favorite) into some of these complexities. (I've omitted the telltale invisible spaces often found in student essays, which can indicate to professors that ChatGPT, not Suzie Sophomore, completed the assignment.)
The Eroding Foundation: Tenure and Academic Labor
The key complexity to consider is that the American professoriate, as traditionally known, is under threat. The institution of tenure, which guarantees lifetime employment for scholars in return for proven research and, ideally, teaching accomplishments, has weakened significantly. In 1976, 56% of professors nationally were on the tenure track. That figure is now down to about 24% and continues to fall. Skeptics like me, the type of traditional academics who instinctively resist AI in the classroom, predict that tenure will largely disappear from most schools within a few decades.
The decline of tenure is directly related to what is termed "the casualization of academic labor." The vast majority of professors in the United States have become overworked, stressed components of an intellectual gig economy. They teach increasingly larger classes for progressively lower wages, with diminishing job security and no protections for academic freedom. It is under these challenging circumstances that I suggest we offer some understanding to academics; if they use AI for tasks like grading papers or preparing slide decks, it is often because their classes are overcrowded and their pay is insufficient.
Administrators: Caught Between AI's Promise and Peril
Now that professors have been publicly accused of misconduct, the question arises: What will college administrators do? These are the same administrators responsible for the aforementioned casualization of the scholarly workforce (it is no surprise that professors' trust in their leadership is also at an all-time low).
These administrators also display a notably inconsistent approach to AI innovations. On one hand, they are captivated by AI's "efficiency propositions." These innovations promise cost reductions, allowing them to dismiss many human employees whose skills are supposedly becoming digitally replaceable (such as librarians, grant writers, and curriculum developers). Then there are the "synergies." Universities are widely forming lucrative partnerships with AI companies and generally embracing this impressive new technology that reduces redundancy.
On the other hand, institutions of higher education are committed to long-standing academic integrity protocols. This includes the traditional idea that students (and professors) should learn to think for themselves. Some argue that teaching young people this skill is the fundamental democracy-enhancing purpose of higher education. Without it, college might seem to be just about fraternities and football.
The Unchanging Core: Fostering Critical Thought
As a professor, I have never used generative AI chatbots. This is because I believe that if you teach in the humanities or softer social sciences, your goal is to help students develop into critical, analytical, and ultimately thoughtful individuals. To achieve this, they must learn how to think. The thinking process, with all its framing, failing, flailing, reflecting, and encountering dead ends, cannot be outsourced to some impersonal research assistant like Claude.
This is why my colleagues' use of these programs in the classroom is so concerning. Nearly every scholar working today was educated before these programs existed. We had the privilege of developing functional analytical minds because our teachers compelled us to do so. Why would we want to deny our students the chance to do the same by letting them delegate their thought processes to lines of uninspired code?
A Difficult Balance: AI, Ethics, and Academic Reality
By the same token, nearly every scholar working today was also born at a time when it was unimaginable that a technology created by a few, and enriching a few, could eliminate entire professional guilds within years. So, while I personally wish professors would not yield to the superficial appeal of AI, I cannot entirely blame them, at least not those who have been disadvantaged by an economy that so greatly undervalues their unique skills, if not their very humanity.