
Harvard's AI Dilemma: Embrace or Resist?

2025-09-20 · 5 minute read
AI in Education
Harvard
Academic Integrity

Three years after the arrival of ChatGPT, Harvard University's campus is a landscape of educational experiments. As students returned this fall, they encountered a stark contrast in teaching philosophies. Some classrooms have reverted to analog methods with in-person seated exams and strict no-laptop policies, while others are actively integrating artificial intelligence into the very fabric of their curriculum.

This shift comes as no surprise. The rise of generative AI has presented a significant challenge to academic integrity across the nation. Acknowledging this new reality, the College's new dean, David J. Deming, addressed incoming freshmen by emphasizing the need to prepare for a world revolutionized by AI, noting that young, educated people are already its heaviest users.

Statistics from campus reflect this ubiquity. A spring 2025 survey by The Crimson revealed that nearly 80 percent of Faculty of Arts and Sciences (FAS) respondents believe they have received coursework produced with AI. However, confidence in detecting it remains low, with only 14 percent feeling very confident in their ability to distinguish AI-generated submissions. This uncertainty is compounded by the unreliability of AI detection software and the increasing sophistication of large language models.

In response, Harvard professors are fundamentally reimagining their teaching methods. While some encourage students to use AI for data analysis and exam preparation, others are designing AI-proof assignments. Regardless of their stance, there is a consensus that there is no going back. As Dean of Undergraduate Education Amanda Claybaugh notes, “It doesn’t make sense to prohibit AI and then assign take home essays.” Claybaugh stresses the faculty's responsibility to teach students how to evaluate AI's output, which requires understanding the work itself.

Since ChatGPT's release in late 2022, Harvard's administration has opted for a flexible, decentralized approach rather than a rigid, university-wide mandate. Then-College Dean Rakesh Khurana framed AI use as a choice for students, leaving pedagogical decisions up to faculty.

The university's strategy has been described as an “experiment in progress” by Christopher W. Stubbs, an adviser on artificial intelligence for the FAS. The rapid evolution of the technology makes a blanket policy impractical. While the university's Honor Code considers submitting AI-generated work without attribution a violation, the initial guidelines on AI from summer 2023 offered little specific direction.

More direct guidance came from a set of FAS templates that provided instructors with three draft policies: “maximally restrictive,” “fully encouraging,” and a middle-ground option. These templates have become widespread. An analysis of the 20 most popular courses shows that while none mentioned AI in fall 2022, all but two had AI policies by fall 2025. Most courses permit AI use to some degree, such as Stat 100, which encourages it for gaining insights and coding help. However, a significant portion still bans or discourages its use on certain assignments.

AI in Action: Innovative Classroom Integration

Many professors are not just permitting AI, but actively incorporating it into their teaching. Some have introduced custom chatbots to provide students with tailored support. Following the lead of Computer Science 50, which launched a course-specific AI chatbot in 2023, other popular courses like “Intermediate Microeconomics” and “Foundational Chemistry and Biology” have adopted similar tools to help students ask questions freely.

Other instructors have designed assignments that require students to engage directly with AI. Peter K. Bol, a professor of East Asian Languages and Civilizations, has students use AI to translate ancient Chinese texts and then discuss the experience in class. In fields like statistics, some instructors view AI proficiency as an essential research skill. Statistics lecturer James G. Xenakis states that AI has accelerated his own research more than any other technology and worries that students are not learning to use it effectively enough.

The Bok Center for Teaching and Learning has been instrumental in this transition, helping faculty develop AI tools, create new assignments, and run workshops. According to Madeleine Woods from the Bok Center, there has been a shift from requests for all-purpose tutor chatbots to more specialized AI applications, like transcribing oral exams or debugging code.

The Push for AI Resilience: Addressing Faculty Concerns

Despite the push for integration, significant skepticism about AI's role in the classroom persists. The Bok Center also fields requests from faculty on how to make their assignments “AI-resilient.” A primary concern is academic dishonesty. In The Crimson's senior survey, 30 percent of respondents admitted to having submitted AI-generated work as their own.

Beyond cheating, some educators worry that over-reliance on AI undermines the core process of learning. English professor Deidre S. Lynch compared giving AI a central role in humanities education to “Frankenstein’s monster,” calling it a denial of what makes us human. Physics professor Matthew D. Schwartz felt forced to switch from take-home finals to in-person exams, a format he believes tests memorization and speed over the deeper skills needed for research.

History professor Jesse E. Hoffnung-Garskof suggests that students turn to AI not out of a belief in its superiority, but because they are overwhelmed by competing demands—a sentiment echoed by other faculty concerned that extracurriculars are prioritized over academics. He believes most Harvard students are too committed to their own excellence to fully trust a machine with their work.
