Educators Adapt As Student AI Use Surges in Classrooms
AI is no longer a future prospect in education but a daily reality, and its arrival has raised concerns among teachers about student engagement and academic honesty. As students increasingly lean on AI tools to complete assignments, educators are rethinking their strategies: some are using AI itself to design assignments that are harder to game, while others are grappling with policy changes to address the shift. Here's how three educators are navigating the change.
Three teachers spoke to BI about how they're adjusting classroom policy in light of AI. Photo: skynesher/Getty Images
The AI impact in modern classrooms
AI has made its way into the classroom, and with it, teachers' concerns about student apathy have grown. Gary Ward, a physics, economics, and business teacher at Brookes Westshore High School in Victoria, British Columbia, shared his observations with Business Insider. "Some of the ones that I see using it all the time — I think if it wasn't there, they would just sit there looking blindly into space," Ward remarked.
Since ChatGPT's release in 2022 and the widespread adoption of similar generative AI tools, fears about academic plagiarism have significantly increased. Educators have had to react swiftly, adapting their curricula to either embrace or counter a technology that could be both a teaching aid and a "homework cheating machine."
Ward, a teacher with about 30 years of experience, said student AI usage grew incrementally until this year, when it simply "exploded." He stated, "Literally, all students are using it this year." To prevent students from using AI for all assignments, Ward has started using it defensively, asking ChatGPT to help him create work that would be harder for students to complete with an LLM. "I just started it with a conversation in ChatGPT, and sort of iteratively went through — explained in my prompt what was happening, and said, 'This is what I want,'" Ward explained. "It told me, 'These are things you can do to make it harder for students to be able to just answer with some large language model.' And typically, it's making it more personalized."
At Manchester Metropolitan University in Manchester, England, Richard Griffin, a lecturer in the business faculty specializing in project management and portfolio development, described a similar approach.
Photo taken on September 29, 2020, shows part of the Manchester Metropolitan University campus in Manchester, Britain. Photo: Xinhua/Jon Super via Getty Images
The university has developed an in-house tool: educators can input their assignments, and the system assesses how easy each one would be to cheat on with AI and recommends ways to make it more AI-resistant. "The IT department have done their own tool which assesses how AI safe it is, or AI savvy it is, and will give you a bit of a grade to say, 'Well, really, you will need to adjust some of this,'" Griffin said. "It doesn't give us specific information, but it does give you a bit of a scroll to say, No, this isn't very safe. You need to add some deeper challenges here, or you need to make this more personal, etcetera."
A shift back toward analog assignments
According to Ward, the most effective defense against AI so far involves turning back the clock. "I've tried to sort of shift back toward some handwritten assignments, instead of having them do it on the computer," Ward said. "That way, I can tell this is how they're writing. I know it's theirs."
Even though Ward cannot use analog methods for all coursework, handwritten work helps him establish a baseline for each student's writing, making it easier to spot synthetically produced work later. "Now, yeah, it's expensive and it takes a lot of time to grade them, but I think that needs to continue," he added.
Paul Shockley, an assistant professor at Stephen F. Austin State University in Nacogdoches, Texas, believes that the goal of a classroom is to empower students with foundational skills like research, deep thought, and comprehension. He argues that by substituting AI for traditional study processes, many students are failing to meet these benchmarks. "Many students today are using AI as a way of fulfilling their assignments, and it is creating a loss of critical thinking, a loss of originality, a loss of discernment, a loss of personal reflection, and so on," said Shockley, who teaches philosophy and religious studies.
Shockley was an early AI adopter, experimenting with LLMs shortly after ChatGPT's launch. He anticipates that the technology will not only persist but also improve exponentially, leading him to believe it was crucial to help students build a healthy relationship with it. "My mindset on the topic, since AI has emerged, has shifted, moved like a pendulum from fascination to fear, given how it may be used," Shockley stated. "But my fascination with AI is rooted in what it may be able to ameliorate... I decided that I would be open to using AI and my pedagogy in a Socratic approach."
He initially designed an assignment for his undergraduate courses encouraging students to dialogue with an LLM and analyze its output, hoping they would learn to ask smarter questions and develop a healthy skepticism of AI. However, he has since discontinued this assignment and no longer permits any AI use in lower-level classes due to widespread cheating. One instance involved a student submitting a paper citing a hallucinated quote from a book Shockley co-authored. "The use of AI in the classroom for me as a philosopher is limited to inquiry among senior-level students doing research where they have maturity," Shockley said. "They have the chance to grow and so, and become equipped with critical thinking skills for themselves."
Some assignments are naturally more AI-resistant
Although Shockley still assigns research papers, he also uses "experiential" assignments whenever feasible. In undergraduate environmental ethics and religious studies courses, for instance, he has sent students to visit local nature spots or religious sites. The aim is to "hook" students — connecting them personally to the subject matter before they engage with it in traditional ways — which he hopes makes them less likely to turn to AI. He has also started adding reflective components to assignments that could otherwise be easily gamed.
"What is it that students want? What is it that people want to experience these days?" he pondered. "What is it that young people want to experience these days, right? They want to have phenomenal experiences, you know, transformative experiences, cool experiences, and so, how can I harmonize those things together?"
Certain disciplines are inherently more insulated against AI cheating because they lend themselves better to project-based assignments. In Richard Griffin's case, many of his business courses require actual interaction with a real-world client. "We're challenging them with quite difficult tasks out in the real world to deliver projects for clients, you know, and there's a huge variety of expectation and understanding, both from the clients' perspective, but also from our sort of undergrads as well," Griffin explained.
The evolving landscape of student assessment
Much like Shockley, Griffin is focusing on incorporating reflection into his curriculum, hoping that layered steps will encourage deeper thinking. "I'm using projects and portfolios, so people are out in the real world. We're also relying very much on reflective aspects of that," Griffin said. "So they'll deliver a project with a client. If you're going to use AI and tell the client some really tough information, they're not going to be particularly happy. And then that reflective element means that they really have to delve deeper and give us some honesty, which wouldn't normally be there in normal sort of assignments or assessments."
A shift toward oral assessments and discussion-based assignments is also likely as AI develops further, Griffin said. "So assessments, I don't know whether I'd say they're going to become harder," he stated. "They'll certainly become more focused. I think we need to accept that. We maybe can't teach as broad a topic as we'd like to, but we can certainly teach criticality."