Can AI Strengthen Human Connections in Academia?
A collaborative investigation by Xiao Dong and Betty Anne Younker of Western University examines the surprising ways ChatGPT can reshape academic relationships. This paper explores the use of AI in editing a doctoral dissertation through the philosophical lens of Martin Buber’s I-Thou relation. Constructed as a dialogue between a supervisor and supervisee, the research delves into:
- The practical use of ChatGPT for dissertation editing and the role of reflexivity.
- The resulting shift in the supervisor-supervisee dynamic.
- The ethical impact on the student's voice, authorship, and agency.
- How these changes challenge and affirm pedagogical values in music education.
This inquiry offers critical insights into the application of AI and calls for more honest conversations about using tools like ChatGPT in academia.
The Rise of AI in Academic Writing
The development of Large Language Models (LLMs) like ChatGPT has sparked intense discussion in academic circles about their benefits, threats, and ethical implications. When used to assist in writing doctoral dissertations, these tools have the potential to fundamentally alter the traditional roles of a supervisor and supervisee. To explore these dynamics, this study uses Martin Buber’s I-Thou framework, which emphasizes a fluid, dialogical relationship where learning and guiding are shared roles. This contrasts with traditional models where teachers are simply transmitters of knowledge.
This philosophical lens guided the inquiry with several key questions:
- How was ChatGPT used for editing, and how did reflexivity shape the process?
- How do the roles of supervisor and supervisee shift with the introduction of AI?
- What is the impact of these shifts on their professional relationship?
- How does using ChatGPT affect a supervisee’s voice, authorship, and agency?
- In what ways are pedagogical values affected by these changes?
Through a dialogical discourse, the supervisor and supervisee found their roles shifting. The supervisee guided the supervisor on how to “teach” the AI, while the supervisor provided feedback on both ideas and edits. This iterative process deepened their interaction beyond the practical use of ChatGPT, leading to reflexive conversations about their roles, ethics, and pedagogy, ultimately strengthening their relationship into an I-Thou connection.
The Pros and Cons of AI in Education
Recent literature highlights generative AI as a potential “game changer.” According to a 2021 UNESCO report, AI can act as an assistant for teachers, handling tasks like answering questions and designing course materials. This frees up educators to focus on more complex professional development. For students, AI tools can provide personalized support, brainstorm ideas, and help with assignments, allowing for deeper critical engagement with new topics.
However, the challenges are significant. “AI hallucination,” in which ChatGPT generates convincing but false information, is a major concern, potentially leading to misinformation and unintentional plagiarism. Researchers have noted that ChatGPT can fabricate citations and cannot consistently provide credible sources. Some have compared interacting with ChatGPT to chatting with a wise intern who prioritizes pleasing the user over accuracy.
To counter these issues, educators are encouraged to teach students strategies for critical engagement with AI and to design assessments that require higher-order thinking. The consensus is that while AI is a useful tool, it must be used with caution, emphasizing the importance of human judgment and critical thinking.
AI and the Future of Academic Writing
In academia, scholars use AI for various research phases, from framing questions to editing texts. However, the quality of AI assistance depends heavily on the user’s ability to provide clear, iterative prompts. Research shows that a fluid, coordinated approach to human-AI collaboration yields better results than a simple, linear one.
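To make that distinction concrete, here is a minimal sketch of iterative prompting, assuming access to a model through the OpenAI Python SDK. The studies summarized here do not prescribe any particular tooling, and the model name and refinement prompts below are illustrative only. Instead of accepting the first answer to a single, linear request, each round feeds the model’s previous answer back into the conversation with a narrower instruction.

```python
# A minimal sketch of iterative (vs. one-shot) prompting.
# Assumptions: the OpenAI Python SDK is used purely for illustration;
# the model name and refinement prompts are invented examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are an academic editing assistant."},
    {"role": "user", "content": "Edit this paragraph for grammar: <draft paragraph>"},
]

# Narrower follow-up instructions, applied one round at a time.
refinements = [
    "Keep my sentence order; only fix grammatical errors.",
    "Preserve my original word choices wherever they are already correct.",
]

for follow_up in refinements:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    # Carry the model's answer forward so the next instruction builds on it.
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": follow_up})

final = client.chat.completions.create(model="gpt-4", messages=messages)
print(final.choices[0].message.content)
```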
This collaboration raises critical ethical questions about voice, authorship, and agency. Studies comparing human and AI-generated text found that human writing exhibits a richer vocabulary, more nuanced voice, and a stronger sense of authorship. This has led publishers like Springer Nature to advise against crediting ChatGPT as an author, as it cannot take responsibility for the content it generates. Instead, researchers are encouraged to acknowledge their use of AI transparently.
Redefining the Teacher-Student Relationship
The integration of AI has transformed the traditional teacher-student dynamic. Teachers are no longer the sole bearers of knowledge, which has led some educators to fear their roles are becoming obsolete. However, others argue that this shift allows teachers to focus on uniquely human strengths, such as providing emotional support, empathy, and encouragement—skills AI currently lacks.
Alex Guilherme interpreted this dynamic through Buber’s philosophy, suggesting that technology can degrade relationships from a mutual I-Thou connection to a detached I-It interaction. In an I-Thou relationship, individuals engage in a genuine dialogue, creating space for the other person to be themselves. This is where students develop their voice and agency. In contrast, an I-It relationship treats the other as an object to be used, which is how a human interacts with an AI. Guilherme feared this would impair the human bond between teacher and student.
Defining Voice, Authorship, and Agency in the AI Era
For this paper, the following definitions are key:
- Voice: The expression of an author's unique understanding, critical thinking, and intellectual journey.
- Authorship: The ownership, accountability, and integrity of ideas. It requires a distinct voice and the responsibility for the ideas presented. ChatGPT cannot hold this responsibility.
- Agency: The capacity to make mindful choices, think reflexively, and act independently. It involves the human ability to create meaning and influence one's conditions, which is essential for developing as a scholar.
From I-It to I-Thou: The Philosophy Behind the Study
This paper uses Martin Buber’s I-Thou philosophy to frame the collaboration between a human and ChatGPT. An I-Thou relationship is an immediate, reciprocal encounter in which each party enters the relationship as a whole being. It is about being present with another, without judgment, and uncovering meaning together. This differs from an I-It relationship, in which one treats the other as an object to be analyzed or used, maintaining a distance between subject and object.
Applying this to education requires a pedagogical shift from teaching content to students to interacting reflexively with them. This involves building trust and relational care. While some, like Guilherme, argue that AI pushes relationships towards a detached I-It dynamic, this paper argues the opposite. The authors contend that by handling mechanical tasks, ChatGPT can free up mental and emotional space for the supervisor and supervisee to engage more deeply, thereby strengthening their I-Thou connection and fostering the supervisee's voice, authorship, and agency.
A Dialogical Approach to Research
The methodology for this paper was a dialogical discourse, where the supervisor and supervisee interacted to explore and construct meaning together. Their conversations were iterative, with each discussion and writing session informing the next. This process was influenced by the supervisee identifying as a second-language English writer, which required extra care to ensure clarity and mutual understanding. They aimed to create a “dialogic space” where they could explore possibilities with openness, allowing for surprise and genuine growth. This aligned with Paulo Freire's concept of a teacher-student relationship where both are jointly responsible for a process in which all grow. Through this reflexive process, they continuously examined their actions, biases, and the impact they had on each other.
The Experiment: A Supervisor and Supervisee's Journey with ChatGPT
The Supervisee's Perspective
As a non-native English speaker, I proposed using ChatGPT-4 to my supervisor to help with editing my dissertation. My previous experiences had shown that extensive grammatical correction was labor-intensive for my supervisor, which took time away from discussing the core ideas of my research. The high cost of a professional editor made ChatGPT an attractive alternative.
We established a four-step process:
- I wrote the initial draft.
- I used ChatGPT-4 for the first round of editing with specific prompts.
- My supervisor reviewed the AI-edited text, providing critical feedback on ideas and voice.
- We reflected on the process to improve our collaboration.
I had to actively manage the AI. My initial prompts focused on grammar and clarity, but I quickly found that ChatGPT would reorder my sentences, disrupting my authorial voice. I adjusted my prompt: “please do not rearrange the order of my statement. Keep focus on clearing up grammarly [grammatical] issues... but keep my own tone.” This worked, but the AI would sometimes deviate, requiring me to remind it of the instructions. The AI also fell into repetitive linguistic patterns, overusing words like “showcase” and “multifaceted,” some of which were inappropriate in context or outdated. My supervisor’s feedback was crucial here, helping me protect my voice and critically examine the AI’s output. This ongoing dialogue helped us refine our strategy for using AI effectively.
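For readers curious how this prompting strategy might look outside the ChatGPT interface, the sketch below applies the same voice-preserving instruction programmatically. It assumes the OpenAI Python SDK; the model name, function, and prompt wording are illustrative stand-ins rather than the authors’ exact setup, and steps 3 and 4 of the process remain entirely human.

```python
# A minimal sketch of the voice-preserving editing pass (step 2 of the
# four-step process). Assumptions: the authors worked in the ChatGPT
# interface; the OpenAI Python SDK, model name, and prompt wording here
# are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EDITING_INSTRUCTIONS = (
    "Correct grammatical issues and improve clarity only. "
    "Do not rearrange the order of my statements, do not substitute "
    "words such as 'showcase' or 'multifaceted' for my own vocabulary, "
    "and keep my tone and authorial voice intact."
)

def edit_passage(draft: str) -> str:
    """Return a first-pass edit of one dissertation passage."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": EDITING_INSTRUCTIONS},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# The supervisor's review of ideas and voice (step 3) and the joint
# reflection on the process (step 4) happen outside the code.
print(edit_passage("A draft paragraph from the dissertation goes here."))
```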
The Supervisor's Perspective
I was initially suspicious of using ChatGPT but was convinced by the supervisee’s thoughtful reasoning. The process highlighted how much time is spent on mechanical editing versus engaging with a student’s ideas. I was struck by the supervisee's meta-analysis of her process; she assumed a “teacher” role, instructing both me and the AI. This shifted my role to that of a student learning about the technology and a supervisor focused on a higher level of feedback—ensuring her authorial voice remained intact. Our roles became fluid, fostering an I-Thou relationship built on trust and mutual learning. It forced me to reflect on how to nurture a student's voice, authorship, and agency, especially when an AI is involved.
Shifting Roles and Strengthening Bonds
The traditional supervisor-supervisee dynamic, often a dominant-subordinate relationship, was transformed. As a non-native English speaker, I had often found that the academic writing process undermined my confidence. Receiving documents with hundreds of grammatical comments was overwhelming and overshadowed my intellectual contributions. ChatGPT, as an objective and non-judgmental tool, helped alleviate this pressure.
Because my supervisor was unfamiliar with the technology, I became the “expert,” teaching her how to use it. This reversed the typical one-way flow of knowledge. Our roles evolved into a mutual learning experience, creating a more equal relationship where we were both learners and explorers. The supervisor's time was freed from line-editing to focus on the substance of the research, allowing for more meaningful intellectual exchange.
Finding Your Voice with an AI Co-pilot
Authority in academia is often tied to expertise and power dynamics. Even with good intentions, a supervisor's edits can subtly impose their own style on a student's writing. Introducing a third party—ChatGPT—helped to balance this dynamic. The AI provided non-judgmental assistance, which diminished the traditional authority of the teacher as the sole source of knowledge.
This process amplified my agency. I was no longer passively waiting for instructions but actively contributing to the editing and writing process, which helped me develop my own academic ethos. For the supervisor, this affirmed the pedagogical value of nurturing a student's unique voice. The experience highlighted a critical new task: supervisors must help students identify their own distinctive writing style to effectively evaluate and guide the use of AI-generated edits. Only by knowing one's own voice can one truly manage the tool, rather than be managed by it.
Rethinking Pedagogy in a World with AI
This collaborative exploration prompted both of us to reconsider our pedagogical philosophies. For the supervisee, the experience raised questions about how to create more space for her own students to tailor their learning and how to provide effective, individualized support. For the supervisor, it affirmed the value of reflexivity and relational care. The time previously spent on mechanical editing could now be dedicated to dialoguing with the student, questioning and reflecting to uncover deeper meaning.
The process reinforced the idea that one of the greatest joys of being an educator is learning from your students. As we shifted between the roles of teacher, learner, and guide, we enriched our understanding of what it means to engage in a truly educative experience, strengthening our I-Thou relation.
Conclusion: A New I and You in Academia
Introducing ChatGPT into the dissertation editing process transformed the dialogue between supervisor and supervisee. The AI served as a third party that redefined the traditional expert-novice relationship. Roles became fluid, strengthening the supervisee’s agency and transforming her role from one of reception to one of co-creation.
This transformation was driven by a shared commitment to honest dialogue and reflexivity. The supervisee felt safe to express her struggles as a second-language writer, and the supervisor listened with care and a willingness to learn. Their distinct perspectives aligned, shifting their relationship from an instrumental “I-to-You” to a connected “I-and-You.”
This paper provides a hopeful perspective on the use of AI in education. It suggests that when guided by human values of care, reflexivity, and dialogue, technology can foster deeper human connections and empower learners to find their voice.
About the Authors
Xiao Dong, Ph.D. in music education at Western University, focuses her research on the development of metacognition and subjectification within music educational contexts. At the heart of her teaching philosophy lies the belief that true music education transcends the mere acquisition of knowledge and skills and should foster a journey of self-discovery.
Betty Anne Younker (Adjunct Professor Emeritus) was Dean of the Don Wright Faculty of Music at the University of Western Ontario from 2011 to 2021. Her research has been published and presented widely, and she has served in leadership roles for numerous organizations, including The College Music Society and the London Arts Council.
References
- Alvesson, Mats, and Kaj Sköldberg. 2017. Reflexive methodology: New vistas for qualitative research. Sage Publications.
- Amirjalili, Forough, Masoud Neysani, and Ahmadreza Nikbakht. 2024. Exploring the boundaries of authorship: A comparative analysis of AI-generated text and human academic writing in English literature. Frontiers in Education 9 (March). https://doi.org/10.3389/feduc.2024.1347421
- Baidoo-Anu, David, and Leticia Owusu Ansah. 2023. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI 7 (1): 52–62. https://doi.org/10.61969/jai.1337500
- Bakhtin, Mikhail. 1981. Discourse in the novel. In The dialogic imagination, translated by Caryl Emerson and Michael Holquist, 259–422. University of Texas Press.
- Benedict, Cathy. 2021. Music and social justice. Oxford University Press.
- Bowman, Emma. 2022. A new AI chatbot might do your homework for you. But it’s still not an A+ student. NPR, December 19, 2022. https://www.npr.org/2022/12/19/1143912956/chatgpt-ai-chatbot-homework-academia
- Additional references are listed in the original article.