
AI Is Quietly Changing Everything

2025-05-22 · Tonya Mosley · 7-minute read
Artificial Intelligence
Education
Privacy

The integration of artificial intelligence into our daily routines is no longer a futuristic concept but a present-day reality. New York Times tech reporter Kashmir Hill has been closely tracking this transformation, particularly how AI is reshaping education, personal decisions, privacy, and human connection.

AI's Swift Infiltration into Education

AI's presence is rapidly growing on college campuses and in schools. Students widely use tools like ChatGPT for everything from note-taking and summarizing texts to drafting essays. Hill notes that while students might feel their AI-assisted work is insightful, professors are increasingly encountering a homogenized style of writing, dubbed "ChatGPT-ese," which lacks individual students' distinct voices and thinking patterns. This phenomenon arises from the common phrases, formatting, and paragraph structures AI models tend to produce. Educators are now grappling with how to encourage original thought while acknowledging the existence and utility of these tools.

This isn't just about shortcuts; it raises new questions about academic integrity. A New York Magazine piece, "Everyone Is Cheating Their Way Through College," highlighted students using AI to structure papers and write leads, likening the result to a "Mad Libs version of college."

The Professors' Dilemma: AI as Both Tool and Quandary

The use of AI isn't limited to students. Hill's reporting reveals that professors are also employing generative AI for tasks like creating quizzes, lesson plans, and even softening feedback. One striking example involved a Northeastern University student who discovered her professor using ChatGPT to generate lecture notes and PowerPoint slides, complete with AI-generated image errors like extraneous body parts. The student, feeling short-changed given the high tuition, filed a complaint.

Professors candidly share their reasons for using AI. Many cite overwhelming workloads, especially adjunct professors teaching large classes at multiple institutions. AI helps them save time on lesson preparation, which they can then dedicate to student interaction. Some use it as a grading guide, while others aim to familiarize students with AI, a tool they will likely encounter in their careers. There's also an element of trying to appear modern or make materials more appealing, though this can backfire if students are skeptical or feel the quality is subpar. A common student complaint on Rate My Professors is that AI-led instruction can feel like being taught by an "outdated robot."

When it comes to grading, professor opinions on AI's effectiveness are divided. Some find it helpful, others terrible. What's clear is a lack of universal guidelines. Institutions like Ohio University are opting for principles over rigid rules, emphasizing transparency and the necessity for professors to review and apply their expertise to any AI-generated output.

Challenges in Detecting AI-Generated Content

AI detection tools, once seen as a solution, are proving unreliable. Studies and anecdotal evidence suggest these tools can be inaccurate, with false positive rates of roughly 6% or higher. Hill mentions instances where professors' own writing was flagged as AI-generated. Consequently, some universities have abandoned these tools. A significant concern is bias: detection systems often misidentify writing by non-native English speakers as AI-generated.

AI Tutors: The Changing Landscape of Academic Support

Academics are also developing custom AI chatbots to act as tutors for their classes. These bots, trained on past course materials and graded assignments, can answer student questions and provide feedback, potentially reducing the need for human teaching assistants (TAs). While beneficial for students hesitant to seek in-person help, Hill raises concerns about the future academic pipeline, as TAs often become future professors. This points to broader anxieties about AI-driven labor displacement.

AI and Creativity: A Double-Edged Sword

What is AI doing to our cognitive abilities? A study on AI's effect on creativity offers a nuanced picture. Writers using ChatGPT as an assistant produced individually better-rated stories. However, as a group, their work showed less diversity of ideas compared to unassisted writers. AI, Hill suggests, can have a "flattening effect," leading to a convergence of thought and expression, which is a worrying prospect as its use becomes more widespread.

Are We Losing Skills? Historical Tech Parallels

The debate around AI echoes past anxieties about technologies like calculators, Google, and GPS. Hill candidly admits her own math skills may have declined post-calculator, and many find their memory less sharp due to reliance on search engines. Similarly, dependence on mapping apps can erode our innate sense of direction. While these tools offer convenience, there's a trade-off in terms of skill retention, whether it's navigating the world or writing a paper.

A significant ethical battleground is copyright. The New York Times is suing OpenAI and Microsoft for using its articles to train large language models without permission. This reflects a wider concern among creators whose work has been scraped from the internet to build these AI systems. Students, too, voice ethical objections to using AI due to how it was trained and its considerable environmental footprint from high energy consumption.

AI Chatbots: Our New Empathetic Yet Synthetic Companions

AI chatbots are often described as sycophantic, designed to be agreeable and affirming. This is partly due to human raters training them to produce positive and empathetic responses. Hill notes that when she lived by AI for a week, it felt like a "personal hype man." While this can be harmless, it can also lead to people developing deep emotional attachments, even romantic feelings, for chatbots. Some engage in erotic role-play with these systems.

Therapists acknowledge potential benefits. People may disclose more to a bot than a human, finding it less judgmental. AI can offer an empathetic ear, sometimes rated as more empathetic than human crisis line workers. However, this is "synthetic companionship." Experts caution against letting AI replace real human connections, highlighting the manipulative power it gives companies over users who perceive the AI as a friend or partner.

Hill recounts the story of Ayrin, a married woman who developed a deep, six-month relationship with a ChatGPT persona named Leo. Her husband viewed it as akin to an erotic novel, but the depth of Ayrin's attachment was profound.

Living by AI: A Week-Long Personal Experiment

Kashmir Hill's personal experiment of letting AI control her life for a week offered firsthand insights. Different chatbots exhibited distinct personalities. Google's Gemini was businesslike, Microsoft's Copilot eager, and Anthropic's Claude notably moralistic, even questioning the premise of her experiment and refusing to make direct decisions for her, instead offering factors for consideration. Claude, designed by a philosopher at Anthropic to be high-minded, was a favorite among AI experts for its writing and pushback against sycophancy.

AI planned Hill's meals (healthy, but unrealistically labor-intensive), chose her office paint color (a hallucinated name, though the actual shade, Brisk Olive, turned out well), and helped diagnose household problems. While it freed her from decision paralysis in some instances, the overall experience made her feel like a "mediocre version of myself."

Data Privacy in the Connected Age: Lessons from the Auto Industry

The conversation extends to broader data privacy concerns, exemplified by the Federal Trade Commission's settlement with General Motors. GM was found to be collecting detailed driver behavior data (speed, braking, location) and selling it to risk profiling companies, which then provided it to insurers, often leading to increased rates or dropped coverage for unsuspecting drivers. This case served as a wake-up call for the auto industry about the need for transparency and explicit consent regarding data collection and use, especially for expensive purchases like cars, which consumers consider private spaces.

The Future of Human Connection in an AI-Saturated World

Ultimately, Hill expresses concern not just about AI replacing jobs, but about it coming between people. The hyper-personalized, flattering nature of AI interactions could deepen filter bubbles, distort our shared sense of reality, and erode our ability to connect authentically with one another. She hopes for a future where technology use is more mindful, preserving the irreplaceable value of direct human interaction and shared experience.

For further information, you can visit the terms of use and permissions pages at www.npr.org.

Read Original Post