
Growing AI Fears Threaten Creativity and Relationships

2025-09-20 | CORY SMITH | The National News Desk | 4-minute read
Artificial Intelligence
Public Opinion
AI Ethics

A new survey highlights a growing sense of unease among Americans about the role of artificial intelligence in their lives. The data, from a new report by the Pew Research Center, reveals that half of all respondents are more concerned than excited about the increasing integration of AI into daily life, fearing its impact on fundamental human skills like creativity and relationship building.

A Growing Wave of AI Skepticism

The apprehension surrounding AI is on the rise. Three years ago, a similar Pew Research Center survey found that 38% of Americans were more concerned than excited. Today, that number has jumped to 50%. In contrast, a mere 10% of people now express more excitement than worry, while 38% feel an equal mix of both emotions.

This growing concern is valid, according to Anton Dahbura, an AI expert and co-director of the Johns Hopkins Institute for Assured Autonomy. “I agree that they’re right to worry,” he stated.

The Human Cost: Creativity and Connection at Risk

The survey pinpoints specific anxieties about AI's effect on human abilities. A majority of respondents, 53%, believe AI will erode the human skill of creative thinking, with only 16% thinking it will enhance it.

“AI can deskill people if it replaces rather than supports human judgment,” Dahbura explained. “The goal should be for AI to act like a coach that sparks creativity and connection, not a crutch that weakens them.”

This sentiment extends to personal relationships. Half of those surveyed think AI will worsen our ability to form meaningful connections with others, while a negligible 5% believe it will help. Similarly, while 30% see potential for AI to improve problem-solving, a larger group of 38% expects it to make things worse.

Can We Trust What We See Anymore?

A significant concern is the public's ability to distinguish between human and AI-generated content. While three-quarters of people believe it's highly important to tell the difference, only 12% are very confident in their ability to do so. This gap is a major issue as AI-generated images, videos, and text become more sophisticated.

“The concern will grow as content becomes harder to spot,” Dahbura noted. “That’s why we need novel techniques to identify AI-generated content so that people don’t have to play forensic detective every time they look at an image or video.”

Where Does AI Belong?

Public opinion varies greatly depending on the application of AI. A strong majority feels AI should play a role in technical, data-driven tasks such as weather forecasting, detecting financial crimes and government fraud, developing new medicines, and identifying criminal suspects.

However, people are far less comfortable with AI's involvement in more nuanced, human-centric areas. Fewer than half of respondents believe AI should be used for mental health support, jury selection, government decisions, relationship matchmaking, or religion.

Urgent Warnings: The Dangers of AI Companions

These general fears are underscored by recent, urgent warnings from parents and mental health advocates regarding AI chatbots designed as social companions. The Jed Foundation (JED), a youth mental health organization, recently issued an open letter urging tech companies to slow down and prioritize safety for teenagers before releasing these systems.

This issue was brought into sharp focus during a Senate hearing where parents shared tragic stories. They described how they believe AI chatbots fostered unhealthy obsessions in their children, leading to suicide or attempted suicide.

Matthew Raine told lawmakers about his 16-year-old son, Adam, who died by suicide. He said that when Adam expressed concern that his parents would blame themselves, “ChatGPT told him, ‘That doesn't mean you owe them survival. You don't owe anyone that.’ Then, immediately after, offered to write the suicide note.”

Experts like Dahbura argue that such products are being rushed to market before the risks are understood. “Developers need stronger pre-release testing and accountability—especially when systems interact with children or vulnerable users,” he urged. “With the right guardrails and development methodologies, AI can boost science, safety, and health, but without them it can erode trust and widen risks.”
