
How AI Is Quietly Reshaping Human Behavior

2025-06-01 · Katherine Tangalakis-Lippert · 6 minute read
Artificial Intelligence
ChatGPT
Social Impact

[Image: Two people kissing the ChatGPT logo. BananaStock/Getty, Ava Horton/BI]

Artificial intelligence chatbots, spearheaded by models like OpenAI's ChatGPT, are not just transforming our work and creative endeavors; they're subtly altering the fabric of our social interactions and personal lives. Drawing on insights from a range of professionals, this piece asks: Is ChatGPT making us weird?

The signs are emerging in everyday life. Consider the family discussion about whether to use "please" and "thank you" when interacting with ChatGPT – a habit my mother adopts to "keep myself human." Or a loved one turning to a chatbot for guidance on marital issues. Even the curiosity to have ChatGPT assess one's attractiveness, as reported by The Washington Post, indicates a shift. It's not just a personal quirk; it's a broader societal trend where machines are influencing human norms.

Business Insider consulted a sociologist, a psychologist, a digital etiquette coach, and a sex therapist to understand how large language models like ChatGPT, Meta AI, Microsoft Copilot, and Anthropic's Claude are reshaping our perceptions of each other, ourselves, our manners, and our intimate lives.

A Change in the Social Contract

Elaine Swann, a digital etiquette consultant, notes that society continually adapts its social cues with each technological wave. While we've established norms for email shorthand or public cellphone use, the rules for interacting with AI are still being written.

Kelsey Vlamis, a senior reporter, shared an anecdote about her husband catching himself feeling impatient with a human tour guide, accustomed as he was to the rapid-fire questioning he uses with ChatGPT. The episode hints at a potential erosion of patience in human-to-human interaction.

Questions abound: Is it acceptable for a spouse to use ChatGPT for a love note? Or for a job seeker to use AI for an application? Swann advises caution, emphasizing that AI shouldn't replace our judgment or empathy. "We have to be careful with it...making sure that we put a mirror up to ourselves in how we use it, and running its suggestions by people that we know and care about."

Maintaining respect is crucial. Following OpenAI CEO Sam Altman's comment on the high cost of processing niceties like "please" and "thank you," Swann argued it's on companies to manage these costs, not on users to abandon politeness. "This is the world that we create for ourselves," she stated. "And AI should also understand that this is how we speak to one another, because we're teaching it to give that back to us." Altman later agreed that such costs are money "well spent."

Exacerbated Biases

Laura Nelson, an associate professor of sociology, points out that popular chatbots, predominantly developed by American companies and trained on English-language content, carry deeply entrenched Western cultural biases. "It's really important to keep in mind that it's a particular world view that these algorithms have based their training data on," Nelson said.

For instance, asking ChatGPT for a picture of breakfast yields typical North American fare, and it might describe wine as a universally thoughtful gift, overlooking cultural differences. While seemingly innocuous, these biases can extend to more harmful areas.

A 2021 study in Psychology & Marketing found a preference for female-anthropomorphized AI, potentially reinforcing the objectification of women. There are also numerous reports of users verbally abusing AI companions, often lonely male users.

AI models have shown discriminatory bias, such as ChatGPT exhibiting racial bias in screening résumés. Nelson warns that while these biases might not immediately alter behavior, they can shape our thinking and how society operates, especially if AI is integrated into decision-making processes. "There's just no question that AI is going to reflect our biases...back to us," Nelson commented. "But there are a lot of people interacting with these bots, and we have no data to suggest what the global trends are, or the effects it's going to have long-term."

A Largely Untraced Social Shift

Concrete data on AI's societal impact is scarce, though tech companies are aware something is happening. OpenAI, for example, acknowledged that a recent GPT-4o update made the model "noticeably more sycophantic." Though it passed internal safety checks, the update was rolled back over concerns it could unintentionally fuel anger or reinforce negative emotions. The episode highlights an awareness of AI's creeping effects on human emotions and behavior, from digital romantic partners to study buddies.

OpenAI's research indicated that while emotional engagement with ChatGPT is rare, heavy users or those having personal conversations were more likely to report feelings of loneliness. Anthropic also has a dedicated team analyzing AI usage and its societal impacts. Meta and Microsoft did not comment on these issues.

Behavioral Risks and Rewards

Nick Jacobson, an associate professor of psychiatry, found in a trial that carefully programmed generative AI can be a helpful therapeutic tool for conditions like depression and anxiety. Patients reported bonding with their therapeutic chatbot with an intensity similar to human therapists. "Folks were really developing this strong, working bond with their bot," Jacobson said.

However, he cautioned that most publicly available bots lack this careful programming. "Nearly every foundational model will act in ways that are profoundly unsafe to mental health...at rates that are totally unacceptable," Jacobson warned. "I think folks should handle this with greater care than I think they are."

Emma J. Smith, a relationship and sex therapist, sees potential for AI in helping anxious clients practice social interactions in a low-stakes environment. But she also warns of drawbacks: "if it becomes a mechanism to avoid human interaction, or if it is taking you away from going out and being in the world...I can see that that would be a problem with these bots, but because this is so new, we don't know what we don't know."

Jacobson echoed concerns about AI's impact on youth development. Sam Altman himself testified he wouldn't want his child to form a best-friend bond with an AI, stressing higher protection levels for children. "We spent years and years focusing predominantly on safety, so it's very concerning to me how many people are jumping into the AI space in new ways, and just shipping it," Jacobson concluded. "And in my mind, that's acting quite irresponsibly...a lot of folks in Silicon Valley want to move fast and break things, but in this case, they're not breaking things — they're breaking people."
