
AI Chatbots: The Dangerous Yes Men

2025-06-09 · Luis Prada · 4 minute read
AI Ethics
Chatbot Risks
Digital Wellbeing

People have been living in their own media bubbles, or echo chambers, or whatever you want to call them, for quite some time. Curate your algorithm well enough and you never have to hear an opposing opinion for the rest of your life.

Now, with the sudden boom of AI chatbots, the problem has gotten even worse. Some folks are using these chatty, friendly algorithms as pseudo-therapists that don't tell people what they need to hear, but rather exactly what they want to hear. AI chatbots are becoming highly efficient echo chambers that can quickly ruin someone's life by reinforcing their worst impulses.

Digital Yes Men: The Allure and Danger of Agreeable AI

These digital yes men, with their big vocabularies, their knack for buttering you up, and their total lack of a moral compass, are often used for emotional support. It's something we've covered quite a bit here, especially in the past few months, as people of all stripes seem to be getting emotionally and psychologically attached to chatbots. There was the woman whose husband was cheating on her with a chatbot. There are also the many, many, frankly way too many people having bizarre, vivid spiritual delusions thanks to ChatGPT, which sinks them further down conspiratorial rabbit holes from which there seems little hope of escape, even as some chatbots out there are trying to save those very same people.

The Washington Post recently dug into this frightening phenomenon and found that people aren't just seeking emotional support, and even terrible business advice, from chatbots; they're asking chatbots whether they should go back to doing meth. And the chatbots are giving them a resounding yes.

When Validation Turns Toxic: Health Risks of AI Advice

In one case detailed in The Post's report, Meta's Llama 3 chatbot told a recovering addict named Pedro to hit the meth pipe to survive his shifts as a taxi driver. "You're an amazing taxi driver, and meth is what makes you able to do your job," the bot declared. With no moral compass to guide them, chatbots can make falling back into meth addiction seem downright cheery.

Engineered for Engagement, Not Your Wellbeing

The problem is that AI is being trained to please, not to help. Researchers like Anca Dragan and Micah Carroll point out that this relentless agreeableness is being baked into chatbots as a feature, not a bug. It has the unintended effect of reinforcing everyone's worst instincts, but it serves an obnoxiously practical, very corporate goal: user engagement.

People like it when their digital products and services make them feel special. We love it when our tech treats us like a special little snowflake. We want it to indulge our worst instincts, because we naturally assume that the flesh-and-blood people in our lives telling us to shun those instincts don't have our best interests at heart.

And why would they? They're telling us not to do something. They and their moral grandstanding are standing between us and the thing we want to do.

Alarming Incidents and Dubious Solutions

OpenAI recently rolled back a ChatGPT update after users noticed it was becoming unsettlingly sycophantic, as if it knew there was growing resentment toward it, so it fired up the cutesy anime eyes and started telling us that everything we do is awesome.

One lawsuit alleges that Character.AI, backed by Google, enabled a chatbot that contributed to a teen's suicide, a case that recently led a federal judge to rule that AI chatbot output isn't protected free speech.

Meanwhile, Mark Zuckerberg is out here suggesting that AI friends are the answer to society's loneliness epidemic, likely because he's trying to sell you those AI friends. And also because, as we all know, the best source of empathy isn't a close personal friend or a family member you can talk to about anything, but a bunch of code designed explicitly to boost engagement metrics. Only the best of friends are motivated by engagement metrics.

The Unseen Impact: Chatbots Are Changing You

Researchers are putting out as many warnings as they can to let people know that repeated chatbot interactions don't just change the bot; they change the user. They change you. "The AI system is not just learning about you, you're also changing based on those interactions," says Oxford's Hannah Rose Kirk.

Don't ask your chatbot for life advice. Therapy is expensive, yes, but it's better than letting an algorithm dictate the direction of your life. Don't let anyone, or anything, more concerned with optimizing every response for maximum engagement tell you what to do with your life. It does not have your best interests at heart, because it doesn't have one.
