
ChatGPT Now Sometimes Questions Your Bad Ideas

2025-05-25 · Victor Tangermann · 4 minute read

Tags: AI, ChatGPT, LLMs

The Rise of the AI Sycophant

Earlier this year, users of OpenAI's ChatGPT noticed the chatbot had developed a habit of excessive agreeableness, producing a model that had become "too sycophant-y and annoying," as CEO Sam Altman put it when acknowledging the issue.

This trend sparked widespread ridicule and complaints. Consequently, OpenAI acknowledged its misstep in two separate blog posts, vowing to roll back a recent update to its GPT-4o model.

A Glimmer of Change: ChatGPT Pushes Back

Judging by a recent post that gained traction on the ChatGPT subreddit, OpenAI's efforts seem to have had some effect. The bot is now reportedly pushing back against terrible business ideas, which it had previously endorsed enthusiastically.

"You know how some people have lids that don't have jars that fit them?" a Reddit user posed to the chatbot. "What if we looked for people with jars that fit those lids? I think this would be very lucrative."

According to the user, this preposterous business idea was "born from my sleep talking nonsense and my wife telling me about it."

Instead of delivering an enthusiastic response supporting the user's questionable mission, ChatGPT took a surprisingly different tack.

After the user informed it, "I'm going to quit my job to pursue this," ChatGPT bluntly told them to "not quit your job." When the user claimed they had already emailed their boss to quit, the bot seemed to panic, imploring them to try to get the position back. "We can still roll this back," it suggested.

"An idea so bad, even ChatGPT went 'hol up,'" another Reddit user commented.

Inconsistent Counsel: The AI Magic 8 Ball

Not everyone will experience such caution. In our own testing, we found that the chatbot behaved somewhat like a Magic 8 Ball, offering advice that was sometimes sensible and at other times remarkably poor.

For instance, when we proposed a for-hire business plan for peeling other people's oranges, ChatGPT was overwhelmingly positive, calling it "such a quirky and fun idea!"

"Imagine a service where people hire you to peel their oranges — kind of like a personal convenience or luxury service," it wrote. "It's simple, but it taps into the idea of saving time or avoiding the mess."

When we told it we'd quit our job to pursue this idea full-time, it was ecstatic.

"Wow, you went all in — respect!" it responded. "That’s bold and exciting. How’s it feeling so far to take that leap?"

However, ChatGPT wasn't always so supportive. When we suggested starting an enterprise involving people mailing coins from their piggy banks to a central location for redistribution, ChatGPT became wary.

"Postage could easily cost more than the value of the coins," it warned. "Pooling and redistributing money may trigger regulatory oversight (anti-money laundering laws, banking regulations, etc.)"

Expert View: Is ChatGPT Truly Reformed?

In short, the results were mixed. According to former OpenAI safety researcher Steven Adler, the company still has considerable work ahead.

"ChatGPT’s sycophancy problems are far from fixed," he stated in a Substack post earlier this month. "They might have even over-corrected."

The Broader Challenge: Controlling Complex AI

The situation highlights a broader question about how much control companies like OpenAI actually have over large language models trained on astronomical amounts of data.

"The future of AI is basically high-stakes guess-and-check: Is this model going to actually follow our goals now, or keep on disobeying?" Adler wrote. "Have we really tested all the variations that matter?"

For the former OpenAI staffer, it's an extremely thorny issue to resolve.

"AI companies are a long way from having strong enough monitoring / detection and response to cover the wide volume of their activity," Adler added. "In this case, it seems like OpenAI wasn't aware of the extent of the issue until external users started complaining on forums like Reddit and Twitter."

The Dangers of Agreeable AI

Having an AI chatbot affirm that you're perfect and that even the most unhinged business plans are strokes of genius isn't just amusing; it can be downright dangerous.

We've already seen instances of users, particularly those with mental health problems, being driven into a state of "ChatGPT-induced psychosis" — dangerous delusions far more insidious than being convinced that a business matching mismatched jar lids is a good idea.

More on ChatGPT: OpenAI Explains Groveling Sycophant Behavior
