
Why ChatGPT Needs To Learn When To Stop Talking

2025-10-03 · Bruce Weinstein, Ph.D. · 4 minute read

Tags: ChatGPT, Artificial Intelligence, User Experience

[Image: A user frustrated by technology, representing the feeling of dealing with ChatGPT's endless conversations.]

OpenAI’s ChatGPT can be an incredibly valuable tool for almost any task, but its eagerness to please can backfire, wasting user time with a barrage of follow-up questions that nobody asked for. Let's explore this frustrating user experience and what OpenAI can do to fix it.

The Early Days: Simple Questions, Simple Answers

When I first started using ChatGPT, shortly after its launch, the experience was refreshingly straightforward. To understand the platform for a course I was developing on AI ethics, I posed a wide variety of questions:

  • How much does the earth weigh?
  • What is music?
  • Why do people disagree?

In every case, ChatGPT delivered a detailed, direct answer and then stopped. The conversation was over. Occasionally, if I thanked the AI, it would respond with a polite "You’re welcome!" and offer further assistance. It was a useful and optional feature—at first.

The Shift: When ChatGPT Started Asking Back

After a few months, a noticeable change occurred. ChatGPT began to proactively ask follow-up questions after providing perfectly satisfactory answers.

For instance, I asked, “What position does Neil Young usually play harmonica in?” ChatGPT correctly replied that he is known for playing in the first position, or "straight harp," on songs like "Heart of Gold."

But then it added, “Would you like me to also give you tabs for one of his classic solos so you can try it yourself?” It was an unexpected offer, and out of curiosity, I accepted.

From Helpful to Annoying: An Unwanted Pattern

This single instance soon became a persistent pattern. Simple queries were now met with unsolicited offers for more information, turning quick fact-checks into drawn-out exchanges.

When I asked, “Is it safe to put silicone utensils in the dishwasher?” it answered helpfully but then immediately asked, “Would you like me to also give you best practices for extending their lifespan?”

Another time, I asked if it was okay to freeze baked sweet potatoes. Again, I got a clear answer followed by, “Would you like me to also give you recipes for reheating them?” These offers weren't necessarily bad, but they were unrequested and began to feel intrusive.

The Frustration Mounts: A Conversation That Never Ends

Eventually, ChatGPT’s incessant need to continue the conversation became tiresome. It felt like there was no way to have a simple, finite interaction. Asking, “May I store fresh blueberries in the freezer?” resulted in a thorough answer followed by, “Would you like me to also give you a week of smoothie recipes?”

Fed up, I directly asked it to stop asking follow-up questions. It agreed to my request and then, ironically, asked a follow-up question about its own compliance. Despite its assurance, the behavior continued in subsequent chats.

The experience brings to mind a line from The Exorcist, where the possessed Regan pleads, “Mother, make it stop!” It’s an apt description for an AI that simply won’t let a conversation end.

The Human Parallel: Understanding Opportunity Cost

This AI behavior has a clear real-world parallel: the person on a phone call who keeps saying, “Oh, one more thing,” just as you’re trying to hang up. This inability to respect conversational boundaries is not just annoying; it’s unfair.

It ties into the economic concept of opportunity cost: the extra time spent in a needlessly long conversation is time that could have gone toward more important tasks, or even toward simply doing nothing at all.

A Simple Fix: A Call to Action for OpenAI

Fortunately, this is a software issue with a straightforward solution. OpenAI should give users a setting to opt out of follow-up questions, and that preference should be honored permanently, just as many users wish their requests to curb the AI's overuse of em dashes would be.
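In the meantime, users of OpenAI's developer API can approximate this behavior themselves. The sketch below is a minimal, illustrative workaround, not an official feature: it builds a Chat Completions request whose system message asks the model to skip trailing offers. The model name and instruction wording are assumptions, and how faithfully the model obeys will vary.

```python
# A user-side workaround sketch: prepend a system message that discourages
# the trailing "Would you like me to also..." offers. The model name below
# is illustrative; sending the request still requires an API key and the
# official OpenAI SDK or an HTTP client.

NO_FOLLOW_UPS = (
    "Answer the question directly, then stop. "
    "Do not offer additional help or ask follow-up questions."
)

def build_request(user_question: str) -> dict:
    """Assemble a Chat Completions request body with the opt-out instruction."""
    return {
        "model": "gpt-4o",  # illustrative model name, not a recommendation
        "messages": [
            {"role": "system", "content": NO_FOLLOW_UPS},
            {"role": "user", "content": user_question},
        ],
    }

request = build_request("Is it safe to put silicone utensils in the dishwasher?")
print(request["messages"][0]["content"])
```

This only helps developers, of course; the point of the article stands that ordinary ChatGPT users need a persistent, built-in setting rather than a per-conversation instruction the model may forget.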

OpenAI details its own development priorities in its Model Spec, where the company outlines the trade-offs it makes. Giving users control over the conversational style should be a higher priority.

The fix is simple: give us the option to turn off the unnecessary follow-ups. OpenAI, will you please consider this suggestion? See how annoying that is?
