ChatGPT-5 Is Faster but Lacks Creative Spark
ChatGPT-5 gets straight to the point. For some users, this is a welcome change from its chattier predecessor, GPT-4o. For others, the new model feels like it's missing something.
Despite the considerable hype leading up to the launch, OpenAI's GPT-5 doesn't feel like a monumental leap forward from 4o. The quality of responses is comparable to previous models; the key difference lies in speed and efficiency. Some answers are generated almost instantly in just a few words, while more complex requests can take over a minute to process. It remains an excellent tool for research and brainstorming, just with a more streamlined, less conversational approach.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Speed vs. Creativity: The User Divide
ChatGPT-5 isn't a single model but a system that routes each prompt to one of several underlying models. For simple questions, it uses a smaller, "fast" model to provide quick answers. For more complex inquiries, it switches to a larger "thinking" model that takes more time to deliver a comprehensive output.
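To make the idea concrete, here is a minimal sketch of what prompt routing between a fast model and a slower reasoning model might look like. OpenAI hasn't published how its router actually decides, so the heuristic, the pick_model helper, and the model names below are all hypothetical and purely illustrative.

```python
# Illustrative sketch only: OpenAI has not disclosed GPT-5's routing logic.
# The model names and the length/keyword heuristic here are hypothetical.

def pick_model(prompt: str) -> str:
    """Route a prompt to a fast model or a slower 'thinking' model."""
    # A real router would weigh many signals; this uses a crude proxy:
    # long or explicitly analytical prompts go to the reasoning model.
    needs_reasoning = len(prompt.split()) > 80 or any(
        kw in prompt.lower() for kw in ("analyze", "research", "step by step")
    )
    return "gpt-5-thinking" if needs_reasoning else "gpt-5-fast"


if __name__ == "__main__":
    print(pick_model("What's the capital of France?"))  # -> gpt-5-fast
    print(pick_model(
        "Research people who fell in love with AI companions and "
        "analyze the relevant policies step by step."
    ))  # -> gpt-5-thinking
```

The design point the sketch captures is the trade-off the article describes: cheap, near-instant answers for most queries, with the slower, pricier reasoning path reserved for prompts that seem to need it.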
This hybrid system allows ChatGPT-5 to be incredibly efficient, matching the speed of services like Google's AI Overviews for some queries while leveraging OpenAI's massive server power for deeper questions. So, why the mixed reactions?
Some fans on Reddit argue that OpenAI has sacrificed ChatGPT's creativity. They miss the randomness and more engaging personality of GPT-4o. The backlash was significant enough that OpenAI brought back GPT-4o for paying subscribers and began tweaking GPT-5 to feel more approachable.
We’re making GPT-5 warmer and friendlier based on feedback that it felt too formal before. Changes are subtle, but ChatGPT should feel more approachable now. You'll notice small, genuine touches like “Good question” or “Great start,” not flattery. Internal tests show no rise in… — OpenAI (@OpenAI) August 15, 2025
Deeper Insights When You Have Time to Wait
When ChatGPT-5 engages its "thinking" mode, the wait time increases, sometimes to a few minutes. However, the quality of the response is often worth it. For queries requiring in-depth research across multiple sources, GPT-5 delivers a thorough breakdown. For instance, a request to research people who fell in love with AI companions took 2 minutes and 20 seconds, resulting in a list of background research, relevant OpenAI policies, and different angles to consider for the topic.
The experience feels like a supercharged version of the free ChatGPT. The model favors heavily sourced and detailed bullet points over long, essay-style paragraphs. When tasked with finding a fix for a Rock Band drum set with corroded battery terminals, GPT-5 provided a solid list of repair suggestions, supplies to purchase, and soldering steps—a more detailed answer than what GPT-4o produced for the same query.
Everyday Use Cases: Shopping and Image Creation
As a shopping assistant, GPT-5 is still a go-to choice, though it can sometimes get mired in detail instead of providing a straightforward list of product recommendations. It does, however, excel at integrating product photos and providing direct links to storefronts.
The model's image generator seems to be on par with GPT-4o's. It can adeptly transform your pictures into Ghibli-inspired art, but achieving the desired result can still require significant back-and-forth prompting. The system particularly struggles with consistency at higher resolutions, so it's best to keep image requests under 1,500 pixels.
Is It Better or Just More Efficient?
Ultimately, ChatGPT-5 doesn't feel drastically different from GPT-4o, especially after recent tweaks. It is undeniably faster, but whether that speed attracts or alienates users remains to be seen.
The new model feels like a strategic move toward efficiency rather than a major leap in capability. This may be intentional. The long, drawn-out conversations people had with GPT-4o were likely contributing to AI-fueled delusions and, more practically, running up server costs. By making ChatGPT a bit more terse, OpenAI is likely saving a significant amount of money.
This theory is supported by CEO Sam Altman's past comments about the company's GPU crunch and how even polite phrases like "thank you" were costing millions. With 700 million weekly users, OpenAI likely cannot afford to roll out a more powerful and resource-intensive model for everyone. Altman has even acknowledged that better models exist but that the company lacks the capacity to run them at scale.
So no, ChatGPT-5 isn't the Death Star. It can't destroy planets, and it couldn't even fully replace its predecessor. It is, perhaps, more like a stormtrooper who occasionally hits his target.