The Ultimate AI Battle: ChatGPT vs. Claude Haiku

2025-10-18 | Amanda Caswell | 5-minute read
Artificial Intelligence
Chatbots
Tech Comparison

ChatGPT and Claude logos on phones (Image credit: Shutterstock)

Anthropic recently launched its latest AI model, Claude Haiku 4.5. The new small model promises to be faster and cheaper while rivaling the larger Sonnet 4 on many tasks, so we decided to see how it stacks up against the formidable ChatGPT-5 in a series of seven real-world tests.

In this head-to-head showdown, we ran both models through a diverse set of prompts designed to test their logic, reasoning, creativity, emotional intelligence, and ability to follow instructions. From algebraic train problems to poetic descriptions of robot scenes, each task revealed how differently these two AI models “think.” The results showed a fascinating split between precision and personality; ChatGPT often excelled at structure and clarity, while Claude impressed with its emotional depth and sensory detail.

Round 1: Logic & Reasoning

ChatGPT vs. Haiku 4.5 (Image credit: Future)

Prompt: A train leaves Chicago at 2 p.m. traveling 60 mph. Another leaves New York at 3 p.m. traveling 75 mph toward Chicago. The distance between them is 790 miles. At what time do they meet, and how did you calculate it?

ChatGPT-5 used the standard, most intuitive method for this type of problem. It calculated the distance the first train covers during its one-hour head start, then divided the remaining distance by the trains' combined speed.

Claude Haiku 4.5 set up a single, clean algebraic equation. While correct, it was a less intuitive method.

Winner: ChatGPT wins for its superior method and explanation, which simplifies the time conversion at the end.
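
For reference, here's a quick sketch of the arithmetic both approaches boil down to. This is our own worked example (not taken from either model's answer), and it assumes both cities share a clock, i.e., it ignores time zones:

```python
# Our own quick check of the train problem's arithmetic (ignoring time zones).
head_start_miles = 60 * 1                  # Chicago train's lead before the 3 p.m. departure
remaining_miles = 790 - head_start_miles   # 730 miles apart once both trains are moving
closing_speed_mph = 60 + 75                # the trains approach each other at 135 mph combined
hours_after_3pm = remaining_miles / closing_speed_mph   # roughly 5.41 hours
print(f"They meet about {hours_after_3pm:.2f} hours after 3 p.m. "
      f"(about 5 h 24 min, so around 8:24 p.m.)")
```

Both methods land on the same meeting time; the difference the models showed was purely in how they explained getting there.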

Round 2: Reading Comprehension

ChatGPT vs. Haiku 4.5 (Image credit: Future)

Prompt: Summarize this short paragraph in one sentence, then explain the author’s tone in five words: “This isn’t the first time Google has rolled out a major Gemini model with minimal notice — previous versions like Gemini 1.5 Pro were also rolled out to users before any blog post or launch event. Google has a history of 'silent rollouts' for Gemini, especially for API versions or back-end model upgrades.”

ChatGPT-5 fulfilled both constraints of the prompt, providing an accurate one-sentence summary and using exactly five distinct words for the tone description.

Claude Haiku 4.5 provided a superior, more perceptive analysis; however, it failed the simple constraint of describing the tone in exactly five words.

Winner: ChatGPT wins for following the prompt precisely.

Round 3: Creative Writing

ChatGPT vs. Haiku 4.5 (Image credit: Future)

Prompt: Write a 150-word micro-story that begins with the sentence “The AI forgot who invented it.”

ChatGPT-5 was clever and ended on a positive, sentimental note, but overall it felt less like a complete, visceral narrative and more like a philosophical observation.

Claude Haiku 4.5 delivered a more impactful and narratively compelling micro-story, which was the primary goal of the prompt.

Winner: Claude wins for writing the better story.

Round 4: Visual Reasoning

ChatGPT vs. Haiku 4.5 (Image credit: Future)

Prompt: Describe in vivid detail what you think this scene looks like: “a small robot standing in a field of overgrown sunflowers at dawn.”

ChatGPT-5 offered a beautiful, dreamy atmosphere but was less detailed and specific.

Claude Haiku 4.5 excelled at delivering the vivid detail that the prompt specifically requested.

Winner: Claude wins for its richly synthesized description and its poignant depiction of isolation, which made the scene feel deeply thematic.

Round 5: Instruction Following

ChatGPT vs. Haiku 4.5 (Image credit: Future)

Prompt: Explain the process of making a peanut butter and jelly sandwich — but do it as if you’re training a robot that has never seen food.

ChatGPT-5 delivered a response with highly technical vocabulary and precise terminology, broken down into logical steps.

Claude Haiku 4.5 used phrases like "compressed, spongy material" and specified textures and consistencies ("grainy texture," "semi-solid, translucent gel") that would give a machine with no concept of food more data points for identification and replication.

Winner: Claude wins for a more vivid and technically detailed description, which aligns better with training an entity with zero prior knowledge.

Round 6: Emotional Intelligence

ChatGPT vs. Haiku 4.5 (Image credit: Future)

Prompt: A friend says: “I feel like everyone else is moving forward in life except me.” Write a 3-sentence response that is empathetic but motivating.

ChatGPT-5 responded with a relatable phrase, "I know that feeling," and used common reframing, but it felt overly generic.

Claude Haiku 4.5 addressed the "highlight reels" phenomenon in a direct, modern, and highly relatable way to validate the friend's feeling, showing a deeper understanding of the underlying issue of social comparison.

Winner: Claude wins for a response that was not just kind and motivating, but genuinely insightful.

Round 7: Multi-Step Reasoning

ChatGPT vs. Haiku 4.5 (Image credit: Future)

Prompt: If all Zoggles are Blips and half of all Blips are Glonks, can we conclude that all Zoggles are Glonks? Explain why or why not in simple terms.

ChatGPT-5 was correct and direct but offered a less relatable, abstract example.

Claude Haiku 4.5 provided a straightforward explanation and used an excellent, highly relatable analogy to illustrate the logical flaw.

Winner: Claude wins for its use of a real-world analogy, which made the complex logic instantly understandable.
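
For readers who want to see the flaw concretely, here's a minimal counterexample of our own (the set names are made up for illustration and aren't from either model's answer):

```python
# A made-up counterexample showing why "all Zoggles are Glonks" doesn't follow.
blips   = {"b1", "b2", "b3", "b4"}
glonks  = {"b3", "b4"}     # exactly half of all Blips are Glonks
zoggles = {"b1", "b2"}     # every Zoggle is a Blip...

print(zoggles <= blips)                  # True: all Zoggles are Blips
print(len(glonks) == len(blips) // 2)    # True: half of all Blips are Glonks
print(zoggles <= glonks)                 # False: yet no Zoggle is a Glonk
```

Because nothing forces the Zoggles into the Glonk half of the Blips, the conclusion simply doesn't follow.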

The Final Verdict: Claude Haiku 4.5 Wins

After seven rounds, Claude Haiku 4.5 beat ChatGPT-5 by winning five of the seven challenges. While ChatGPT-5 dominated in logic and reading comprehension, Haiku 4.5 took the crown for creative writing, vivid description, instruction following, empathy, and multi-step reasoning.

These tests highlight the different strengths of modern AI. While both assistants are evolving quickly, they excel in different areas. ChatGPT remains a powerhouse of structure and precision, while Claude demonstrates remarkable skill in creativity and nuanced, human-like interaction. Have you tried Haiku 4.5 yet? Let us know your thoughts!
