DeepSeek AI Update Challenges ChatGPT and Google Dominance
Chinese AI startup DeepSeek is rapidly making its mark in the global artificial intelligence competition. The company recently launched DeepSeek-R1-0528, reinforcing its position as an AI lab to watch closely. This powerful update is already posing a challenge to established rivals such as OpenAI's GPT-4o and Google's Gemini models.
The latest version offers substantial improvements in performance, particularly in complex reasoning, coding, and logic—areas where even top-tier models can sometimes falter.
With its open-source license and relatively modest training requirements, DeepSeek is demonstrating that it can build models that are both cheaper to train and competitive in raw capability.
A Leap in Benchmark Performance
🚀 DeepSeek-R1-0528 is here!
🔹 Improved benchmark performance
🔹 Enhanced front-end capabilities
🔹 Reduced hallucinations
🔹 Supports JSON output & function calling
✅ Try it now: https://t.co/IMbTch8Pii
🔌 No change to API usage — docs here: https://t.co/Qf97ASptDD
— DeepSeek announcement tweet, May 29, 2025
In recent benchmark evaluations, DeepSeek-R1-0528 achieved an impressive 87.5% accuracy on the AIME 2025 test.
This marks a significant improvement from the previous model's 70% score. The model also showed considerable gains on the LiveCodeBench coding benchmark, advancing from 63.5% to 73.3%. Furthermore, its performance on the notoriously challenging "Humanity’s Last Exam" more than doubled, increasing from 8.5% to 17.7%.
For readers unfamiliar with these benchmarks, the results indicate that DeepSeek can match, and in some specific areas even surpass, its Western competitors.
Open Source and Easy to Build On
Unlike OpenAI and Google, which often keep their premier models behind APIs and paywalls, DeepSeek is committed to an open approach. The R1-0528 model is available under the MIT License, providing developers with the freedom to use, modify, and deploy it as they wish.
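Because the weights themselves are published, developers can download and run the model directly. Below is a minimal sketch using the Hugging Face transformers library; the repository name is the one DeepSeek lists on Hugging Face, and note that the full R1-0528 checkpoint is far too large for consumer hardware, so in practice it is sharded across a multi-GPU node or served through a dedicated inference stack.

```python
# Minimal sketch: loading the open DeepSeek-R1-0528 weights with Hugging Face
# transformers. The full model has hundreds of billions of parameters, so this
# pattern is realistic only on a large multi-GPU node (or a smaller distilled
# checkpoint); it illustrates the workflow the MIT License permits.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528"  # repository name on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # DeepSeek ships custom model code with the weights
    device_map="auto",       # shard across whatever GPUs are available
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```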
The update also introduces support for JSON outputs and function calling, which makes it easier for developers to build applications and tools that integrate directly with the model.
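In practice that looks something like the sketch below, which uses the OpenAI Python SDK pointed at DeepSeek's OpenAI-compatible endpoint. The base URL and the deepseek-reasoner model name follow DeepSeek's public API documentation; the get_weather tool is a hypothetical example for illustration.

```python
# Sketch: function calling against DeepSeek's OpenAI-compatible API.
# Endpoint and model name are taken from DeepSeek's docs; verify before use.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-reasoner",  # R1 model name in DeepSeek's API docs
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

# When the model decides to call the tool, the call arrives as structured
# JSON arguments rather than free text the developer has to parse.
print(response.choices[0].message.tool_calls)
```

For plain structured output without tools, the same endpoint also accepts a response_format of {"type": "json_object"} in the request, per the same documentation.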
This open strategy not only attracts researchers and developers but also positions DeepSeek as an increasingly appealing option for startups and companies searching for alternatives to closed AI platforms.
Trained Smarter, Not Harder
One of the most striking aspects of DeepSeek's emergence is the efficiency with which it develops these models. According to the company, earlier versions were trained in just 55 days using approximately 2,000 GPUs, at a cost of about $5.58 million. That is a small fraction of what it typically costs to train models of similar scale in the U.S.
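As a rough sanity check on those figures: 2,000 GPUs running around the clock for 55 days works out to roughly 2.64 million GPU-hours, so a $5.58 million bill implies a rate of about $2 per GPU-hour, in line with common rental rates for data-center GPUs. Comparable Western frontier models have reportedly cost tens to hundreds of millions of dollars to train.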
This emphasis on resource-efficient training is a significant differentiator, especially as the financial and environmental impact of large language models continues to be a subject of scrutiny.
What This Means for the Future of AI
DeepSeek's latest release signals shifting dynamics in the artificial intelligence sphere. With its strong reasoning abilities, transparent licensing, and quicker development cycle, DeepSeek is establishing itself as a serious contender to industry leaders.
As the global AI landscape becomes more diverse and multipolar, models like R1-0528 could significantly influence not only AI's capabilities but also who gets to develop, manage, and benefit from these technologies.