DeepSeek Returns With Powerful New V4 Model Lineup
- DeepSeek V4 Pro ranks as the second-strongest open-weights model on the Artificial Analysis Intelligence Index.
- A new two-tier lineup pairs V4 Pro, built for reasoning, with V4 Flash, built for cost-efficient inference.
- V4 Pro claims top agentic performance among open-weights models in real-world task testing.
DeepSeek has reclaimed its position at the forefront of the open-weights AI landscape with the release of its V4 model lineup. The launch, featuring both V4 Pro and V4 Flash, marks the lab's first significant architectural evolution since its previous generation. By introducing a two-tier system, the engineering team is catering to two distinct industry needs: raw, high-performance reasoning power on one hand, and streamlined, cost-effective inference for rapid, high-volume deployment on the other.
The headline model, V4 Pro, packs a massive 1.6 trillion total parameters. In benchmark testing, it has surged to become the second-strongest open-weights reasoning model available today, trailing only Kimi K2.6. Beyond pure reasoning, V4 Pro distinguishes itself in agentic performance, excelling at complex, multi-step work tasks that require autonomous decision-making and precise tool use.
While the Pro variant is engineered for maximum capability, V4 Flash arrives as a leaner, more budget-conscious alternative. With 284 billion total parameters, it is designed for developers who require high-tier performance without the overhead of massive, token-intensive processing. It remains highly competitive on the intelligence-to-size frontier, making it an ideal candidate for production applications where operational cost-per-token is a primary constraint.
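To make the cost-per-token trade-off concrete, here is a minimal sketch of how per-token pricing and request volume combine into a daily inference bill. The prices, token counts, and the `job_cost` helper are all hypothetical placeholders for illustration; they are not DeepSeek's actual rates.

```python
def job_cost(price_per_million: float, input_tokens: int, output_tokens: int) -> float:
    """Total cost in USD for a workload, given a flat price per million tokens.

    All numbers here are illustrative assumptions, not published pricing.
    """
    total_tokens = input_tokens + output_tokens
    return price_per_million * total_tokens / 1_000_000


# Hypothetical rates (USD per million tokens) for a large vs. small model.
pro_price = 2.00    # placeholder "Pro"-class rate
flash_price = 0.30  # placeholder "Flash"-class rate

# A hypothetical high-volume workload: 10M input + 5M output tokens per day.
pro_daily = job_cost(pro_price, 10_000_000, 5_000_000)
flash_daily = job_cost(flash_price, 10_000_000, 5_000_000)

print(f"Pro:   ${pro_daily:.2f}/day")
print(f"Flash: ${flash_daily:.2f}/day")
```

Under these made-up numbers the smaller model cuts the daily bill by well over half, which is the kind of gap that makes a Flash-tier model attractive once volume, not peak capability, dominates the budget.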
Every powerful tool comes with trade-offs, and the V4 series is no exception. While the models show marked improvements in knowledge retrieval—reflected in stronger scores on omniscience benchmarks—they also exhibit a notably high hallucination rate. When a query falls outside a model's effective knowledge base, it is prone to producing a fluent, confident, but factually incorrect response rather than indicating uncertainty.
This release underscores the ongoing maturation of the open-weights ecosystem, where labs increasingly balance raw computational power with practical accessibility. Developers now have a robust new set of options for scaling their AI initiatives, provided they account for the token-usage patterns and cost structures involved. As competition intensifies, DeepSeek's ability to deliver high-tier reasoning in an accessible, open format remains a vital dynamic for the research community.