🟢 Official · 2026-04-24

DeepSeek V4 preview goes live: 1M context, Pro/Flash variants, API available today

DeepSeek's April 24 official update confirms that DeepSeek-V4 Preview is now live and open-sourced, with a 1M-token context window, two model variants, and same-endpoint API migration by changing only the model ID.

Official release snapshot

DeepSeek has officially launched DeepSeek-V4 Preview and positioned it as the new flagship generation for long-context, reasoning, and agent-style workflows. The release centers on cost-effective 1M-token context support and immediate availability across both the hosted product and API surfaces.

Two model variants

  • DeepSeek-V4-Pro: 1.6T total parameters / 49B active parameters.
  • DeepSeek-V4-Flash: 284B total parameters / 13B active parameters.

According to the official release notes, V4-Pro is aimed at top-tier reasoning, world knowledge, and agentic coding tasks, while V4-Flash is the faster and more economical option for high-throughput production usage.

Architecture and capability highlights

DeepSeek describes V4 as a major long-context efficiency upgrade, built around:

  • 1M tokens as the new default context length
  • Sparse-attention efficiency improvements for lower compute and memory cost
  • Better agent integration, including explicit compatibility messaging for tools such as Claude Code, OpenClaw, and OpenCode

This is the clearest official signal so far that DeepSeek wants V4, not the older V3 line, to be the default reference point for coding and agent workflows going forward.
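To make the 1M-token figure concrete, here is a minimal sketch of a pre-flight budget check. The ~4 characters-per-token ratio is a common rough heuristic, not DeepSeek's tokenizer, and the reserve size is an arbitrary illustrative choice:

```python
# Rough check of whether a corpus fits in the 1M-token window.
# CHARS_PER_TOKEN is a generic heuristic, not DeepSeek's actual tokenizer.
CONTEXT_WINDOW = 1_000_000
CHARS_PER_TOKEN = 4  # heuristic approximation

def fits_in_context(texts: list[str], reserve_for_output: int = 8_000) -> bool:
    """Estimate token usage and leave headroom for the model's reply."""
    est_tokens = sum(len(t) for t in texts) // CHARS_PER_TOKEN
    return est_tokens + reserve_for_output <= CONTEXT_WINDOW

print(fits_in_context(["x" * 400_000]))  # True: ~100k tokens plus reserve
```

For real workloads you would replace the heuristic with an exact count from the provider's tokenizer.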

API migration guidance

For developers already using DeepSeek's API, the migration path is intentionally simple:

  • Keep the existing base_url
  • Switch the model name to deepseek-v4-pro or deepseek-v4-flash
  • Continue using OpenAI Chat Completions or Anthropic-compatible integrations

Both official V4 variants support 1M context and Thinking / Non-Thinking modes.
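As a minimal sketch of the migration described above, assuming an OpenAI-compatible Chat Completions request body (field names follow the OpenAI schema; the helper function is ours, and the only thing that changes between old and new is the model value):

```python
# Minimal sketch of the drop-in migration: keep the payload shape,
# swap only the model ID. Assumes the OpenAI Chat Completions schema.

def build_request(model: str, prompt: str) -> dict:
    """Build a Chat Completions request body; migration only swaps `model`."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Before: legacy alias          After: new V4 model ID
old = build_request("deepseek-chat", "Hello")
new = build_request("deepseek-v4-flash", "Hello")

# Everything except the model field is unchanged.
assert old["messages"] == new["messages"]
print(new["model"])  # deepseek-v4-flash
```

The same payload would be sent to the existing base_url; no other client changes are required.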

Deprecation note

DeepSeek also states that deepseek-chat and deepseek-reasoner will be fully retired after 15:59 UTC on July 24, 2026. During the transition window, deepseek-chat currently routes to V4-Flash in Non-Thinking mode, and deepseek-reasoner routes to V4-Flash in Thinking mode.
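The transition routing can be illustrated with a small lookup. The alias mapping and cutoff come from the note above; the resolver function itself is an illustrative helper, not an official API:

```python
from datetime import datetime, timezone

# Cutoff stated in the deprecation note: aliases retire after this instant.
CUTOFF = datetime(2026, 7, 24, 15, 59, tzinfo=timezone.utc)

# Transition-window routing stated in the release notes:
# legacy alias -> (V4 model it routes to, mode)
ALIAS_ROUTING = {
    "deepseek-chat": ("deepseek-v4-flash", "non-thinking"),
    "deepseek-reasoner": ("deepseek-v4-flash", "thinking"),
}

def resolve_model(name: str, now: datetime) -> str:
    """Resolve a legacy alias while the transition window is open (illustrative)."""
    if name in ALIAS_ROUTING:
        if now > CUTOFF:
            raise ValueError(f"{name} was retired at {CUTOFF.isoformat()}")
        return ALIAS_ROUTING[name][0]
    return name  # already a concrete model ID

print(resolve_model("deepseek-chat", datetime(2026, 5, 1, tzinfo=timezone.utc)))
# deepseek-v4-flash
```

A client that pins concrete model IDs (deepseek-v4-pro / deepseek-v4-flash) today skips the retirement cliff entirely.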

Why this matters

This update moves DeepSeek V4 from rumor-cycle coverage into an official product phase: the models are named, the API is live, migration guidance is public, and the ecosystem message is now explicitly V4-first.
