Topics

China AI model release tracker (how to keep the watchlist current)

Evergreen topic pages updated with new evidence

Answer

A useful China AI model release tracker pairs a standing watchlist of labs with a rolling layer of weekly changes, so builders can move from watch to verify to test without chasing every headline.
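The two-layer structure and the watch → verify → test flow can be sketched as a small data model. Everything here is illustrative; the class names, fields, and lab list are assumptions for the sketch, not part of any real tracker:

```python
from dataclasses import dataclass, field

# Illustrative two-layer tracker: a standing watchlist of labs plus a
# rolling layer of dated change entries, each moving through
# watch -> verify -> test as evidence firms up.
STAGES = ("watch", "verify", "test")

@dataclass
class ChangeEntry:
    lab: str
    claim: str
    date: str
    stage: str = "watch"  # every new headline starts at "watch"

    def advance(self) -> str:
        """Move the entry to the next stage, stopping at 'test'."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        return self.stage

@dataclass
class Tracker:
    # Standing layer: labs to watch regardless of this week's news.
    watchlist: list = field(default_factory=lambda: ["DeepSeek", "Qwen", "Kimi", "GLM"])
    # Rolling layer: what changed recently.
    changes: list = field(default_factory=list)

    def log(self, lab: str, claim: str, date: str) -> ChangeEntry:
        entry = ChangeEntry(lab, claim, date)
        self.changes.append(entry)
        return entry

    def pending(self):
        """Entries not yet tested, i.e. still needing attention this week."""
        return [c for c in self.changes if c.stage != "test"]
```

A claim such as DeepSeek's limited multimodal rollout would be logged once, then advanced only when you have verified and tested it yourself, which keeps the weekly layer short.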

Key points

  • DeepSeek released V4 and rolled out multimodal image understanding in a limited release in early May 2026
  • Qwen, Kimi, and GLM updates are not documented in the evidence set
  • Cost per token and scenario-specific deployment are emerging as key evaluation metrics
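Since cost per token is cited as an emerging evaluation metric, a minimal comparison helper may be useful. The prices and traffic mix below are placeholders for illustration, not published rates for any model:

```python
def cost_per_million(prompt_price: float, completion_price: float,
                     prompt_tokens: int, completion_tokens: int) -> float:
    """Blended USD cost per 1M tokens for a given traffic mix.

    prompt_price and completion_price are USD per 1M tokens.
    """
    total_tokens = prompt_tokens + completion_tokens
    total_cost = (prompt_tokens * prompt_price +
                  completion_tokens * completion_price) / 1_000_000
    return total_cost / total_tokens * 1_000_000

# Placeholder prices: compare two hypothetical models on a 3:1
# prompt-to-completion mix (750k prompt tokens, 250k completion tokens).
model_a = cost_per_million(0.50, 1.50, 750_000, 250_000)
model_b = cost_per_million(0.30, 2.00, 750_000, 250_000)
```

Blending by your actual traffic mix matters: a model with a cheaper prompt rate can still lose on a completion-heavy workload, which is why scenario-specific deployment and cost per token travel together as metrics.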

What changed recently

  • DeepSeek launched visual reasoning capabilities—including OCR and HTML reconstruction—but with noted spatial reasoning gaps
  • A 'Visual Primitive Thinking' framework was introduced, but its associated technical paper was later withdrawn; technical details remain limited

Explanation

The evidence shows DeepSeek is actively iterating on multimodal functionality, though some releases lack full documentation or stability (e.g., withdrawn paper, limited rollout). No evidence confirms recent public releases from Qwen, Kimi, or GLM in this timeframe.

Builders should prioritize observable behavior—latency, input/output fidelity, API availability—over headline claims. When documentation is sparse or retracted, treat capabilities as provisional until independently verified.
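The "provisional until independently verified" rule can be encoded as a simple triage function. The field names and categories here are illustrative assumptions, not an established taxonomy:

```python
def capability_status(has_docs: bool, paper_withdrawn: bool,
                      independently_reproduced: bool) -> str:
    """Triage a claimed capability: treat it as provisional unless
    independently verified, and downgrade it when documentation is
    missing or was retracted."""
    if independently_reproduced:
        return "verified"
    if paper_withdrawn or not has_docs:
        return "unsubstantiated"
    return "provisional"

# e.g. DeepSeek's 'Visual Primitive Thinking' framework, whose
# technical paper was withdrawn shortly after release:
status = capability_status(has_docs=True, paper_withdrawn=True,
                           independently_reproduced=False)
```

Running your own reproduction, not the vendor's announcement, is what moves a claim from "provisional" to "verified" under this scheme.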

Tools / Examples

  • DeepSeek-V4’s focus on enterprise cost reduction signals a shift from benchmark chasing to operational utility
  • DeepSeek’s open-sourced vision-based reasoning framework targets spatial reference gaps—but its current scope is narrow and unvalidated outside lab conditions

Evidence timeline

May 9 AI Briefing · Issue #278

DeepSeek launches a record-breaking RMB 50 billion financing round, with founder Liang Wenfeng personally contributing RMB 20 billion—propelling its valuation to RMB 35 billion; meanwhile, Baidu's ERNIE Bot 5.1 tops the

May 7 AI Briefing · Issue #272

Generative AI is rapidly shifting from a 'model capability race' to a contest over infrastructure sovereignty and deep, scenario-specific deployment: cost per token has become the core metric in NVIDIA's redefined techni

May 4 AI Briefing · Issue #261

The release of DeepSeek-V4 marks AI's formal transition from consumer-facing traffic hype to a pragmatic phase focused on enterprise cost reduction, efficiency gains, and building a domestic computing ecosystem [14]; mea

May 3 AI Briefing · Issue #258

The AI industry is accelerating its shift from 'tool invocation' to 'embodied agents.' Codex's Computer Use capability and the open-source Clawd Cursor project mark a substantive breakthrough in AI's ability to operate g

May 2 AI Briefing · Issue #257

DeepSeek rolls out multimodal image understanding in limited release; Apple confirms using Claude Code for its AI customer support system; RecursiveMAS introduces vector-level agent collaboration—outperforming top baseli

May 2 AI Briefing · Issue #255

Multimodal reasoning and multi-agent collaboration are emerging as dual technical frontiers: DeepSeek open-sourced a vision-based reasoning framework to bridge spatial reference gaps; USTC and Huawei launched the 'Lingji

May 1 AI Briefing · Issue #254

DeepSeek unveiled its first visual reasoning capability, introducing the 'Visual Primitive Thinking' framework to bridge the multimodal referential gap—though its associated technical paper was swiftly withdrawn after re

May 1 AI Briefing · Issue #252

A reinforcement learning reward shift triggered OpenAI's GPT-5.5 'Goblin Rebellion' incident, exposing a new risk to large-model behavioral controllability; meanwhile, DeepSeek achieved cost-effective outperformance over

April 30 AI Briefing · Issue #251

GPT-5.5-Cyber launches for elite cybersecurity defenders; DeepSeek's image mode shows strong OCR and HTML reconstruction but flawed spatial reasoning; recursive multi-agent systems introduce latent-state direct transfer,

April 30 AI Briefing · Issue #250

Multimodal capabilities and agent architecture design are emerging as new battlegrounds in AI infrastructure: DeepSeek launches full multimodal image understanding with sub-second latency; SenseNova-U1 achieves open-sour

FAQ

Is there a real-time China AI model release tracker available?

No public, automated tracker is confirmed in the evidence. Current tracking relies on curated briefings and official release notes.

Are Qwen, Kimi, or GLM releasing new models in May 2026?

The evidence does not report any new releases or updates for Qwen, Kimi, or GLM during this period. Coverage is limited to DeepSeek.

Last updated: 2026-05-09