China AI News in English: Best Sources to Follow in 2026

If you need China AI news in English, start with the primary sources and work outward. In Q2 2026, two releases define the China AI news cycle: Qwen3 (April 2026, Apache 2.0, MMLU 87.1 for the 235B flagship; the 30B-A3B MoE variant matches GPT-4o-class benchmarks at the inference cost of a 3B model) and DeepSeek-R1-0528 (May 2026, AIME 2024 pass@1 72.6%, MATH-500 97.3%, GPQA Diamond 81.0%). Both are verifiable in under 10 minutes via the QwenLM GitHub or DeepSeek HuggingFace model cards. That pace — two landmark releases in six weeks, both open-weight and Apache 2.0 — is not an anomaly. It's the new baseline. The challenge for builders is not finding China AI news. It's building an information stack in English that catches what matters before it affects your assumptions.

This article maps that stack: a routing table organized by what you actually want to know, a source-by-source breakdown of what each covers and what it can't, a verification workflow for checking claims before you act on them, and a realistic picture of what to watch for in the second half of 2026.

Source Routing Table: I Want to Follow China AI News About…

This is the core routing table. Use the "NOT good for" column as actively as you use the "Primary source" column — each source has a structural blind spot.

| I want to follow… | Primary source | Backup source | NOT good for |
| --- | --- | --- | --- |
| Model releases (open-weight) | QwenLM GitHub / DeepSeek HuggingFace | Papers with Code | Real-time API pricing; regional access gating |
| Model releases (API-only) | Official English blogs (platform.deepseek.com, qwenlm.github.io) | RadarAI China AI Updates | Open-weight license details; export control guidance |
| Benchmark comparisons | Chatbot Arena / model cards | Official technical reports (GitHub-linked) | Real-world production latency; cost at scale |
| API access & pricing changes | Official platform pages (platform.deepseek.com, dashscope.aliyun.com) | RadarAI API tracker | Export compliance guidance; legal interpretations |
| China AI startup funding | 36Kr Global / KR Asia | TechCrunch AI China coverage | Technical benchmark data; open-source licensing terms |
| China AI policy & regulation | CSET Georgetown / DigiChina (Stanford) | RadarAI policy tracker | Model-level technical specs; product change logs |
| Enterprise deployment signals | RadarAI enterprise tracker | Official company announcements (English press releases) | Open-source weights; academic paper details |
| Weekly digest (low noise) | RadarAI China AI Updates | Best Sites shortlist | Breaking news; real-time pricing changes |

A note on how to read this table: most builder frustration with China AI news in English comes from using a source outside its lane. 36Kr Global is excellent for funding context — it's one of the few English outlets that covers China's AI funding ecosystem in real time. But it rarely carries model-level benchmark numbers, and when it does, those numbers often come from press releases rather than model cards. Conversely, QwenLM GitHub has exact benchmark data and license terms within hours of a release, but won't tell you that a competing Chinese cloud provider has already repackaged Qwen3 into a product at half the API cost. You need both lanes covered.

Why China AI News Is Hard to Follow in English

The structural difficulty isn't language, though that's part of it. The deeper problem is that China AI news in English is scattered across five source types that rarely link to each other:

Primary technical sources (GitHub, HuggingFace) are where the actual model weights, benchmark data, and license terms live. Most of the information here is already in English because the research community's lingua franca is English. QwenLM's GitHub README, DeepSeek's HuggingFace model cards, and Kimi's English documentation are all directly readable. The problem is discoverability — you have to know to look for them, and changes often don't generate noise in general tech media for 24-48 hours.

Industry media (36Kr Global, KR Asia, TechNode) covers the business story: funding rounds, company announcements, strategic shifts. This coverage exists in English, but it's often several hours to a full news cycle behind Chinese-language versions of the same outlets. 36Kr publishes its top China AI funding and strategy stories in English at 36kr.com/global, but a story that breaks at 10am Beijing time may appear in English at 6pm Eastern — too late for builders who need to update their threat model by morning standup.

Policy analysis (CSET, DigiChina) is where you go when you need to understand what China's AI governance framework means for your ability to use Chinese models commercially. This material is slow by design — a good CSET brief takes weeks to write because it's triangulating across regulatory texts, enforcement actions, and strategic context. Don't expect this to be breaking news. Expect it to help you understand whether the trend you spotted in the primary sources has legs.

English-language general tech press (TechCrunch, The Verge, MIT Technology Review) covers China AI reactively — when a release is dramatic enough to warrant a standalone story. The Verge's coverage of DeepSeek-R1 in January 2026 was excellent, but that was a watershed moment. Routine model updates from Chinese labs often don't clear the editorial threshold for a major English tech outlet.

Signal aggregators (RadarAI, a small number of newsletters) do the routing work: scanning across all four other source types and surfacing what matters for builders. The value here is triage, not breaking news. RadarAI's China AI Updates page surfaces the week's most builder-relevant signals from across GitHub, HuggingFace, 36Kr, and official blogs — organized by category, with "NOT good for" context included.

Understanding these five types and their cadences explains why a single-source strategy never works. You'll always be either too slow (if you rely only on general tech press), too raw (if you rely only on GitHub), or too high-level (if you rely only on policy analysis).

Source Deep-Dive: What Each China AI News Source Actually Covers

QwenLM GitHub and HuggingFace

QwenLM GitHub is the fastest and most accurate source for Alibaba's Qwen model family. When Qwen3 launched in April 2026, the GitHub repo had:

  • The full model card with benchmark data (MMLU 87.1 for 235B, 79.4 for 30B-A3B with only 3B active params at inference)
  • The Apache 2.0 license file — confirming commercial use is permitted
  • Inference requirements and VRAM tables for different quantization levels
  • Links to the HuggingFace model pages where weights are downloadable

This is builder-grade information. A team evaluating whether to run Qwen3-30B-A3B locally on their inference cluster can answer every technical question from this single primary source. The one thing QwenLM GitHub doesn't tell you: what Alibaba Cloud is charging for API access to these models, and whether that pricing will undercut your current inference vendor.
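To see roughly where VRAM tables like the ones in the repo come from, a back-of-the-envelope estimate multiplies total parameter count by bytes per parameter at each quantization level. This is an illustrative sketch, not the official table — real requirements add KV-cache and activation memory, and the 10% overhead factor below is an assumption:

```python
# Rough weight-memory estimate: params * bytes-per-param, plus assumed overhead.
# Illustrative only -- the official tables in the QwenLM repo also account for
# KV-cache and activation memory, which this sketch ignores.

BYTES_PER_PARAM = {
    "fp16": 2.0,   # half-precision weights
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization (e.g. GPTQ, GGUF Q4)
}

def estimate_vram_gb(total_params_b: float, quant: str, overhead: float = 0.10) -> float:
    """Approximate weight memory in GB for a model with total_params_b billion params."""
    bytes_total = total_params_b * 1e9 * BYTES_PER_PARAM[quant]
    return bytes_total * (1 + overhead) / 1e9

# Note: an MoE model like Qwen3-30B-A3B still loads all 30B weights into memory;
# the ~3B active parameters per token reduce compute cost, not weight memory.
for q in ("fp16", "int8", "int4"):
    print(f"30B @ {q}: ~{estimate_vram_gb(30, q):.0f} GB")
```

The MoE note is the practical takeaway: "inference cost of a 3B model" refers to per-token compute, while memory footprint still scales with total parameters.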

The Qwen HuggingFace page mirrors model weights and often has community-submitted quantized versions (GGUF, GPTQ) that make the models more accessible. The community activity around a new Qwen release — number of downloads in the first 48 hours, quality of GGUF quantizations, early benchmark reproductions — is itself a signal worth reading.

DeepSeek HuggingFace and GitHub

DeepSeek's HuggingFace organization is the canonical source for DeepSeek model releases. DeepSeek-R1-0528, released in May 2026, was announced there with a full technical report: AIME 2024 pass@1 72.6% (up from 70.0% for the prior R1 release), MATH-500 97.3%, GPQA Diamond 81.0%. The model card included reasoning trace examples that showed the improved chain-of-thought consistency — context that a press release never provides.

One distinction builders often miss: DeepSeek publishes model weights on HuggingFace, but the platform.deepseek.com API may be running a different (often newer) version. When evaluating DeepSeek for production use, check both the HuggingFace model card and the platform API docs — they're not always synchronized.

36Kr Global and KR Asia

36Kr Global is the most comprehensive English-language source for China AI business and funding news. It's the outlet that will tell you that a Chinese AI startup just raised a $200M Series B, or that Baidu has announced an AI-powered product pivot, before those facts appear in TechCrunch's China coverage. The limitation is depth: 36Kr Global articles are typically 400-800 words, optimized for news brevity. You get the "what happened" but not the "what does this mean for benchmark assumptions."

KR Asia covers Southeast Asian tech but has substantial China AI coverage due to the regional deployment angle — Chinese AI tools that expand into Southeast Asia show up here first. For builders targeting Southeast Asian markets, KR Asia is worth adding to the stack.

CSET (Georgetown) and DigiChina (Stanford)

These two institutions produce the most rigorous English-language analysis of China AI policy. CSET focuses on the national security and export control dimensions: what chips China can access, how US export controls affect Chinese AI research capacity, and how Chinese AI governance differs from US/EU approaches. DigiChina focuses more on the domestic policy layer: China's AI governance framework, MIIT guidelines, and the regulatory context for deploying AI in China.

For most builders, these are monthly reads, not daily. The signal you're looking for: any change in the Apache 2.0 / commercial license landscape due to regulation, or any new export control that affects which Chinese models you can use in a US-regulated environment.

RadarAI

RadarAI functions as the signal layer that routes across all the above. The China AI Updates page is a weekly structured tracker that surfaces the most builder-relevant changes across model releases, API pricing, policy signals, and enterprise deployments. The format is explicitly designed to answer "what do I need to act on this week?" rather than "what happened?"

The China AI News hub provides the context layer: what's happening across the China AI landscape and why it matters, with source routing for each category. RadarAI is not a breaking news source and shouldn't be used as one — the weekly cadence is intentional. The value is in the triage and the "NOT good for" framing that makes acting on China AI signals less error-prone.

Chatbot Arena

Chatbot Arena (run by LMSYS) provides independent human preference rankings that don't rely on self-reported benchmark numbers. For China AI models specifically, it's the quickest way to answer "how does this model compare to GPT-4o or Claude 3.5 Sonnet on real-world tasks?" without relying on the lab's own press release. The limitation: Chatbot Arena reflects aggregate human preference across many task types, which doesn't tell you how a model performs on your specific use case (e.g., coding in a specific language, or financial document extraction).
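Arena-style leaderboards aggregate pairwise human votes into Elo-style ratings, so a rating gap translates directly into an expected preference rate. A minimal sketch of that conversion (the ratings below are made-up, not real leaderboard numbers):

```python
# Expected win probability between two Elo-style ratings, as used by
# Arena-style leaderboards. Ratings here are illustrative placeholders.
def expected_win_rate(rating_a: float, rating_b: float, scale: float = 400.0) -> float:
    """P(model A preferred over model B) under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

# A 30-point gap is a small edge, not a decisive one.
print(f"{expected_win_rate(1280, 1250):.2%}")
```

The design point: a model 30 Arena points ahead is preferred only slightly more than half the time, which is why Arena rank alone should not settle a model choice for your specific workload.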

How to Verify a China AI News Claim Before Acting On It

Before any China AI news item changes your roadmap or your model evaluation queue, run a three-step verification:

Step 1: Find the primary source. For a model release, the primary source is the GitHub repo (for open-weight models) or the official platform documentation (for API-only models). For a benchmark claim, the primary source is either the official model card on HuggingFace or an independent leaderboard like Chatbot Arena or Papers with Code. If a news article doesn't link to a primary source, treat the benchmark numbers as unverified until you find them yourself.

Example: You read that "a new Chinese model beats GPT-4 on coding benchmarks." Before updating your evaluation queue, go to HuggingFace or GitHub, find the model card, and check:

  1. What benchmark specifically? HumanEval? MBPP? LeetCode-hard? The answer matters enormously.
  2. Self-reported or independently reproduced? Model card numbers are self-reported. Papers with Code shows what independent labs have verified.
  3. What date? Some "new" models have model cards that reference benchmarks from evaluation runs done months before the release.
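The checks above can be encoded as a simple triage function. This is an illustrative sketch, not a real API — the claim fields and flag wording are assumptions, and the inputs are meant to be filled in by hand from the model card, not scraped from a news article:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkClaim:
    # Hypothetical fields for illustration -- populate by hand from the model card.
    benchmark: str                   # e.g. "HumanEval", "MBPP", "AIME 2024"
    score: float
    self_reported: bool              # True if only the lab's own model card states it
    independently_reproduced: bool   # e.g. confirmed on Papers with Code or Chatbot Arena
    eval_date: str                   # date of the evaluation run, from the model card
    primary_source_url: str          # "" if the article linked no primary source

def triage(claim: BenchmarkClaim) -> list:
    """Return the reasons a claim should stay in the 'unverified' bucket."""
    flags = []
    if not claim.primary_source_url:
        flags.append("no primary source linked -- find the model card yourself")
    if claim.self_reported and not claim.independently_reproduced:
        flags.append("self-reported only -- check for an independent reproduction")
    if not claim.benchmark:
        flags.append("benchmark unspecified -- 'beats GPT-4 on coding' is not checkable")
    return flags

claim = BenchmarkClaim(
    benchmark="HumanEval", score=92.0, self_reported=True,
    independently_reproduced=False, eval_date="2026-01",
    primary_source_url="https://huggingface.co/...",
)
print(triage(claim))  # flags the self-reported-only number
```

An empty flag list doesn't mean the claim is true, only that it has cleared the cheap checks and is worth the cost of a real evaluation run.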

Step 2: Check practical access. A model that's released but only accessible via a China-domestic API is not useful for builders outside China. Before adding a model to your evaluation queue, verify:

  • Is it available on HuggingFace for download? (Open-weight models)
  • Is the API accessible with a non-Chinese payment method?
  • Are there regional restrictions in the terms of service?

For Qwen3 and DeepSeek-R1-0528 in 2026, both models clear these checks for open-weight downloads. The API situation is more nuanced — DeepSeek's API is accessible internationally but has had capacity constraints during peak periods post-launch.

Step 3: Check the license. Apache 2.0 (used by Qwen3 and most DeepSeek weights) allows commercial use with attribution. Some other Chinese AI models have more restrictive licenses — specifically, restrictions on commercial use above certain user thresholds, or requirements for separate enterprise agreements. Check the LICENSE file in the GitHub repo, not the marketing page.

For Qwen3: the Apache 2.0 license is in the QwenLM/Qwen3 repo LICENSE file. Verified. For DeepSeek-R1-0528: the model card on HuggingFace specifies the license terms. Verified.
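A first-pass sanity check on a downloaded LICENSE file can be scripted. This is a sketch under loose assumptions: the patterns below only flag the license family, and a human read of the full text is still required, especially for custom licenses with commercial-use thresholds:

```python
# Match the header text of common open-weight licenses. Illustrative only --
# a pattern match identifies the license family; it does not replace reading
# the actual terms (e.g. user-threshold clauses in custom licenses).
LICENSE_PATTERNS = {
    "Apache-2.0": "apache license",
    "MIT": "mit license",
}

def detect_license(license_text: str) -> str:
    text = license_text.lower()
    for name, pattern in LICENSE_PATTERNS.items():
        if pattern in text:
            return name
    return "unknown -- read the full text and any use-restriction clauses"

sample = "Apache License\nVersion 2.0, January 2004\n..."
print(detect_license(sample))
```

Run this against the LICENSE file in the GitHub repo, per Step 3 — never against the marketing page's description of the license.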

China AI Developments Worth Watching in H2 2026

Beyond the two landmark releases that define Q2, several structural trends will shape China AI news in the second half of 2026:

Open-source as competitive strategy: Alibaba (Qwen) and DeepSeek have both made Apache 2.0 licensing central to their developer adoption strategy. This isn't altruistic — it's a deliberate choice to drive global developer lock-in to their model architectures before US labs can respond. The practical implication: expect continued high-quality open-weight releases from these labs through 2026. For builders, this means the quality ceiling for what you can self-host is rising every quarter.

Inference cost compression: SiliconFlow and competing Chinese inference providers have compressed API pricing for Qwen3 and DeepSeek models by 60-80% compared to January 2026 baselines. This changes the build-vs-buy calculation for any team running high-volume inference. The signal to watch: when does pricing compression on Chinese inference providers start undercutting US-based providers on non-China workloads?
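To make the build-vs-buy effect concrete, here is the arithmetic of 60-80% price compression on a sample workload. The baseline price and token volume are made-up illustration numbers, not real provider quotes:

```python
# Hypothetical numbers for illustration -- substitute your provider's real
# per-million-token price and your actual monthly volume.
jan_baseline_per_mtok = 2.00   # $ per million tokens, January 2026 baseline (assumed)
monthly_mtok = 500             # 500M tokens/month workload (assumed)

for compression in (0.60, 0.80):
    price = jan_baseline_per_mtok * (1 - compression)
    monthly_cost = price * monthly_mtok
    print(f"{compression:.0%} compression: ${price:.2f}/Mtok -> ${monthly_cost:,.0f}/month")
```

At these assumed numbers, the same workload drops from $1,000/month to $200-400/month, which is the scale of shift that reopens a settled build-vs-buy decision.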

Multimodal expansion: Kimi K2 and MiniMax's latest multimodal releases have expanded the competitive surface beyond text. For builders whose products touch document intelligence, image analysis, or audio processing, watch the Kimi English documentation and MiniMax official announcements for capability updates — these labs are less covered by the English press than Qwen or DeepSeek.

Enterprise deployment signals: The shift from research-grade API to production-grade enterprise contract is the signal that a Chinese AI capability has crossed the credibility threshold. KR Asia covers this well for Southeast Asian deployments; RadarAI's enterprise tracker covers it for international builders.

The 30-Minute Weekly China AI News Routine

For a builder who needs to stay informed on China AI developments without it dominating their information diet, a practical weekly routine:

Monday (10 minutes): Check the RadarAI China AI Updates weekly tracker for the past 7 days. This surfaces model releases, API changes, and enterprise signals that matter for builders. Note anything that needs a follow-up verification.

Tuesday (10 minutes, when needed): For any model or API change flagged on Monday, go directly to the primary source — QwenLM GitHub or DeepSeek HuggingFace — and verify benchmark claims and license terms. Add verified models to your evaluation queue.

Wednesday-Friday (5 minutes, as-needed): Check 36Kr Global for any major funding or strategic announcement that affects your competitive picture. This is a "when there's something big" check, not a daily obligation.

Monthly (30 minutes): Read one CSET or DigiChina brief to stay calibrated on the policy and export control context. The specific question to answer: has anything changed in the regulatory environment that affects which Chinese models you can use commercially or that affects Chinese AI research capacity?

Total: under 30 minutes per week for a complete picture. The discipline is in not reading everything — coverage overlap between sources is high, and the marginal value of a fourth daily source is near zero.

FAQ

Where can I find China AI news in English every day?

For daily China AI news in English, the most reliable combination is: GitHub and HuggingFace for model releases (check QwenLM, DeepSeek-ai, and Kimi organizations), 36Kr Global for business and funding news, and RadarAI's daily brief for a triage layer that surfaces what matters for builders. No single source covers all categories — the goal is a stack of 2-3 sources that together cover model releases, business signals, and a verification layer.

Are there English-language newsletters specifically for China AI?

Yes, but they're rare and of variable quality. RadarAI's weekly report includes a China AI section that covers model releases, API changes, and industry signals with explicit builder relevance framing. Sinocism (by Bill Bishop) covers China tech policy broadly, including AI. Stratechery occasionally covers China AI from a business strategy angle. Most general AI newsletters (The Sequence, Latent Space) treat China AI as one of many topics rather than giving it dedicated coverage.

Is Chinese-language news about China AI more accurate than English coverage?

Primary technical sources (GitHub, HuggingFace model cards) are already in English, so for model releases and benchmarks, there's no accuracy advantage to reading Chinese. For business news (funding rounds, company strategy), Chinese-language 36Kr and similar outlets are faster than 36Kr Global but require translation. For policy, official Chinese-language documents are primary — CSET and DigiChina translate and contextualize the most builder-relevant ones. The practical answer: for technical content, English primary sources are sufficient. For business and policy, English-language secondary sources have a lag but cover what matters for non-China builders.

How can I tell if a China AI benchmark claim is trustworthy?

Three questions to answer: (1) Is the benchmark self-reported in the model card, or independently verified on Chatbot Arena or Papers with Code? (2) What specific benchmark — MMLU covers general knowledge, AIME covers mathematical reasoning, GPQA covers expert-level science — and is that the capability you actually need? (3) Was the benchmark run using the model's best-case configuration (full precision, optimal prompt) or a configuration comparable to how you'd use it? Claims that fail any of these checks should be treated as unverified starting points, not decision inputs.

What's the difference between following China AI news and following China AI updates?

China AI news (this article, and the RadarAI China AI News hub) answers: what is happening across the China AI landscape and which sources cover which parts of it. China AI updates (RadarAI China AI Updates) answers: what specifically changed this week and what should I do about it. Use the news layer for source routing and orientation; use the updates layer for weekly action items.

Which Chinese AI labs should I track as a priority in 2026?

For builders, the four primary labs worth tracking directly: Alibaba (Qwen) for the most mature open-weight model family with Apache 2.0 licensing; DeepSeek for reasoning model capabilities and the best open-weight alternative to frontier closed models; Moonshot (Kimi) for multimodal and long-context capabilities; and Zhipu AI (GLM) for one of the most widely deployed Chinese AI APIs in enterprise contexts. Secondary watch list: MiniMax (audio and video multimodal), Baidu (ERNIE for China-domestic deployments), and ByteDance (Doubao for consumer-scale deployment data). For most non-China builders, the first four are sufficient.

Quotable Summary

China AI news in English is best tracked as a layered stack: GitHub and HuggingFace for model releases and benchmark verification (primary, fast), official English blogs for API and access changes (semi-primary, 2-24h lag), 36Kr Global and KR Asia for industry and funding context (business layer, daily), CSET and DigiChina for policy analysis (slow, high signal), and RadarAI as the weekly triage layer that routes across all four categories. In Q2 2026, the two releases that define the China AI news cycle are Qwen3 (April 2026, MMLU 87.1 for the 235B flagship, Apache 2.0, 30B-A3B MoE variant at 3B inference cost) and DeepSeek-R1-0528 (May 2026, AIME 2024 pass@1 72.6%, MATH-500 97.3%) — both verifiable in under 10 minutes via open-source model cards. The correct question is not "which one English China AI news site should I follow?" — it's "which sources cover which lanes, and what does each one miss?"
