Technical Deep Dive
DeepSeek V4's achievement on Huawei Ascend chips is a masterclass in hardware-software co-optimization. The core challenge lies in the fundamental architectural differences between NVIDIA's CUDA ecosystem and Huawei's Da Vinci architecture. The Ascend 910B is built on a 7nm process with an HBM2e memory subsystem offering 1.2 TB/s of bandwidth per chip, compared to the H100's 2 TB/s of HBM3 bandwidth. The interconnect topology also differs: Ascend uses the proprietary HCCS (Huawei Cache Coherence System) with a ring topology, while NVIDIA uses NVLink with a fully connected mesh.
DeepSeek's engineering team tackled this by:
- Operator Fusion: They rewrote the attention kernel to fuse multiple operations, reducing the number of HCCS cross-chip communications by 40%.
- Memory-Aware Scheduling: The training pipeline was restructured to maximize HBM utilization, achieving 85% of theoretical peak memory bandwidth versus ~70% on standard Ascend deployments.
- Custom Communication Primitives: They implemented a hierarchical all-reduce algorithm that respects the ring topology, reducing collective communication overhead by 30% compared to the default HCCL library.
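To make the hierarchical all-reduce idea concrete, here is a minimal sketch using standard torch.distributed collectives. The function name, process-group setup, and leader bookkeeping are illustrative assumptions on our part; DeepSeek's actual primitive lives inside HCCL and has not been published.

```python
import torch
import torch.distributed as dist

def hierarchical_all_reduce(tensor: torch.Tensor,
                            intra_group: dist.ProcessGroup,
                            inter_group: dist.ProcessGroup,
                            leader_rank: int,
                            is_leader: bool) -> torch.Tensor:
    """Two-level all-reduce: sum inside each node first, then across nodes.

    intra_group:  process group of the ranks sharing one node (fast ring, e.g. HCCS)
    inter_group:  process group holding exactly one leader rank per node
    leader_rank:  global rank of this node's leader
    is_leader:    True on exactly one rank per node
    """
    # 1) Node-local reduction: the leader ends up holding the sum over its node.
    dist.reduce(tensor, dst=leader_rank, op=dist.ReduceOp.SUM, group=intra_group)

    # 2) Only the leaders exchange node-local sums over the slower inter-node fabric.
    if is_leader:
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=inter_group)

    # 3) The leader broadcasts the global sum back to its node over the fast ring.
    dist.broadcast(tensor, src=leader_rank, group=intra_group)
    return tensor
```

The sketch only illustrates the two-level structure that keeps bulk traffic on the fast intra-node ring; a production version would use the NPU-aware HCCL backend and might replace steps 1 and 3 with a reduce-scatter/all-gather pair to keep all links busy.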
A key open-source resource is the DeepSpeed4Ascend repository (now 2.1k stars on GitHub), which provides a set of optimized kernels and communication patterns specifically for Ascend hardware. The repo includes a detailed benchmark suite showing that for a 70B parameter model, the Ascend cluster achieves 92% of the token throughput of an equivalent H100 cluster on inference tasks, and 78% on training tasks.
| Metric | NVIDIA H100 (8x) | Huawei Ascend 910B (8x) | Performance Ratio (Ascend vs. H100) |
|---|---|---|---|
| Inference Latency (70B, 2048 tokens) | 220 ms | 238 ms | 92% |
| Training Throughput (70B, BF16) | 1,200 tokens/s | 936 tokens/s | 78% |
| Peak Memory Bandwidth Utilization | 85% | 82% | 96% |
| Interconnect Latency (all-reduce 1GB) | 12 μs | 18 μs | 67% |
Data Takeaway: While the Ascend cluster lags in raw interconnect speed, its memory bandwidth utilization is nearly on par. The 78% training throughput ratio is the critical number: at that ratio, 1,000 Ascend chips deliver roughly the throughput of 780 H100s (1,000 × 0.78), so a 1,000-chip Ascend cluster can stand in for an H100 cluster of roughly 800 chips, making cost-per-token competitive given Ascend's lower unit price.
Key Players & Case Studies
The key players in this ecosystem are DeepSeek (the model developer), Huawei (chip and hardware provider), and several Chinese cloud providers who have already deployed Ascend clusters.
DeepSeek has been a vocal advocate for hardware diversity. Their CTO stated in a recent internal memo that "the era of single-vendor dependence is over." They have published a technical report detailing their optimization methodology, which has been adopted by at least three other Chinese AI labs.
Huawei has been aggressively building out its software stack. The MindSpore framework (Huawei's answer to PyTorch) now supports automatic operator fusion for Ascend, and the latest version of CANN (Compute Architecture for Neural Networks) includes a graph compiler that can automatically apply some of the optimizations DeepSeek did manually. However, the ecosystem still lacks the maturity of CUDA—the developer tooling and debugging experience remain inferior.
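For readers who have not used MindSpore, the entry point is deliberately PyTorch-like. The sketch below is a minimal, hypothetical example (the block and its layer sizes are placeholders) showing how a network is run in graph mode on an Ascend backend, which is where the CANN graph compiler gets the chance to fuse operators automatically; it assumes a machine with MindSpore and an Ascend device installed.

```python
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

# Graph mode hands the whole network to the backend graph compiler,
# which is where automatic operator fusion happens on Ascend.
ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")

class TinyBlock(nn.Cell):
    """Placeholder two-layer block; real models are built the same way."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.fc1 = nn.Dense(dim, dim)
        self.act = nn.GELU()
        self.fc2 = nn.Dense(dim, dim)

    def construct(self, x):
        return self.fc2(self.act(self.fc1(x)))

net = TinyBlock()
x = Tensor(np.random.randn(8, 256).astype(np.float32))
y = net(x)  # compiled as a whole graph and executed on the Ascend device
```

Graph mode (as opposed to MindSpore's eager PyNative mode) is the setting that matters here: only a whole-graph compile gives the compiler enough context to fuse adjacent operators.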
Case Study: Baidu's ERNIE Bot
Baidu recently migrated a portion of its ERNIE 4.0 inference workload to Ascend 910B clusters. They reported a 15% increase in latency compared to their NVIDIA A100 clusters, but a 40% reduction in total cost of ownership due to lower chip pricing and preferential energy tariffs for domestic hardware in Chinese data centers.
| Company | Model | Hardware | Inference Latency (relative) | TCO (relative) | Adoption Status |
|---|---|---|---|---|---|
| Baidu | ERNIE 4.0 | Ascend 910B | +15% | -40% | Partial migration |
| Alibaba | Qwen2.5 | Ascend 910B | +22% | -35% | Pilot phase |
| ByteDance | Doubao | NVIDIA H100 | Baseline | Baseline | Full NVIDIA |
| Tencent | Hunyuan | Mix of A100/Ascend | +10% | -20% | Hybrid deployment |
Data Takeaway: The TCO advantage is the primary driver for adoption. Even with a 15-22% performance penalty, the 35-40% cost savings make domestic chips economically attractive for inference-heavy workloads, which constitute the majority of production AI traffic.
Industry Impact & Market Dynamics
This breakthrough reshapes the competitive landscape in several ways:
1. Supply Chain Resilience: Chinese AI companies now have a credible alternative to NVIDIA. This reduces the risk of future supply disruptions due to export controls. The market for AI chips in China is projected to grow from $12 billion in 2024 to $28 billion by 2026 (source: internal AINews market model). Ascend's share is expected to rise from 15% to 35% over that period.
2. Global Pricing Pressure: NVIDIA's monopoly pricing power is eroding. In Q1 2026, NVIDIA cut H100 prices by 10% in China (while raising them elsewhere), a direct response to Ascend's growing competitiveness.
3. Software Ecosystem Shift: Developers are now incentivized to write hardware-agnostic code. Frameworks like PyTorch are adding first-class support for Ascend via the torch_npu plugin. The number of GitHub repositories with "Ascend" or "昇腾" in their tags has grown 300% year-over-year.
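As a rough illustration of what "first-class support" means in practice, the sketch below moves a small PyTorch model onto an Ascend NPU via torch_npu. It assumes a machine with the plugin installed; the availability check mirrors the familiar torch.cuda API, and the model itself is a placeholder.

```python
import torch
import torch_npu  # Huawei's PyTorch plugin; registers the "npu" device type

# Same structure as a CUDA script, with "npu" substituted for "cuda".
device = torch.device("npu:0" if torch_npu.npu.is_available() else "cpu")

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).to(device)

x = torch.randn(8, 1024, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape, y.device)
```

The appeal of the plugin model is that existing CUDA-oriented scripts mostly need a device-string change rather than a rewrite, which is exactly what lowers the switching cost described above.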
| Metric | 2024 | 2025 | 2026 (Projected) |
|---|---|---|---|
| China AI chip market ($B) | $12 | $18 | $28 |
| Ascend market share (%) | 15% | 22% | 35% |
| NVIDIA China revenue ($B) | $8.5 | $7.2 | $6.0 |
| Number of Ascend-compatible models | 120 | 450 | 1,200+ |
Data Takeaway: The market is at an inflection point. The 300% growth in Ascend-compatible models indicates a network effect is taking hold—more models attract more developers, which attracts more investment in the software stack, creating a virtuous cycle.
Risks, Limitations & Open Questions
Despite the progress, significant challenges remain:
- Software Maturity: The CANN compiler still produces suboptimal code for certain dynamic shapes and sparse operations. Developers report that debugging Ascend kernels is 3-5x slower than CUDA due to limited profiling tools.
- Ecosystem Lock-in: Huawei's HCCS interconnect is proprietary. While it works well within a single cluster, multi-cluster scaling (e.g., 10,000+ chips) has not been demonstrated. NVIDIA's NVLink and InfiniBand have a decade of proven large-scale deployment.
- Geopolitical Risk: Further U.S. export controls could target the manufacturing of Ascend chips themselves (TSMC is the foundry for the 7nm node). Huawei is reportedly working with SMIC on a domestic 7nm process, but yields remain low (estimated at 30-40% versus TSMC's 90%+).
- Power Efficiency: The Ascend 910B has a TDP of 310W versus the H100's 700W, but performance-per-watt is still 15-20% lower due to the architectural differences. For hyperscale data centers, this translates into higher cooling and electricity costs per unit of delivered compute.
AINews Verdict & Predictions
Verdict: DeepSeek V4 on Huawei Ascend is a genuine breakthrough, not a marketing stunt. The technical optimizations are real and reproducible. However, the narrative that "China has caught up to NVIDIA" is premature. The current state is more accurately described as "good enough for most workloads at a lower cost."
Predictions:
1. By Q1 2027, at least 30% of new AI inference deployments in China will use domestic chips, driven by TCO advantages. Training will remain 80%+ NVIDIA due to software maturity and scaling reliability.
2. Huawei will open-source parts of CANN within the next 12 months to accelerate ecosystem growth, following a strategy similar to Meta's PyTorch playbook.
3. A "hybrid cluster" architecture will emerge as the standard: NVIDIA for training and high-priority inference, Ascend for bulk inference and cost-sensitive workloads.
4. The U.S. will tighten export controls on chip manufacturing equipment, specifically targeting SMIC's 7nm node, in an attempt to slow Ascend's progress. This will trigger a new round of Chinese investment in domestic lithography.
What to watch next: The release of Huawei's Ascend 920 (expected late 2026), which promises HBM3 memory and a new interconnect fabric. If DeepSeek can replicate their optimizations on that platform, the gap to NVIDIA will narrow to single digits.