DeepSeek V4's 484-Day Evolution: mHC Architecture Debuts, Engram Reserved for V5

April 2026
DeepSeek has published an unprecedented technical report detailing the 484-day development of its V4 model, showcasing a new Mixture-of-Hierarchical-Components (mHC) architecture and explicitly reserving the advanced Engram technology for V5. This transparent roadmap reveals a deliberate, multi-generational strategy.

In a move that sets a new standard for transparency in AI, DeepSeek published a comprehensive technical report chronicling the complete 484-day development cycle of its V4 model. The report does not merely present final benchmarks; it lays bare the critical decision points, abandoned experiments, and strategic trade-offs that shaped the model. The centerpiece is the adoption of the Mixture-of-Hierarchical-Components (mHC) architecture, a design that optimizes parameter utilization and inference speed by dynamically composing specialized sub-models for different tasks. This represents a deliberate pivot away from the industry's prevailing 'scale is all you need' dogma. More strategically, the report explicitly earmarks the more experimental Engram technology—which hints at persistent memory and advanced reasoning mechanisms—for the next-generation V5. This 'leave room for the future' approach suggests DeepSeek has a clear, multi-generational product roadmap, signaling that the next frontier in AI competition will be about architectural intelligence and strategic pacing, not just raw compute. The report serves as a blueprint for a more thoughtful, less frantic approach to AI development, challenging the industry to value transparency and long-term vision.

Technical Deep Dive

DeepSeek V4's core innovation is the Mixture-of-Hierarchical-Components (mHC) architecture. Unlike traditional Mixture-of-Experts (MoE) models that activate a subset of identical expert networks, mHC introduces a hierarchy of specialized components. The architecture is organized into multiple levels: at the top, a routing network classifies the input into broad cognitive domains (e.g., reasoning, code generation, creative writing). Within each domain, a second-level router selects from a set of specialized 'component' modules, each optimized for a sub-task (e.g., mathematical deduction, syntax parsing, stylistic variation). This hierarchical gating mechanism reduces the computational overhead of routing decisions and allows for finer-grained specialization without exploding the total parameter count.
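The report does not include reference code, but the two-level gating it describes can be sketched in a few dozen lines. In the PyTorch sketch below, the module names, dimensions, the hard top-1 domain choice, and the top-2 component selection are illustrative assumptions, not details taken from the report:

```python
# Illustrative sketch of two-level (domain -> component) gating in the spirit
# of the mHC description above. All sizes, names, and selection policies are
# assumptions for readability, not DeepSeek's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MHCLayer(nn.Module):
    """Level 1 routes a token to a broad domain; level 2 gates over that
    domain's specialised components and mixes the top-k of them."""
    def __init__(self, d_model=1024, n_domains=4, n_components=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.domain_router = nn.Linear(d_model, n_domains)      # level 1
        self.component_routers = nn.ModuleList(                 # level 2, one gate per domain
            nn.Linear(d_model, n_components) for _ in range(n_domains)
        )
        self.components = nn.ModuleList(                        # specialised components, grouped by domain
            nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, 4 * d_model),
                              nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_components)
            )
            for _ in range(n_domains)
        )

    def forward(self, x):                                        # x: (n_tokens, d_model)
        domain = self.domain_router(x).argmax(dim=-1)            # hard top-1 domain per token (assumption)
        out = torch.zeros_like(x)
        for d, router in enumerate(self.component_routers):
            mask = domain == d
            if not mask.any():
                continue
            xd = x[mask]
            gates = F.softmax(router(xd), dim=-1)
            weights, idx = gates.topk(self.top_k, dim=-1)        # top-k components within the domain
            weights = weights / weights.sum(dim=-1, keepdim=True)
            yd = torch.zeros_like(xd)
            for k in range(self.top_k):
                for c in idx[:, k].unique().tolist():
                    sel = idx[:, k] == c
                    yd[sel] += weights[sel, k].unsqueeze(-1) * self.components[d][c](xd[sel])
            out[mask] = yd
        return out
```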

The key engineering challenge was designing the routing system to be both fast and accurate. The report details a novel 'progressive routing' algorithm that uses a lightweight, distilled BERT-style model for the first-level classification, followed by a more expensive but precise transformer-based router for the second level. This two-stage approach achieves a 40% reduction in routing latency compared to a single, monolithic router, while maintaining 99.2% routing accuracy on internal benchmarks.
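The latency argument is easiest to see as a candidate-counting exercise. The constants below are illustrative assumptions (4 domains, 8 components per domain, a 4x cost gap between the cheap and the precise router), not figures from the report, and real routing latency depends on more than how many candidates get scored:

```python
# Back-of-the-envelope cost model for flat vs. progressive routing.
# Candidate counts and per-candidate costs are made up for illustration.
N_DOMAINS = 4
COMPONENTS_PER_DOMAIN = 8

COST_CHEAP = 1.0    # distilled first-stage classifier, per candidate scored
COST_PRECISE = 4.0  # heavier transformer-based router, per candidate scored

# Flat router: the expensive router scores every component in the model.
flat_cost = COST_PRECISE * N_DOMAINS * COMPONENTS_PER_DOMAIN                       # 128

# Progressive router: cheap pass over domains, precise pass only inside
# the selected domain.
progressive_cost = COST_CHEAP * N_DOMAINS + COST_PRECISE * COMPONENTS_PER_DOMAIN   # 36

print(f"flat: {flat_cost}, progressive: {progressive_cost}, "
      f"saving: {1 - progressive_cost / flat_cost:.0%}")
```

Even with these made-up constants, the staged scheme gives the expensive router far fewer candidates to score, which is the basic mechanism behind the 40% latency reduction the report claims (the exact figure depends on batching, kernel launch overhead, and router sizes).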

Another critical innovation is the 'component sharing' mechanism. Unlike standard MoE where each expert is isolated, mHC allows components across different domains to share lower-level parameters. For example, the 'syntax parsing' component used in the 'code generation' domain can share its base layers with the 'grammar checking' component in the 'creative writing' domain. This cross-domain parameter sharing leads to a 25% reduction in total model parameters compared to a non-sharing MoE of equivalent capacity, while improving performance on cross-domain tasks by 12%.
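A minimal sketch of how such sharing could be wired up, assuming a shared lower trunk plus private per-component heads (the class names and shapes below are ours, not DeepSeek's):

```python
# Sketch of cross-domain component sharing: two components in different
# domains reuse the same base layers and differ only in their heads.
import torch.nn as nn

class SharedTrunk(nn.Module):
    """Lower layers reused by several components across domains."""
    def __init__(self, d_model=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU())

    def forward(self, x):
        return self.net(x)

class Component(nn.Module):
    """A component = (possibly shared) trunk + its own specialised head."""
    def __init__(self, trunk, d_model=1024):
        super().__init__()
        self.trunk = trunk                            # shared parameters
        self.head = nn.Linear(4 * d_model, d_model)   # private parameters

    def forward(self, x):
        return self.head(self.trunk(x))

structural_trunk = SharedTrunk()
# 'syntax parsing' (code-generation domain) and 'grammar checking'
# (creative-writing domain) point at the same trunk object, so their lower
# layers are literally the same tensors and are updated jointly.
syntax_parsing = Component(structural_trunk)
grammar_checking = Component(structural_trunk)
```

Because the trunk object is shared, its parameters are counted once and receive gradients from both domains, which is where the reported parameter savings and cross-domain gains would plausibly come from.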

| Architecture | Total Parameters | Active Parameters per Token | Routing Latency (ms) | Cross-Domain Task Accuracy |
|---|---|---|---|---|
| Standard MoE (32 experts) | 1.2T | 37.5B | 8.2 | 78.4% |
| DeepSeek V4 mHC | 900B | 28.1B | 4.9 | 87.6% |
| GPT-4 (estimated MoE) | ~1.8T | ~56B | ~12 | 85.1% |

Data Takeaway: DeepSeek V4's mHC architecture achieves superior cross-domain task accuracy (87.6%) with significantly fewer active parameters (28.1B) and lower routing latency (4.9ms) compared to both standard MoE and GPT-4. This validates the efficiency of hierarchical specialization over flat expert pools.

The report also details the training infrastructure. The model was trained on a cluster of 10,000 NVIDIA H100 GPUs over 484 days, using a novel 'gradient checkpointing with memory offloading' technique that reduced peak memory usage by 35%. The training dataset was curated to emphasize quality over quantity, totaling 15 trillion tokens with a focus on code, mathematics, and scientific papers. The team open-sourced the training framework, 'DeepSeek-Trainer', on GitHub, which has since garnered over 4,000 stars and is being adopted by several academic labs for large-scale experiments.
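The report names "gradient checkpointing with memory offloading" without publishing the implementation. A comparable pattern can be assembled from standard PyTorch utilities; the sketch below is an assumption about the general shape of such a scheme, not code from DeepSeek-Trainer:

```python
# Minimal sketch of gradient checkpointing combined with CPU offloading,
# built from standard PyTorch utilities; an assumption about the pattern,
# not DeepSeek-Trainer code.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedStack(nn.Module):
    def __init__(self, n_layers=8, d_model=1024, n_heads=8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )

    def forward(self, x):
        # Checkpointing: each layer's internal activations are recomputed
        # during backward instead of being kept resident in GPU memory.
        for layer in self.layers:
            x = checkpoint(layer, x, use_reentrant=False)
        return x

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CheckpointedStack().to(device)
x = torch.randn(2, 512, 1024, device=device, requires_grad=True)

# Offloading: tensors that autograd still has to save (e.g. the checkpointed
# layer inputs) are parked in pinned host memory and copied back on demand
# during the backward pass.
with torch.autograd.graph.save_on_cpu(pin_memory=True):
    loss = model(x).sum()
loss.backward()
```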

Key Players & Case Studies

DeepSeek's strategy stands in stark contrast to its competitors. While OpenAI and Google have increasingly moved toward opaque, API-only models with limited technical disclosure, DeepSeek has embraced radical transparency. This is not merely altruistic; it serves a dual purpose: attracting top-tier research talent who value openness, and building trust with the developer community. The report explicitly names several researchers who led key innovations, including Dr. Li Wei, the architect of the mHC routing algorithm, and Dr. Chen Yuxuan, who developed the component sharing mechanism.

| Company | Model | Architecture | Transparency Level | Open-Source Components |
|---|---|---|---|---|
| DeepSeek | V4 | mHC | Full technical report, training details, decision history | Training framework, routing code |
| OpenAI | GPT-4 | Proprietary MoE | Minimal; no architecture details | None |
| Google DeepMind | Gemini 1.5 | Mixture-of-Transformers | Partial; some architecture details | None |
| Meta | Llama 3 | Dense Transformer | Full model weights, limited training details | Full model weights, inference code |

Data Takeaway: DeepSeek's transparency is unmatched among frontier labs. While Meta open-sources weights, it does not provide the depth of architectural decision-making that DeepSeek has shared. This positions DeepSeek as the go-to reference for researchers studying efficient architectures.

The decision to reserve Engram technology for V5 is a masterstroke of product management. Engram, which the report describes as a 'persistent, learnable memory module that can store and retrieve reasoning traces across inference sessions,' is a high-risk, high-reward technology. By explicitly deferring it, DeepSeek avoids the trap of overloading V4 with experimental features that could destabilize its core performance. This mirrors Apple's strategy of reserving major hardware redesigns for 'S' models, ensuring each generation has a clear, achievable goal.
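The report gives only that one-sentence description of Engram, so any concrete rendering is speculation. Purely as an illustration of the "store and retrieve reasoning traces across inference sessions" idea, and ignoring the "learnable" aspect entirely, the persistence layer could be as simple as an embedding-keyed trace store; every name and the retrieval scheme below are hypothetical:

```python
# Speculative illustration of a persistent reasoning-trace store: traces are
# keyed by embeddings, appended to disk, and reloaded in later sessions.
# Nothing here comes from the DeepSeek report beyond its one-line description.
import json
import pathlib
import numpy as np

class TraceMemory:
    def __init__(self, path="engram_store.jsonl", dim=256):
        self.path = pathlib.Path(path)
        self.dim = dim
        self.keys, self.traces = [], []
        if self.path.exists():                      # survives across sessions
            for line in self.path.read_text().splitlines():
                rec = json.loads(line)
                self.keys.append(np.asarray(rec["key"], dtype=np.float32))
                self.traces.append(rec["trace"])

    def write(self, key_vec, trace_text):
        self.keys.append(np.asarray(key_vec, dtype=np.float32))
        self.traces.append(trace_text)
        with self.path.open("a") as f:
            f.write(json.dumps({"key": list(map(float, key_vec)),
                                "trace": trace_text}) + "\n")

    def read(self, query_vec, k=3):
        if not self.keys:
            return []
        keys = np.stack(self.keys)
        q = np.asarray(query_vec, dtype=np.float32)
        sims = keys @ q / (np.linalg.norm(keys, axis=1) * np.linalg.norm(q) + 1e-8)
        return [self.traces[i] for i in np.argsort(-sims)[:k]]
```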

Industry Impact & Market Dynamics

DeepSeek V4's release is reshaping the competitive landscape in several ways. First, it challenges the assumption that frontier models require ever-increasing compute budgets. By achieving GPT-4-class performance with 50% fewer active parameters, DeepSeek demonstrates that architectural innovation can decouple model quality from raw compute. This is particularly impactful for smaller AI labs and startups, which can now compete by focusing on smarter architectures rather than massive GPU clusters.

Second, the transparency of the report is forcing other labs to reconsider their disclosure policies. Several anonymous sources within major AI labs have indicated that internal pressure is mounting to release more detailed technical reports, as developers and enterprise customers increasingly demand to understand the models they are building on. This could lead to a 'transparency race' that benefits the entire ecosystem.

| Metric | Pre-V4 Industry Trend | Post-V4 Expected Shift |
|---|---|---|
| Average model parameter count | 1.5T+ | 800B-1.2T |
| Average training compute (FLOPs) | 1e26 | 5e25-8e25 |
| Number of open-source frontier models | 2-3 per year | 5-7 per year |
| Enterprise adoption of open-source models | 35% | 55% (projected) |

Data Takeaway: The industry is expected to pivot toward more parameter-efficient models, with average parameter counts dropping by 30-50% while maintaining or improving performance. This will accelerate enterprise adoption of open-source models, as the cost of deployment decreases.

Financially, DeepSeek's approach is a direct challenge to the 'scale at all costs' funding model. The company has raised a total of $1.2 billion across two rounds, a fraction of what OpenAI ($20B+), Anthropic ($7B+), or Google's AI investments have consumed. The V4 report implicitly argues that capital efficiency, not total capital raised, will determine long-term winners. This is already influencing venture capital behavior, with several firms reporting a shift toward funding architecture-first AI startups rather than those that simply promise to build larger models.

Risks, Limitations & Open Questions

Despite its achievements, DeepSeek V4 has limitations. The mHC architecture, while efficient, introduces additional complexity in the routing system. The report acknowledges that the routing network can occasionally misclassify ambiguous inputs, leading to a 2-3% performance degradation on tasks that require blending multiple domains (e.g., generating a scientific paper that also includes creative writing). This is an area of active research.

More concerning is the 'Engram gap.' By explicitly reserving Engram for V5, DeepSeek is betting that the technology will mature in time. If Engram proves more difficult to implement than anticipated, V5 could face significant delays, potentially allowing competitors to leapfrog with alternative memory-augmented architectures. Google's recent work on 'infini-attention' and OpenAI's rumored 'memory-optimized' GPT-5 could close this window.

There is also an ethical concern: DeepSeek's transparency, while laudable, could be weaponized. The detailed description of the mHC architecture and training methods could enable malicious actors to more easily replicate or adapt the model for harmful purposes, such as generating disinformation or developing autonomous cyber weapons. The report does not address this dual-use risk.

Finally, the report's focus on efficiency raises a question: Is there a limit to how much can be achieved through architecture alone? The scaling laws that have driven AI progress for years are empirical, not theoretical. If mHC represents a one-time efficiency gain rather than a new scaling law, DeepSeek may eventually hit the same diminishing returns as everyone else.

AINews Verdict & Predictions

DeepSeek V4 is a landmark release, not because it is the most powerful model ever built, but because it represents the most intelligent approach to building one. The 484-day development cycle, the transparent documentation, and the strategic deferral of Engram all point to a company that understands AI development as a marathon, not a sprint. This is the antidote to the industry's current hype-driven, release-every-quarter mentality.

Our predictions:
1. Within 12 months, at least three major AI labs will publish similar 'development journey' reports, forced by developer demand for transparency. DeepSeek has set a new norm.
2. DeepSeek V5, expected in late 2026, will be the first model to demonstrate persistent, cross-session memory at scale, fundamentally changing how we interact with AI (e.g., a model that remembers your entire coding project history across months of work).
3. The mHC architecture will be replicated and extended by at least five open-source projects within six months, with one likely achieving GPT-4-class performance on consumer-grade hardware (e.g., a single RTX 4090).
4. Venture capital funding for 'scale-first' AI startups will drop by 40% over the next two years, as investors realize that capital efficiency, not total capital, is the winning formula.

The next frontier is not bigger models; it is smarter architectures and clearer roadmaps. DeepSeek has drawn the map. The question is whether the rest of the industry will follow.

Further Reading

- DeepSeek V4 and Huawei Chips: China's Open-Source AI Breaks the Closed-Source Monopoly
- DeepSeek V4's Provocation: How Version Numbers Became AI's New Psychological Warfare
- Team Without PhDs Wins the ICLR Test of Time Award: A New Era for Meritocracy in AI Research
- DeepSeek-V4 on Huawei Cloud: The Earthquake in Chinese AI Infrastructure
