Beyond 140 Trillion Tokens: Why China's AI Must Shift from Scale to Value Creation

The Chinese AI landscape is undergoing a profound strategic realignment. The collective push to amass training data has culminated in a symbolic 140-trillion-token threshold, a testament to immense computational investment. However, this milestone has simultaneously exposed a critical vulnerability: scale alone does not guarantee utility, market fit, or economic viability. The industry's focus is now pivoting decisively from parameter counts and token volumes to the architecture of intelligence itself—specifically, towards multimodal systems, autonomous agent frameworks, and world models capable of complex reasoning and action. This shift is driven by a growing recognition that the most significant bottlenecks are no longer in compute or data, but in product design, application-layer creativity, and the discovery of durable business models beyond simple API calls. Success in this new phase will be determined by the ability to embed AI deeply into high-value verticals like advanced manufacturing, biotechnology, and enterprise software, transforming raw computational power into measurable productivity gains and novel services. The race to build the biggest model is over; the race to build the most useful one has just begun.

Technical Deep Dive

The 140-trillion-token milestone represents a quantitative ceiling for purely linguistic scaling. Commentary from industry figures such as DeepSeek founder Liang Wenfeng and scholars such as Tsinghua's Tang Jie suggests that returns on dense, monolingual text data begin to diminish sharply beyond this scale. The frontier has moved to architectural efficiency and integration.

The next-generation stack is defined by three layers: Multimodal Foundation Models, Agentic Middleware, and World Models. Teams such as Alibaba's Qwen group and 01.AI are leading in multimodal integration, moving beyond simple image captioning to true interleaved understanding of text, code, diagrams, and video within a single, cohesive reasoning process. The technical challenge is moving from a pipeline of separate encoders to a unified, next-token-prediction paradigm across all modalities, as seen in models like Qwen2-VL.
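To make the "unified next-token prediction" idea concrete, here is a toy sketch of how text tokens and quantized image-patch tokens can share one id space so a single decoder treats them as one sequence. The vocabulary sizes and helper functions are illustrative assumptions, not Qwen2-VL's actual implementation.

```python
# Toy unified token space: text and image tokens share one vocabulary,
# so one decoder can do next-token prediction over interleaved input.
# Vocabulary sizes below are hypothetical placeholders.

TEXT_VOCAB = 50_000   # assumed text vocabulary size
IMAGE_VOCAB = 8_192   # assumed visual codebook size (e.g. VQ tokens)

def to_unified_id(modality: str, local_id: int) -> int:
    """Map a modality-local token id into the shared id space."""
    if modality == "text":
        assert 0 <= local_id < TEXT_VOCAB
        return local_id
    if modality == "image":
        assert 0 <= local_id < IMAGE_VOCAB
        return TEXT_VOCAB + local_id  # image ids live after text ids
    raise ValueError(f"unknown modality: {modality}")

def interleave(*segments):
    """Flatten [(modality, [ids...]), ...] into one decoder-ready sequence."""
    return [to_unified_id(m, i) for m, ids in segments for i in ids]

sequence = interleave(
    ("text",  [15, 42, 7]),    # e.g. "describe this diagram:"
    ("image", [3, 1001, 57]),  # quantized image-patch tokens
    ("text",  [99]),           # model continues with text
)
print(sequence)  # [15, 42, 7, 50003, 51001, 50057, 99]
```

The point of the offset scheme is that the decoder never needs to know which modality a token came from; modality becomes a property of the vocabulary region, not of the architecture.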

Agent frameworks represent the operational layer. Open-source projects like DB-GPT and ChatDev are critical here. DB-GPT (GitHub: `csunny/DB-GPT`, ~12k stars) is an experimental framework for creating domain-specific agents that can plan, use tools, and interact with databases autonomously. Its recent progress includes integrating with local LLMs for private deployment, a key demand for enterprise adoption. These frameworks move AI from a conversationalist to an executor.
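The plan-act-observe loop that frameworks like DB-GPT implement can be sketched minimally as follows. This is a hedged illustration, not DB-GPT's actual API: the `fake_llm` stub and the `query_db` tool are hypothetical stand-ins for a real model and a real database connector.

```python
# Minimal agent loop sketch (illustrative; not DB-GPT's actual API):
# the model proposes a tool call, the framework executes it, and the
# observation is fed back until the model emits a final answer.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    # A stand-in "database" tool; real frameworks wrap SQL engines.
    "query_db": lambda arg: f"3 rows matched '{arg}'",
}

def fake_llm(history: list[str]) -> str:
    """Stub policy standing in for an LLM: call a tool once, then answer."""
    if not any(h.startswith("observation:") for h in history):
        return "tool:query_db:overdue_invoices"
    return "final:Found 3 overdue invoices."

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"task:{task}"]
    for _ in range(max_steps):
        action = fake_llm(history)
        if action.startswith("final:"):
            return action[len("final:"):]
        _, name, arg = action.split(":", 2)
        history.append(f"observation:{TOOLS[name](arg)}")
    return "gave up"

print(run_agent("list overdue invoices"))  # Found 3 overdue invoices.
```

The structural insight is that the loop, not the model, owns execution: the LLM only ever emits text, and the framework decides what that text is allowed to do, which is what makes private, auditable enterprise deployment feasible.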

The most speculative but consequential area is World Models—AI systems that build internal simulations of physical or digital environments to reason about cause and effect. While global leaders like Google's DeepMind pursue this, Chinese labs like Shanghai AI Laboratory are investing in embodied AI and simulation platforms to ground LLMs in realistic dynamics.

| Technical Paradigm Shift | Old Focus (Scale Era) | New Focus (Value Era) |
|---|---|---|
| Primary Metric | Parameters, Training Tokens | Task Completion Rate, ROI, User Retention |
| Model Architecture | Dense, Monolingual Decoders | Sparse Mixture-of-Experts, Unified Multimodal |
| System Design | Single, monolithic LLM | Composable Agents with specialized tools |
| Training Data | Web-scale text scraping | High-quality, curated, multi-domain (scientific, technical) |
| Inference Cost | High, uniform | Optimized, dynamic (via MoE, quantization) |

Data Takeaway: The table illustrates a comprehensive paradigm shift across every layer of the AI stack. Value creation is being engineered in through architectural choices (MoE for cost), system design (agents for capability), and data strategy (curation for quality), moving decisively away from the one-dimensional scaling of the past.

Key Players & Case Studies

The competitive landscape is stratifying into distinct camps based on their adaptation to the value-creation imperative.

The Cloud Integrators (Alibaba Cloud, Tencent Cloud, Baidu AI Cloud): Their strategy is to leverage AI as a catalyst for cloud consumption. Alibaba's Qwen series, particularly Qwen2.5, is notable for its strong coding and multilingual capabilities, offered aggressively through its cloud platform. The bet is that compelling AI services will lock enterprises into their broader cloud ecosystem. Success is measured not by model downloads but by cloud revenue growth and developer engagement on their platforms.

The Vertical Specialists (iFlytek, SenseTime, Horizon Robotics): These players are betting that deep domain expertise will trump general-purpose prowess. iFlytek has doubled down on education and healthcare, embedding its Spark models into classroom tools and medical transcription systems. Their value proposition is regulatory compliance, domain-specific fine-tuning, and integration with existing hardware and workflows. SenseTime, despite challenges, continues to push AI integration into smart city management and industrial inspection.

The Open-Source Challengers (01.AI, DeepSeek, Zhipu AI): This group is using open source as a wedge for adoption and innovation. 01.AI's Yi series, under the leadership of Kai-Fu Lee, has gained international recognition for its performance-per-parameter efficiency. Their strategy is to build a global developer community, fostering an ecosystem of applications built on their models, from which they can monetize through enterprise support and premium versions. DeepSeek's commitment to fully open-sourcing its models, including the recent DeepSeek-V2 with its Multi-head Latent Attention (MLA) architecture, is a radical bet on ecosystem-driven value creation.

| Company / Model | Core Value Strategy | Key Differentiator | Risk |
|---|---|---|---|
| Alibaba / Qwen | Cloud Ecosystem Driver | Strong multimodal & coding, tight cloud integration | Becoming a cost-center feature rather than a profit center |
| 01.AI / Yi | Open-Source Ecosystem | International appeal, performance-per-parameter efficiency | Monetizing a freely available model |
| iFlytek / Spark | Vertical Domain Lock-in | Deep integration in education, healthcare, government | Narrow market dependence, policy vulnerability |
| DeepSeek | Radical Open-Source & Research | Architectural innovation (e.g., DeepSeek-V2), pure model focus | Lack of clear commercial path, reliant on funding |

Data Takeaway: No single strategy dominates. The table reveals a fragmented battlefield where success hinges on executing a chosen path—cloud integration, vertical depth, or open-source community—with extreme focus, as each path carries distinct commercialization risks.

Industry Impact & Market Dynamics

The shift from scale to value is triggering a massive reallocation of capital and talent. Venture funding is flowing away from generic foundation model startups and towards AI-native applications and agent infrastructure. The total addressable market (TAM) is being redefined from "AI software" to "AI-driven productivity gains" in specific sectors.

In enterprise software, companies like Kingsoft and Yonyou are racing to inject AI agents into their ERP and office suites, promising to automate complex workflows like financial reporting and supply chain analysis. The competition is no longer about whose API is cheaper, but whose AI can understand a business's unique processes and data.

The consumer space is seeing a similar transformation. ByteDance's Doubao and other chatbot apps are under pressure to move beyond novelty entertainment. The integration of AI into short-video creation tools, e-commerce customer service, and personalized content recommendation represents the path to sustainable engagement and revenue.

A critical market dynamic is the emergence of the Inference Economy. As models are deployed at scale, the cost and speed of inference become primary competitive factors. This favors companies that have invested in inference optimization, like Baidu with its PaddlePaddle ecosystem and customized AI chips, and those developing efficient architectures like Mixture-of-Experts (MoE).
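The economics of MoE can be seen in a few lines: per token, a router activates only the top-k of N expert networks, so inference compute scales with k while total parameter count scales with N. The sketch below is purely illustrative (scalar functions stand in for expert FFNs) and does not reflect any particular vendor's implementation.

```python
# Sketch of top-k Mixture-of-Experts routing (illustrative only):
# per token, only k of N expert FFNs run, so inference FLOPs scale
# with k rather than with total parameter count.

def top_k(scores: list[float], k: int) -> list[int]:
    """Indices of the k highest router scores."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

N_EXPERTS, K = 8, 2
# Each "expert" here is a scalar function standing in for an FFN.
experts = [lambda x, m=m: x * (m + 1) for m in range(N_EXPERTS)]

def moe_layer(x: float, router_scores: list[float]) -> float:
    chosen = top_k(router_scores, K)
    # Toy weighting: normalize the selected scores and mix expert outputs.
    total = sum(router_scores[i] for i in chosen)
    return sum(router_scores[i] / total * experts[i](x) for i in chosen)

scores = [0.1, 0.9, 0.05, 0.3, 0.2, 0.1, 0.05, 0.1]  # router output
print(top_k(scores, K))        # experts 1 and 3 are activated
print(moe_layer(1.0, scores))  # only 2 of 8 experts are computed
```

In this toy, 6 of 8 experts contribute zero FLOPs for the token, which is the mechanism behind the "optimized, dynamic" inference-cost row in the paradigm-shift table above.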

| Sector | Pre-Value Era AI Spend | Post-Value Era AI Spend Focus | Expected Growth Driver |
|---|---|---|---|
| Generic Cloud API | 70% of enterprise budget | 30% | Cost optimization, replaced by vertical solutions |
| Vertical AI Solutions | 20% | 50% | Measurable ROI in productivity (e.g., 20% faster design cycles) |
| AI Agent Platforms | 5% | 15% | Automation of complex multi-step tasks |
| Training & Research | 5% | 5% | Focus shifts to data curation & new architectures |

Data Takeaway: The projected budget reallocation shows a dramatic hollowing out of spending on generic APIs, with capital flooding into sector-specific solutions and automation platforms. This will force generalist AI providers to either develop vertical expertise or become low-margin infrastructure utilities.

Risks, Limitations & Open Questions

The transition is fraught with challenges. First, the Talent Mismatch: China's AI talent pool is heavily weighted towards algorithm research and engineering, not product management, UX design, or domain expertise in fields like biology or material science. Bridging this gap is essential for building valuable applications.

Second, Sustainable Monetization remains an open question. The API-as-a-service model faces intense price competition and commoditization. Subscription models for consumer AI have yet to prove themselves at scale in China. The most promising path—embedding AI into high-margin enterprise software or taking a revenue share on efficiency gains—is complex and slow to implement.

Third, Geopolitical and Compute Constraints persist. Restrictions on advanced chip imports create a long-term ceiling on the scale and efficiency of training runs, making architectural ingenuity not just a competitive advantage but a survival necessity. This could accelerate innovation in efficient models but also risk creating a technological lag in the most compute-intensive research areas like world models.

Finally, there is the risk of Premature Specialization. Focusing too early on narrow verticals could cause Chinese AI to miss the next fundamental breakthrough—a paradigm shift as significant as the transformer itself. Balancing applied value creation with continued investment in ambitious, long-term research is a delicate act.

AINews Verdict & Predictions

The 140-trillion-token mark is not an achievement to celebrate, but a warning sign to heed. The age of competing on scale is conclusively over. Our verdict is that the Chinese AI industry has the technical prowess to make the transition but lacks the commercial and product design maturity to guarantee success.

We predict the following developments over the next 18-24 months:

1. Consolidation and Shakeout: At least two major independent model companies will be acquired or pivot drastically, as funding dries up for those without a clear path to value. The winners will be those attached to robust cloud ecosystems (Alibaba, Tencent) or those that have carved out unassailable vertical niches (iFlytek in certain sectors).

2. The Rise of the "AI Solution Integrator": A new class of company will emerge, not building foundation models, but specializing in composing open-source models, agent frameworks, and proprietary tools to solve complex enterprise problems. They will be the bridge between raw AI capability and business value.

3. Hardware-Software Co-Design Becomes Critical: Success will increasingly depend on tight integration with domestic AI silicon, like those from Biren Technology and Cambricon. We will see models explicitly architected for the strengths and limitations of Chinese chips, creating a distinct technical stack.

4. A Breakthrough in Agent Commerce: The first widely adopted, revenue-generating AI agent will emerge not in a chatbot, but in a domain like cross-border e-commerce logistics or software testing, where it can autonomously navigate complex digital systems to achieve a clear financial outcome.

The imperative is clear. The companies that thrive will be those that stop asking "How can we make our model bigger?" and start asking "What costly, complex process can our AI make obsolete?" The measure of success will shift from leaderboard scores to balance sheets and productivity metrics.
