Anthropic Warns the US: China's AI Could Surpass America by 2028 Without Urgent Action

Source: Hacker News · Topic: Anthropic · Archive: May 2026
Anthropic has issued a stark warning to US policymakers: without decisive action, China's AI capabilities could surpass those of the United States by 2028. The lab's internal assessment finds that current export controls are insufficient against Beijing's massive investments in compute infrastructure and talent.

Anthropic, the AI safety and research lab founded by former OpenAI employees, has escalated the debate over US-China AI competition by presenting a concrete, data-driven timeline. According to internal assessments shared with policymakers, China is on track to achieve parity with, or even overtake, the United States in frontier AI capabilities as early as 2028. The warning is not abstract: it is based on China's accelerating progress in large-scale model training, domestic chip development, and the sheer volume of engineering talent being mobilized.

While US labs like OpenAI, Google DeepMind, and Anthropic itself still lead in fundamental research and cutting-edge model performance, the gap in applied AI, hardware integration, and scale of deployment is narrowing faster than many realize. The core issue is no longer just about banning advanced chips; it is about the entire ecosystem: China's state-backed push for AI chip self-sufficiency, its vast data advantages from a population of 1.4 billion, and its aggressive recruitment of top AI researchers.

Anthropic's call for tighter export controls and a national investment surge reflects a growing consensus among industry leaders that the next two years will determine the global AI order for a decade. If the US response remains fragmented, we may witness the emergence of two separate AI worlds: one built on American open-source and proprietary models, the other on Chinese state-aligned systems. The stakes could not be higher; this race will define not just technological leadership, but the very governance frameworks of the future.

Technical Deep Dive

The core of Anthropic's warning rests on a technical reality: the scaling laws that have driven AI progress over the past five years are now well understood by both US and Chinese labs. The key differentiator is no longer architectural innovation alone, but the ability to train models at unprecedented scale—requiring massive clusters of GPUs, optimized interconnects, and efficient data pipelines.
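The point about well-understood scaling laws can be made concrete. The sketch below uses the Chinchilla-style parametric form from Hoffmann et al. (2022) with approximate published constants; the model sizes and token counts are illustrative assumptions, not any lab's actual training plan.

```python
# Chinchilla-style parametric scaling law: L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are approximate published fits (Hoffmann et al., 2022), used here
# purely for illustration.
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Because the curve is smooth and predictable, any lab with enough compute
# can plan a frontier run in advance -- the warning's core premise.
small = predicted_loss(70e9, 1.4e12)   # ~70B params, ~1.4T tokens (assumed)
large = predicted_loss(400e9, 8e12)    # ~400B params, ~8T tokens (assumed)
print(f"predicted loss: {small:.3f} -> {large:.3f}")
```

The takeaway is that once the exponents are known, capability gains become a budgeting exercise rather than a research bet.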

China has made remarkable strides in building domestic AI chips that can substitute for NVIDIA's export-restricted A100 and H100. The most notable example is Huawei's Ascend 910B, which, while still lagging in raw floating-point operations per second (FLOPS) and memory bandwidth, has been successfully integrated into large-scale training clusters. Reports indicate that Chinese labs have deployed clusters of up to 10,000 Ascend chips for training models in the 100-billion-parameter range. The key bottleneck is no longer chip availability but software stack maturity: CUDA remains the gold standard, and Huawei's MindSpore framework, while improving, still lacks the ecosystem depth of PyTorch or TensorFlow.
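A back-of-envelope estimate with the common C ≈ 6·N·D compute rule shows why a 10,000-chip cluster is plausible for 100B-parameter training. The per-chip throughput, token count, and utilization below are assumptions for illustration, not measured Ascend 910B figures.

```python
# Rough training-time estimate using the standard C ~= 6*N*D FLOPs rule.
# All hardware numbers here are illustrative assumptions.
def training_days(n_params, n_tokens, n_chips, peak_flops, utilization):
    total_flops = 6 * n_params * n_tokens            # approximate train compute
    cluster_flops = n_chips * peak_flops * utilization
    return total_flops / cluster_flops / 86_400      # seconds -> days

days = training_days(
    n_params=100e9,     # 100B-parameter model, as in the reported clusters
    n_tokens=2e12,      # 2T training tokens (assumed)
    n_chips=10_000,
    peak_flops=300e12,  # ~300 TFLOPS FP16 per chip (assumed)
    utilization=0.3,    # achievable cluster efficiency (assumed)
)
print(f"~{days:.0f} days")
```

Even with conservative utilization, the run completes in weeks, which is why chip count alone no longer gates the effort; the software stack does.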

Another critical technical dimension is data. China's advantage in sheer volume of labeled data—from surveillance, e-commerce, social media, and government databases—is immense. However, quality remains a concern. US labs have pioneered techniques like reinforcement learning from human feedback (RLHF) and constitutional AI, which require high-quality human annotation. China has responded by scaling its own annotation workforce, estimated at over 500,000 people, and by developing synthetic data generation methods that reduce reliance on human labelers.
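The annotation-quality point can be made concrete with the pairwise (Bradley-Terry) loss typically used to train RLHF reward models; a minimal pure-Python sketch, with reward values made up for illustration.

```python
import math

# Pairwise loss for RLHF reward-model training: the model should score the
# human-preferred ("chosen") response above the rejected one.
def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # -log sigmoid(r_chosen - r_rejected)
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A correctly ranked pair drives the loss toward zero; a mislabeled pair
# produces a large loss -- which is why annotation quality, not just
# annotation volume, determines how good the resulting reward model is.
print(round(preference_loss(2.0, -1.0), 4))  # confident, correct ranking
print(round(preference_loss(-1.0, 2.0), 4))  # mislabeled pair: large loss
```

This is the step where a 500,000-person labeling workforce helps only if the labels are consistent; noisy preferences directly corrupt the training signal.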

On the model architecture front, Chinese labs have been quick to adopt and adapt innovations from US research. The open-source release of Meta's LLaMA series has been a game-changer, allowing Chinese teams to fine-tune and build upon state-of-the-art architectures without starting from scratch. Notable Chinese models include Baidu's ERNIE 4.0, Alibaba's Qwen series, and the open-source InternLM from Shanghai AI Laboratory. These models now rival GPT-3.5 in many benchmarks and are closing the gap on GPT-4-level tasks.
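Fine-tuning on top of open weights such as LLaMA-family models is commonly done with low-rank adapters (LoRA), which is part of why open releases accelerate followers so much. A minimal NumPy sketch of the core idea; all dimensions are chosen arbitrarily for illustration.

```python
import numpy as np

# LoRA in one picture: freeze the pretrained weight W and learn only a
# low-rank update (alpha/rank) * B @ A on top of it.
rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 512, 512, 8, 16

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable, rank x d_in
B = np.zeros((d_out, rank))                   # trainable, zero-init so the
                                              # adapter starts as a no-op

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Equivalent to x @ (W + (alpha/rank) * B @ A).T, but the merged weight
    # is never materialized; only the tiny A and B receive gradients.
    return x @ W.T + (alpha / rank) * (x @ A.T) @ B.T

trainable = A.size + B.size
print(f"trainable params: {trainable} of {W.size} "
      f"({100 * trainable / W.size:.1f}%)")
```

Training ~3% of the parameters per layer is what makes adapting a released LLaMA checkpoint feasible on modest hardware, without repeating the original pretraining run.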

Benchmark Comparison (Selected Models, as of Q2 2025):

| Model | Parameters | MMLU Score | HumanEval (Code) | Cost/1M tokens (inference) |
|---|---|---|---|---|
| GPT-4o (OpenAI) | ~200B (est.) | 88.7 | 87.2 | $5.00 |
| Claude 3.5 Sonnet (Anthropic) | — | 88.3 | 84.1 | $3.00 |
| Gemini 1.5 Pro (Google) | — | 87.8 | 83.5 | $3.50 |
| ERNIE 4.0 (Baidu) | ~100B (est.) | 82.1 | 71.4 | $1.20 |
| Qwen-72B (Alibaba) | 72B | 80.5 | 68.9 | $0.80 |
| InternLM-2 (Shanghai AI Lab) | 20B | 79.3 | 65.2 | Open-source |

Data Takeaway: While US models still lead by 6-8 points on MMLU and 15-18 points on code generation, Chinese models are rapidly improving and are significantly cheaper to run. The cost advantage—often 3-4x lower—allows Chinese companies to deploy AI at scale in ways that US firms cannot match, especially in price-sensitive markets.

For readers interested in the open-source side, the GitHub repository InternLM (over 15,000 stars) provides a full training and inference framework for large language models, including support for hybrid parallelism and efficient fine-tuning. Another key repo is ColossalAI (over 40,000 stars), which offers optimized training strategies for large models on limited hardware—a critical capability for Chinese labs facing chip constraints.
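The appeal of such frameworks is easiest to see in the memory arithmetic. Below is a rough per-GPU estimate under hybrid parallelism; the byte-count convention (mixed-precision Adam) and the 70B example are illustrative assumptions, not ColossalAI's exact accounting.

```python
# Rough per-GPU memory for model state under hybrid parallelism.
# Convention (assumed): fp16 weights + grads = 4 bytes/param, and
# fp32 master weights + Adam moments = 12 bytes/param of optimizer state.
def per_gpu_gb(n_params, tensor_parallel, pipeline_parallel, zero_shards=1):
    shard = n_params / (tensor_parallel * pipeline_parallel)
    weights_and_grads = shard * (2 + 2)    # fp16 weights + fp16 grads
    optimizer = shard * 12 / zero_shards   # ZeRO-sharded optimizer state
    return (weights_and_grads + optimizer) / 1e9

# A 70B model does not fit on any single GPU, but TP=8 x PP=4 plus ZeRO
# across 8 data-parallel replicas brings the shard down to commodity size.
print(f"{per_gpu_gb(70e9, 1, 1):.0f} GB unsharded")
print(f"{per_gpu_gb(70e9, 8, 4, zero_shards=8):.1f} GB sharded")
```

This is precisely the capability that matters for labs constrained to weaker chips: clever sharding substitutes for raw per-device memory and FLOPS.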

Key Players & Case Studies

The US-China AI race is not a monolith; it involves distinct players with different strategies and track records.

US Side:
- OpenAI remains the benchmark setter with GPT-4o, but its closed-source approach limits its influence in China. The company's decision not to release model weights has pushed Chinese labs to rely on open-source alternatives.
- Anthropic has positioned itself as the safety-conscious alternative, but its warning about China reflects a pragmatic recognition that safety cannot be achieved in a vacuum—if China builds unsafe AI first, the consequences are global.
- Google DeepMind is investing heavily in Gemini, but its corporate structure and slower deployment cycles have allowed Chinese competitors to catch up in specific domains like multimodal AI.
- NVIDIA is the linchpin: its chips power nearly all US AI training, and export controls have created a black market and spurred Chinese domestic alternatives.

China Side:
- Huawei is the most critical player. Its Ascend chips are now the primary alternative to NVIDIA in China, and the company has built a full-stack AI ecosystem including the MindSpore framework and ModelArts platform. However, production yields and software maturity remain challenges.
- Baidu has the deepest AI research heritage in China, with ERNIE 4.0 being the most widely deployed model in government and enterprise applications. Its advantage lies in vertical integration: it controls search, cloud, autonomous driving, and AI chips (Kunlun).
- Alibaba leverages its e-commerce and cloud dominance to deploy Qwen across millions of merchants. Its strength is in practical, cost-effective AI for commerce and logistics.
- Shanghai AI Laboratory represents the state-backed research arm, producing open-source models like InternLM that are freely available to Chinese developers. This accelerates the ecosystem in ways that closed US models cannot.

Comparative Strategy Table:

| Dimension | US Approach | China Approach |
|---|---|---|
| Chip strategy | Rely on NVIDIA; export controls | Domestic substitution (Huawei Ascend, Cambricon) |
| Model strategy | Closed-source (OpenAI), semi-open (Meta LLaMA) | Open-source (InternLM, Qwen) + state-aligned (ERNIE) |
| Talent pipeline | Top universities + global recruitment | Massive domestic engineering pool + returnees |
| Data advantage | High-quality, diverse but fragmented | Vast, centralized, but lower quality per unit |
| Deployment speed | Slower due to regulation and safety concerns | Fast, pragmatic, often with fewer guardrails |

Data Takeaway: China's strategy of open-source models and domestic chips creates a self-reinforcing ecosystem that can scale rapidly, while the US approach, though superior in quality, is more fragmented and vulnerable to supply chain disruptions.

Industry Impact & Market Dynamics

The implications of Anthropic's warning extend far beyond geopolitics. The AI industry is bracing for a potential bifurcation of the global market into two incompatible technology stacks.

Market Size Projections:

| Region | AI Market Size 2024 | Projected 2028 | CAGR (2024-2028) |
|---|---|---|---|
| United States | $120B | $350B | ~31% |
| China | $45B | $180B | ~41% |
| Rest of World | $60B | $150B | ~26% |

Data Takeaway: China's AI market is growing at a significantly faster rate (~41% CAGR vs ~31% for the US), driven by state investment and lower deployment costs. By 2028, China could represent over 25% of the global AI market, up from 20% in 2024.
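The growth rates can be recomputed directly from the table's 2024 and 2028 endpoints with the standard compound-annual-growth-rate formula, a useful sanity check on projections like these.

```python
# CAGR = (end / start) ** (1 / years) - 1, over the 4-year 2024 -> 2028 window.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

for region, start, end in [("US", 120, 350), ("China", 45, 180),
                           ("Rest of World", 60, 150)]:
    print(f"{region}: {cagr(start, end, 4):.0%}")
```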

Funding Dynamics:
- US AI startups raised $45B in 2024, but a growing share is going to defense and enterprise applications rather than foundational research.
- Chinese AI startups raised $15B in 2024, but state-backed entities like the National AI Development Fund have committed an additional $50B over five years.
- The key difference is that Chinese funding is more concentrated on hardware and infrastructure, while US funding is more dispersed across applications.

Business Model Divergence:
- US companies are moving toward subscription and API-based models (e.g., ChatGPT Plus, Claude Pro), with high margins but slower adoption.
- Chinese companies are embedding AI into existing platforms (e.g., Alibaba's Qwen in Taobao, Baidu's ERNIE in search), monetizing through increased engagement rather than direct AI fees.
- This means Chinese AI is more pervasive but less profitable per user, while US AI is more profitable but less ubiquitous.

Risks, Limitations & Open Questions

Anthropic's warning, while urgent, is not without its own limitations and risks.

Overreliance on Scaling Laws: The assumption that China will catch up by scaling existing architectures may be flawed. If the next breakthrough requires architectural innovation (e.g., new attention mechanisms, neuro-symbolic integration), US labs with deeper research traditions may retain an edge. China's strength is in engineering and optimization, not fundamental discovery.

Export Control Evasion: Current controls have already been circumvented through third-party countries, shell companies, and smuggling. A more aggressive control regime could trigger a decoupling that harms US companies more than China—NVIDIA alone lost an estimated $5B in revenue from China in 2024.

Talent Flow: The US still attracts the best AI researchers globally. However, increasing visa restrictions and a hostile political climate are driving some Chinese-born researchers back to China. If this trend accelerates, the talent gap could narrow faster than hardware improvements.

Ethical Concerns: China's AI development is less constrained by safety and ethical considerations. This allows faster deployment but also increases the risk of catastrophic failures—biased models, surveillance overreach, or uncontrolled autonomous systems. A race to the bottom in safety standards could harm everyone.

Open Question: Can the US maintain its lead by focusing on quality and safety, or will it be forced to compromise these values to keep pace with China's speed? This is the central dilemma of the next three years.

AINews Verdict & Predictions

Anthropic's 2028 timeline is not alarmist—it is a sober, data-driven assessment that should be taken seriously. But the solution is not simply more export controls. The US needs a comprehensive national AI strategy that includes:

1. Massive investment in domestic chip manufacturing (beyond the CHIPS Act) to reduce reliance on Taiwan and ensure supply chain security.
2. A national AI compute infrastructure that provides subsidized access to compute for researchers and startups, similar to what China's government is doing.
3. Streamlined visa pathways for AI talent to reverse the brain drain.
4. International alliances to set global standards for AI safety and interoperability, preventing a fragmented AI world.

Our Predictions:
- By 2027, China will achieve parity with the US in model performance on standard benchmarks (MMLU, HumanEval), but US models will still lead in safety, alignment, and reliability.
- By 2028, the global AI market will be effectively split into two ecosystems: one centered on US hardware (NVIDIA) and software (OpenAI, Anthropic, Google), and one centered on Chinese hardware (Huawei) and open-source models.
- The most likely outcome is not a single winner, but a prolonged coexistence that forces companies and governments to choose sides—with significant economic and security consequences.
- The next major breakthrough—whether in reasoning, multimodality, or agentic AI—will determine which ecosystem gains a lasting advantage. The race is not over, but the window for decisive action is closing fast.

What to Watch: The next 12 months will be critical. Watch for: (1) NVIDIA's next-generation chip (Rubin) and whether it can be exported to China under new rules; (2) Huawei's ability to scale Ascend production to meet demand; (3) the release of GPT-5 and whether it widens the gap or is matched by Chinese models within months; (4) any major AI safety incident in China that could shift the global narrative.
