Technical Deep Dive
The 1.2% revenue growth figure is not a macroeconomic failure but a signal of compositional change. To understand why, we must examine the underlying architecture of China's economy through the lens of sectoral energy and compute intensity.
Traditional heavy industries—steel, cement, real estate—are energy-intensive but compute-light. Their growth is capped by overcapacity and environmental regulation. In contrast, AI and semiconductor manufacturing are compute-intensive and increasingly energy-intensive. Training a single frontier model like GPT-4 is estimated to consume 50-100 GWh of electricity, equivalent to the annual usage of 5,000-10,000 U.S. homes. Inference at scale multiplies this demand.
This creates a direct coupling between AI progress and energy costs. The marginal cost of a token is now a function of chip efficiency (FLOPS per watt), data center cooling technology, and the wholesale price of electricity. Companies that can secure long-term power purchase agreements (PPAs) at $0.03/kWh versus $0.08/kWh gain a structural advantage that no amount of algorithmic optimization can fully offset.
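The coupling above can be made concrete with a back-of-envelope calculation. This is an illustrative sketch only: the FLOPs-per-token, effective chip efficiency, and PUE figures below are assumed for illustration and are not measured values from the article.

```python
# Illustrative sketch: marginal energy cost of token generation.
# All numeric inputs are assumptions for illustration, not measurements.

def energy_cost_per_million_tokens(
    flops_per_token: float,   # compute per generated token (FLOPs), assumed
    flops_per_watt: float,    # effective chip efficiency (FLOPS/W), assumed
    pue: float,               # power usage effectiveness (cooling/overhead)
    price_per_kwh: float,     # wholesale electricity price ($/kWh)
) -> float:
    joules = flops_per_token / flops_per_watt   # FLOPs / (FLOPS/W) = joules
    kwh = joules * pue / 3.6e6                  # 1 kWh = 3.6e6 J
    return kwh * price_per_kwh * 1e6            # dollars per million tokens

# Rough rule of thumb: ~2 * parameters FLOPs per token for a 70B model.
base = dict(flops_per_token=1.4e11, flops_per_watt=5e11, pue=1.3)
cheap = energy_cost_per_million_tokens(**base, price_per_kwh=0.03)
dear = energy_cost_per_million_tokens(**base, price_per_kwh=0.08)
print(f"${cheap:.4f} vs ${dear:.4f} per 1M tokens")
```

Whatever the absolute numbers, the structural point survives: the two PPA scenarios differ by the full 8/3 price ratio, and no algorithmic improvement changes that multiplier.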
A key open-source project tracking this is the Energy-Aware AI Benchmark (GitHub repo: `energy-ai-benchmark`, ~4,200 stars), which measures energy consumption per inference for popular models across different hardware. Recent results show Llama 3 70B in FP16 on an NVIDIA H100 consuming 2.1 kWh per 1,000 inferences, while an INT4-quantized version of the same model on a custom ASIC like Groq's LPU consumes 0.9 kWh, a 57% reduction. This gap will only widen as chip architectures diverge.
| Model | Hardware | Energy per 1k inferences (kWh) | Cost per 1k inferences ($ at $0.06/kWh) |
|---|---|---|---|
| Llama 3 70B (FP16) | NVIDIA H100 | 2.1 | $0.126 |
| Llama 3 70B (INT4) | NVIDIA H100 | 1.1 | $0.066 |
| Llama 3 70B (INT4) | Groq LPU | 0.9 | $0.054 |
| Mistral 7B (FP16) | Apple M3 Max | 0.08 | $0.0048 |
Data Takeaway: Energy cost per inference varies by over 25x between the most and least efficient setups. As AI moves from training to inference-heavy applications, this cost differential will determine which business models survive.
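The cost column in the table is a direct product of the benchmark's energy figures and the assumed electricity rate. A minimal sketch reproducing it, using the article's numbers:

```python
# Recomputing the table's cost column: cost = kWh per 1k inferences * $/kWh.
# Energy figures are the article's benchmark numbers; $0.06/kWh is the
# rate assumed in the table header.
PRICE = 0.06  # $/kWh

setups = {
    "Llama 3 70B (FP16) / NVIDIA H100": 2.1,
    "Llama 3 70B (INT4) / NVIDIA H100": 1.1,
    "Llama 3 70B (INT4) / Groq LPU":    0.9,
    "Mistral 7B (FP16) / Apple M3 Max": 0.08,
}

for name, kwh in setups.items():
    print(f"{name}: ${kwh * PRICE:.4f} per 1k inferences")

# Spread between the most and least efficient setups:
spread = max(setups.values()) / min(setups.values())
print(f"ratio: {spread:.2f}x")  # 2.1 / 0.08 = 26.25x
```

Note that the 25x+ spread is driven almost entirely by model size and hardware class, not by the electricity price, which scales all rows equally.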
Key Players & Case Studies
Berkshire Hathaway is the most instructive case. Its Q1 2025 net profit of $10.1 billion was driven by three sectors: energy (Berkshire Hathaway Energy), insurance (Geico, General Re), and consumer staples (Coca-Cola, Kraft Heinz). These are all businesses with predictable cash flows and low capital intensity relative to tech. Buffett's move is a bet that in a world where AI compute costs are rising, assets that generate real earnings without requiring massive compute budgets will be revalued upward. This is the opposite of the 'growth at any cost' mantra that dominated 2020-2023.
Chinese AI chip companies are racing to close the efficiency gap. Cambricon Technologies (Cambricon, 寒武纪) recently released its MLU590 chip, claiming 256 TOPS at a 250W TDP, or roughly 1.02 TOPS/watt. By comparison, NVIDIA's H100 achieves approximately 2.8 TOPS/watt (1,979 INT8 TOPS at 700W). Cambricon's advantage lies in its compatibility with the domestic software stack and its ability to secure government contracts. However, its reliance on SMIC's N+2 process node (equivalent to 7nm) limits transistor density.
Huawei's Ascend 910B is another contender. Used extensively in China's national AI compute centers, it offers roughly 80% of H100 performance at 60% of the power draw, according to internal benchmarks. Huawei has also developed a proprietary AI framework, MindSpore, whose graph compiler optimizes energy usage for specific model architectures. The GitHub repo `mindspore-ai/mindspore` has over 5,100 stars and is actively maintained.
| Chip | TOPS (INT8) | TDP (W) | TOPS/Watt | Process Node | Availability |
|---|---|---|---|---|---|
| NVIDIA H100 | 1,979 | 700 | 2.83 | 4nm (TSMC) | Global |
| Huawei Ascend 910B | 1,024 | 310 | 3.30 | 7nm (SMIC) | China only |
| Cambricon MLU590 | 256 | 250 | 1.02 | 7nm (SMIC) | China only |
| AMD MI300X | 1,306 | 750 | 1.74 | 5nm (TSMC) | Global |
Data Takeaway: Huawei's Ascend 910B achieves a higher TOPS/watt ratio than the H100, but this is partly due to lower absolute performance and a more aggressive power management strategy. The real bottleneck is process node access—without TSMC's 3nm or 4nm, Chinese chips will struggle to match NVIDIA's absolute throughput.
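The TOPS/watt column follows directly from the table's TOPS and TDP figures; a quick sanity check, using only the numbers above:

```python
# Sanity-checking the TOPS/watt column from the table's TOPS and TDP figures.
chips = {
    "NVIDIA H100":        (1979, 700),
    "Huawei Ascend 910B": (1024, 310),
    "Cambricon MLU590":   (256, 250),
    "AMD MI300X":         (1306, 750),
}

for name, (tops, tdp) in chips.items():
    print(f"{name}: {tops / tdp:.2f} TOPS/W")
```

This also quantifies the takeaway's caveat: the Ascend 910B leads on efficiency (3.30 vs 2.83 TOPS/watt) while delivering only about 52% of the H100's absolute INT8 throughput (1,024 vs 1,979 TOPS).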
Industry Impact & Market Dynamics
The 1.2% revenue growth masks a dramatic internal reallocation. AINews estimates that AI-related capital expenditure (data centers, chips, cooling infrastructure) in China grew 38% year-on-year in 2025, reaching approximately ¥280 billion. This is being funded by divestment from real estate and traditional manufacturing. The net effect is a 'hollowing out' of legacy sectors and a 'fattening' of tech infrastructure.
The OPEC+ decision to increase output in June is directly relevant. Higher oil supply puts downward pressure on energy prices globally, and to the extent cheaper fuel feeds through to wholesale electricity, AI operators benefit disproportionately. A 10% drop in electricity costs translates to a 3-5% reduction in total cost of ownership for a large data center. For a company like ByteDance, which operates hundreds of thousands of GPUs, this could mean hundreds of millions in annual savings.
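The 3-5% figure is consistent with electricity making up roughly 30-50% of data center TCO. A minimal sketch of that arithmetic, where the energy share is an assumption rather than a figure from the article:

```python
# Back-of-envelope check of the "10% electricity drop -> 3-5% TCO" claim.
# It holds if energy is roughly 30-50% of total cost of ownership
# (an assumed share, used here only for illustration).

def tco_savings(energy_share: float, price_drop: float) -> float:
    """Fractional TCO reduction from a fractional electricity price drop."""
    return energy_share * price_drop

for share in (0.30, 0.50):
    saving = tco_savings(share, 0.10)
    print(f"energy at {share:.0%} of TCO: {saving:.1%} TCO reduction")
```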
However, the relationship is not one-way. AI itself is being used to optimize energy grids. State Grid Corporation of China has deployed a reinforcement learning-based system (developed in partnership with Alibaba Cloud) that balances load across 1.2 billion smart meters, reducing peak demand by 4.3% in pilot provinces. This is a rare positive feedback loop: AI reduces energy waste, which lowers costs, which enables more AI.
| Sector | 2025 Revenue Growth | AI-related Capex Growth | Energy Cost Sensitivity |
|---|---|---|---|
| Traditional Manufacturing | -2.1% | +5% | High |
| AI & Semiconductors | +24.6% | +38% | Very High |
| New Energy (Solar, Wind) | +18.3% | +22% | Low (generator) |
| Financial Services | +3.4% | +15% | Low |
| Real Estate | -8.7% | -12% | Medium |
Data Takeaway: The divergence between sectors is stark. AI and new energy are growing revenue and capex simultaneously, while real estate and traditional manufacturing are shrinking. The market is voting with its wallet for compute and clean power.
Risks, Limitations & Open Questions
The most significant risk is that the energy-AI coupling creates a new form of inequality. Companies with access to cheap, reliable power (e.g., those near hydroelectric dams or nuclear plants) will have a permanent cost advantage over those in energy-constrained regions. This could concentrate AI development in a handful of geographic clusters, reducing competition and innovation diversity.
Another risk is the 'rebound effect.' As AI becomes more energy-efficient, the lower cost may induce more usage, leading to a net increase in total energy consumption. This is already happening: Jevons paradox in action. The International Energy Agency projects that data center electricity consumption could double from 2024 to 2028, reaching 1,050 TWh—equivalent to Japan's entire electricity generation.
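The rebound mechanism can be sketched with a toy demand model. The elasticity value below is assumed purely for illustration; the point is the sign of the result, not its magnitude:

```python
# Toy model of the rebound effect (Jevons paradox): if demand for inference
# is sufficiently price-elastic, an efficiency gain raises total energy use.
# The elasticity value is an assumption for illustration.

def net_energy_change(efficiency_gain: float, elasticity: float) -> float:
    """Fractional change in total energy after an efficiency gain.

    Energy per unit of usage falls by `efficiency_gain`; usage grows by
    elasticity * efficiency_gain (a linearized demand response).
    """
    usage_growth = elasticity * efficiency_gain
    return (1 + usage_growth) * (1 - efficiency_gain) - 1

# 30% efficiency gain with demand elasticity of 2 (assumed):
print(f"{net_energy_change(0.30, 2.0):+.1%}")  # positive -> net usage rises
```

With elasticity 2, a 30% efficiency gain yields a net increase of 12% in total consumption; with elasticity 0.5, the same gain yields a net decrease. Which regime AI inference demand sits in is exactly the open question.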
There is also the question of measurement. The 73.01 trillion yuan revenue figure is based on listed companies, which represent only a portion of the economy. Private AI startups and unlisted energy companies are not captured. The true scale of the shift may be larger than reported.
Finally, geopolitical risk looms. If the U.S. further restricts the export of advanced chips to China, Chinese AI companies will be forced to rely on less efficient domestic alternatives, raising their energy costs and slowing deployment. This could create a bifurcated global AI market: one with cheap energy and advanced chips (U.S., Europe), and one with constrained supply (China, Russia).
AINews Verdict & Predictions
Prediction 1: By mid-2026, the cost of energy will be the single most discussed metric in AI earnings calls, surpassing model accuracy or parameter count. Investors will demand disclosure of PPA terms and average electricity cost per FLOP.
Prediction 2: The next major M&A wave will not be AI companies acquiring each other, but energy companies acquiring AI data center operators. Expect a major oil or utility firm to buy a hyperscaler within 18 months.
Prediction 3: China's 1.2% revenue growth will be revised upward in 2026 as AI-related investments begin to generate revenue. The structural shift is real, but it takes 12-18 months for capex to translate into top-line growth.
Prediction 4: Open-source energy-aware AI tools (like the `energy-ai-benchmark` repo) will become standard in model evaluation. Models that cannot prove their energy efficiency will be rejected by enterprise buyers.
What to watch next: The June OPEC+ meeting will set the tone for energy prices. If output increases more than expected, AI stocks will rally. If output is constrained, expect a rotation into energy producers and a sell-off in compute-heavy AI names. The market is now a single equation: AI returns = (algorithmic gain) / (energy cost). Solve for the denominator.