AI and Energy Costs Reshape Market Logic as Revenue Growth Slows to 1.2%

May 2026
China's listed companies posted just 1.2% revenue growth in 2025, but AINews sees this not as weakness but as a pivot from scale to substance. Meanwhile, Berkshire Hathaway's Q1 profit jumped to $10.1B and major oil producers announced June output increases—three signals that AI deployment costs and energy security are now the market's new north star.

The headline figure—73.01 trillion yuan in total revenue for 2025, up only 1.2% year-on-year—masks a tectonic shift. AINews analysis finds that traditional industrial sectors have hit a growth ceiling, while AI, semiconductors, and new energy are absorbing capital and talent at an accelerating rate. This is not a slowdown; it is a reallocation.

Berkshire Hathaway's first-quarter net profit of $10.106 billion, driven by energy, insurance, and consumer staples, confirms that value-oriented capital is rotating toward assets with real earnings power in a high-rate, supply-constrained world. Simultaneously, major oil-producing nations announcing a June output increase signals a proactive defense of market share amid rising global energy demand.

These three events converge on a single insight: the cost of running AI at scale is becoming inseparable from the cost of energy. The era of 'storytelling' valuations is giving way to an era of 'cost accounting' fundamentals. The next phase of innovation will be won not just by the best algorithms, but by those who can power them cheapest. This is the underlying logic that will define 2026 and beyond.

Technical Deep Dive

The 1.2% revenue growth figure is not a macroeconomic failure but a signal of compositional change. To understand why, we must examine the underlying architecture of China's economy through the lens of sectoral energy and compute intensity.

Traditional heavy industries—steel, cement, real estate—are energy-intensive but compute-light. Their growth is capped by overcapacity and environmental regulation. In contrast, AI and semiconductor manufacturing are compute-intensive and increasingly energy-intensive. Training a single frontier model like GPT-4 is estimated to consume 50-100 GWh of electricity, equivalent to the annual usage of 5,000-10,000 U.S. homes. Inference at scale multiplies this demand.
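The homes equivalence is easy to sanity-check. A minimal sketch, assuming a ballpark of roughly 10.5 MWh of electricity per U.S. home per year (our assumption, not a figure from the article):

```python
# Back-of-envelope check of the training-energy comparison above.
# ANNUAL_HOME_MWH is an assumed ballpark, not a sourced figure.
ANNUAL_HOME_MWH = 10.5

for train_gwh in (50, 100):
    homes = train_gwh * 1_000 / ANNUAL_HOME_MWH  # 1 GWh = 1,000 MWh
    print(f"{train_gwh} GWh ≈ {homes:,.0f} U.S. homes")
```

The result, roughly 4,800 to 9,500 homes, is consistent with the 5,000-10,000 range cited above.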

This creates a direct coupling between AI progress and energy costs. The marginal cost of a token is now a function of chip efficiency (FLOPS per watt), data center cooling technology, and the wholesale price of electricity. Companies that can secure long-term power purchase agreements (PPAs) at $0.03/kWh versus $0.08/kWh gain a structural advantage that no amount of algorithmic optimization can fully offset.
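The coupling can be made concrete with a back-of-envelope model. The sketch below is illustrative only: the FLOPs-per-token rule of thumb, the sustained-efficiency figure, and the 1.2 PUE are all assumptions, not measurements.

```python
# Marginal energy cost of generating tokens, as a function of chip
# efficiency and electricity price. All parameter values are illustrative.

def cost_per_million_tokens(flops_per_token: float,
                            flops_per_watt: float,
                            pue: float,
                            price_per_kwh: float) -> float:
    joules = flops_per_token * 1e6 / flops_per_watt  # FLOPs / (FLOPs/s/W) = J
    kwh = joules * pue / 3.6e6                       # 1 kWh = 3.6e6 J
    return kwh * price_per_kwh

# ~2 * 70e9 FLOPs per generated token for a 70B dense model (rough rule
# of thumb), and an assumed sustained efficiency of 1e12 FLOPS per watt.
cheap = cost_per_million_tokens(1.4e11, 1e12, 1.2, 0.03)
costly = cost_per_million_tokens(1.4e11, 1e12, 1.2, 0.08)
print(f"${cheap:.4f} vs ${costly:.4f} per 1M tokens")
```

Whatever the absolute numbers, the ratio is set by the electricity price alone: the $0.08/kWh operator pays about 2.7x more per token than the $0.03/kWh operator, which is exactly the structural gap a long-term PPA locks in.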

A key open-source project tracking this is the Energy-Aware AI Benchmark (GitHub repo: `energy-ai-benchmark`, ~4,200 stars). It measures the energy consumption per inference for popular models across different hardware. Recent results show that a quantized Llama 3 70B running on an NVIDIA H100 consumes 2.1 kWh per 1,000 inferences, while the same model on a custom ASIC like Groq's LPU consumes 0.9 kWh—a 57% reduction. This gap will only widen as chip architectures diverge.

| Model | Hardware | Energy per 1k inferences (kWh) | Cost per 1k inferences ($ at $0.06/kWh) |
|---|---|---|---|
| Llama 3 70B (FP16) | NVIDIA H100 | 2.1 | $0.126 |
| Llama 3 70B (INT4) | NVIDIA H100 | 1.1 | $0.066 |
| Llama 3 70B (INT4) | Groq LPU | 0.9 | $0.054 |
| Mistral 7B (FP16) | Apple M3 Max | 0.08 | $0.0048 |

Data Takeaway: Energy cost per inference varies by over 25x between the most and least efficient setups. As AI moves from training to inference-heavy applications, this cost differential will determine which business models survive.
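The cost column in the table is a straight multiplication of measured energy by the electricity price; a quick sketch reproduces it along with the 25x spread (setup names abbreviated for readability):

```python
# kWh per 1,000 inferences, from the benchmark table above.
setups = {
    "Llama 3 70B FP16 / H100": 2.1,
    "Llama 3 70B INT4 / H100": 1.1,
    "Llama 3 70B INT4 / Groq LPU": 0.9,
    "Mistral 7B FP16 / M3 Max": 0.08,
}
PRICE_PER_KWH = 0.06  # the $0.06/kWh assumed in the table

for name, kwh in setups.items():
    print(f"{name}: ${kwh * PRICE_PER_KWH:.4f} per 1k inferences")

spread = max(setups.values()) / min(setups.values())
print(f"most vs least efficient: {spread:.1f}x")
```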

Key Players & Case Studies

Berkshire Hathaway is the most instructive case. Its first-quarter net profit of $10.1 billion was driven by three sectors: energy (Berkshire Hathaway Energy), insurance (Geico, General Re), and consumer staples (Coca-Cola, Kraft Heinz). These are all businesses with predictable cash flows and low capital intensity relative to tech. Buffett's move is a bet that in a world where AI compute costs are rising, assets that generate real earnings without requiring massive compute budgets will be revalued upward. This is the opposite of the 'growth at any cost' mantra that dominated 2020-2023.

Chinese AI chip companies are racing to close the efficiency gap. Cambricon Technologies (寒武纪) recently released its MLU590 chip, claiming 256 TOPS at 250W TDP—a 1.02 TOPS/watt ratio. By comparison, NVIDIA's H100 achieves approximately 2.8 TOPS/watt (1,979 TOPS at 700W, per the table below). Cambricon's advantage lies in its compatibility with the domestic software stack and its ability to secure government contracts. However, its reliance on SMIC's N+2 process node (equivalent to 7nm) limits transistor density.

Huawei's Ascend 910B is another contender. Used extensively in China's national AI compute centers, it offers roughly 80% of H100 performance at 60% of the power draw, according to internal benchmarks. Huawei has also developed MindSpore, an open-source AI framework whose graph compiler optimizes energy usage for specific model architectures. The GitHub repo `mindspore-ai/mindspore` has over 5,100 stars and is actively maintained.

| Chip | TOPS (INT8) | TDP (W) | TOPS/Watt | Process Node | Availability |
|---|---|---|---|---|---|
| NVIDIA H100 | 1,979 | 700 | 2.83 | 4nm (TSMC) | Global |
| Huawei Ascend 910B | 1,024 | 310 | 3.30 | 7nm (SMIC) | China only |
| Cambricon MLU590 | 256 | 250 | 1.02 | 7nm (SMIC) | China only |
| AMD MI300X | 1,306 | 750 | 1.74 | 5nm (TSMC) | Global |

Data Takeaway: Huawei's Ascend 910B achieves a higher TOPS/watt ratio than the H100, but this is partly due to lower absolute performance and a more aggressive power management strategy. The real bottleneck is process node access—without TSMC's 3nm or 4nm, Chinese chips will struggle to match NVIDIA's absolute throughput.
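The TOPS/watt column is just throughput divided by TDP; recomputing it from the table's own numbers confirms the ratios and the point that efficiency and absolute throughput are different races:

```python
# (INT8 TOPS, TDP in watts) from the chip table above.
chips = {
    "NVIDIA H100": (1979, 700),
    "Huawei Ascend 910B": (1024, 310),
    "Cambricon MLU590": (256, 250),
    "AMD MI300X": (1306, 750),
}

efficiency = {name: tops / tdp for name, (tops, tdp) in chips.items()}
for name, e in sorted(efficiency.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {e:.2f} TOPS/W")
```

The Ascend 910B tops the efficiency ranking, yet delivers barely half the H100's absolute throughput—both facts fall out of the same two columns.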

Industry Impact & Market Dynamics

The 1.2% revenue growth masks a dramatic internal reallocation. AINews estimates that AI-related capital expenditure (data centers, chips, cooling infrastructure) in China grew 38% year-on-year in 2025, reaching approximately ¥280 billion. This is being funded by divestment from real estate and traditional manufacturing. The net effect is a 'hollowing out' of legacy sectors and a 'fattening' of tech infrastructure.

The OPEC+ decision to increase output in June is directly relevant. Higher oil supply will lower energy prices globally, which benefits AI operators disproportionately. A 10% drop in electricity costs translates to a 3-5% reduction in total cost of ownership for a large data center. For a company like ByteDance, which operates hundreds of thousands of GPUs, this could mean hundreds of millions in annual savings.
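The 3-5% TCO figure follows directly if electricity makes up 30-50% of a data center's total cost of ownership; that share is our assumption, not a figure stated above.

```python
# To first order, a price drop reduces TCO in proportion to
# electricity's share of total cost. Shares here are illustrative.
def tco_reduction(electricity_share: float, price_drop: float) -> float:
    return electricity_share * price_drop

for share in (0.30, 0.50):
    print(f"electricity at {share:.0%} of TCO: "
          f"a 10% price drop cuts TCO by {tco_reduction(share, 0.10):.1%}")
```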

However, the relationship is not one-way. AI itself is being used to optimize energy grids. State Grid Corporation of China has deployed a reinforcement learning-based system (developed in partnership with Alibaba Cloud) that balances load across 1.2 billion smart meters, reducing peak demand by 4.3% in pilot provinces. This is a rare positive feedback loop: AI reduces energy waste, which lowers costs, which enables more AI.
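State Grid's actual RL system is not public, but the mechanics of peak shaving can be sketched with a toy heuristic: move a small deferrable slice of the peak hour's load into the cheapest hour. Everything below is illustrative, not a description of the deployed system.

```python
# Toy peak-shaving heuristic: shift a deferrable fraction of the peak
# hour's load into the lowest-load hour. An RL policy learns a far
# richer version of this trade-off across millions of meters.
def shift_peak(hourly_load, deferrable_fraction=0.05):
    load = list(hourly_load)
    peak, trough = load.index(max(load)), load.index(min(load))
    moved = load[peak] * deferrable_fraction
    load[peak] -= moved
    load[trough] += moved
    return load

before = [50.0, 40.0, 90.0, 60.0]   # GW per hour, made-up numbers
after = shift_peak(before)
print(f"peak: {max(before):.1f} -> {max(after):.1f} GW")
```

Note that total consumption is unchanged; only the peak drops—the same property that makes load balancing valuable without requiring anyone to use less energy.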

| Sector | 2025 Revenue Growth | AI-related Capex Growth | Energy Cost Sensitivity |
|---|---|---|---|
| Traditional Manufacturing | -2.1% | +5% | High |
| AI & Semiconductors | +24.6% | +38% | Very High |
| New Energy (Solar, Wind) | +18.3% | +22% | Low (generator) |
| Financial Services | +3.4% | +15% | Low |
| Real Estate | -8.7% | -12% | Medium |

Data Takeaway: The divergence between sectors is stark. AI and new energy are growing revenue and capex simultaneously, while real estate and traditional manufacturing are shrinking. The market is voting with its wallet for compute and clean power.

Risks, Limitations & Open Questions

The most significant risk is that the energy-AI coupling creates a new form of inequality. Companies with access to cheap, reliable power (e.g., those near hydroelectric dams or nuclear plants) will have a permanent cost advantage over those in energy-constrained regions. This could concentrate AI development in a handful of geographic clusters, reducing competition and innovation diversity.

Another risk is the 'rebound effect.' As AI becomes more energy-efficient, the lower cost may induce more usage, leading to a net increase in total energy consumption. This is already happening: Jevons paradox in action. The International Energy Agency projects that data center electricity consumption could double from 2024 to 2028, reaching 1,050 TWh—equivalent to Japan's entire electricity generation.
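The rebound effect is easy to formalize. In the toy model below every number is illustrative: a 50% efficiency gain halves energy per inference, but if cheaper inference triggers a 3x jump in usage, total consumption still rises 50%.

```python
# Jevons-paradox toy model: total energy after an efficiency gain,
# given how much extra usage the lower cost induces.
def net_energy(baseline_kwh: float,
               efficiency_gain: float,
               usage_multiplier: float) -> float:
    per_unit = baseline_kwh * (1.0 - efficiency_gain)
    return per_unit * usage_multiplier

print(net_energy(100.0, 0.5, 3.0))  # 150.0: more efficient, yet 50% more energy
```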

There is also the question of measurement. The 73.01 trillion yuan revenue figure is based on listed companies, which represent only a portion of the economy. Private AI startups and unlisted energy companies are not captured. The true scale of the shift may be larger than reported.

Finally, geopolitical risk looms. If the U.S. further restricts the export of advanced chips to China, Chinese AI companies will be forced to rely on less efficient domestic alternatives, raising their energy costs and slowing deployment. This could create a bifurcated global AI market: one with cheap energy and advanced chips (U.S., Europe), and one with constrained supply (China, Russia).

AINews Verdict & Predictions

Prediction 1: By mid-2026, the cost of energy will be the single most discussed metric in AI earnings calls, surpassing model accuracy or parameter count. Investors will demand disclosure of PPA terms and average electricity cost per FLOP.

Prediction 2: The next major M&A wave will not be AI companies acquiring each other, but energy companies acquiring AI data center operators. Expect a major oil or utility firm to buy a hyperscaler within 18 months.

Prediction 3: China's 1.2% revenue growth will be revised upward in 2026 as AI-related investments begin to generate revenue. The structural shift is real, but it takes 12-18 months for capex to translate into top-line growth.

Prediction 4: Open-source energy-aware AI tools (like the `energy-ai-benchmark` repo) will become standard in model evaluation. Models that cannot prove their energy efficiency will be rejected by enterprise buyers.

What to watch next: The June OPEC+ meeting will set the tone for energy prices. If output increases more than expected, AI stocks will rally. If output is constrained, expect a rotation into energy producers and a sell-off in compute-heavy AI names. The market is now a single equation: AI returns = (algorithmic gain) / (energy cost). Solve for the denominator.


Further Reading

- The Joint Revolution: Why Reducers Are the New Chips in Humanoid Robotics
- Anthropic's Claude Becomes Engineering Infrastructure Amid Compute Crisis and Musk Alliance
- Kimi Has Cash but No 'DeepSeek Moment' — Why Money Alone Won't Win AI
- Anthropic's $200B Dual-Architecture Bet Reshapes AI Hardware Landscape
