CoreWeave's Paradox: Selling AI Compute at a Loss, How Long Can Nvidia's Workhorse Survive?

May 2026
CoreWeave, a leading player in Nvidia GPU cloud services, is caught in a financial paradox: explosive revenue growth and widening losses at the same time. AINews's analysis reveals a harsh reality: the "picks and shovels" business of GPU rental is far less profitable than it appears.

CoreWeave has emerged as a poster child for the AI infrastructure boom, securing massive Nvidia GPU allocations and signing multi-billion dollar contracts. Yet its financial filings tell a troubling story: in 2024, revenue surged to over $500 million, but net losses also ballooned past $300 million. The company's core problem is a high-leverage, asset-heavy model where it must borrow heavily to purchase the latest Nvidia chips, build data centers, and pay for power—all before earning a single dollar in rental income. As cloud hyperscalers like AWS, Google Cloud, and Microsoft Azure slash GPU rental prices and bundle them with proprietary software, CoreWeave is squeezed. It lacks the high-margin services (like managed databases or model serving) that lock in customers and justify premium pricing. The result: CoreWeave is effectively Nvidia's most leveraged 'workhorse,' bearing the capital risk of the AI buildout while earning only thin margins. This analysis explores the technical, financial, and competitive forces that make CoreWeave's position precarious, and predicts that pure-play GPU rental will consolidate or vanish as the market matures.

Technical Deep Dive

CoreWeave's business is built on a deceptively simple technical stack: massive clusters of Nvidia H100 and B200 GPUs connected via high-speed InfiniBand networking, housed in colocation data centers. The company's core differentiator has been speed—it claims to provision GPU clusters faster than the hyperscalers by using a 'bare metal' approach with minimal virtualization overhead. However, this simplicity is also its Achilles' heel.

The Architecture Trap: CoreWeave's infrastructure lacks the deep software integration that makes AWS's SageMaker or Google's Vertex AI sticky. A customer can spin up a CoreWeave H100 instance in minutes, but they get no managed inference engine, no vector database, no fine-tuning pipeline. The company recently open-sourced a Kubernetes operator for GPU scheduling, but this is a commodity offering. Compare this to AWS's Trainium2 chips paired with the Neuron SDK, which offers a 30-40% cost reduction for specific workloads—but only if you stay in the AWS ecosystem.

Financial Mechanics of a GPU Rental: The unit economics are brutal. An H100 GPU costs roughly $30,000. On a 3-year depreciation schedule, hardware alone works out to about $830 per GPU per month ($30,000 / 36 months), or roughly $1.40 per billed GPU-hour at 80% utilization. Add data center power ($0.10/kWh), cooling, networking, and staff, and the all-in break-even rental price, once financing costs are included, lands around $2.50-3.00 per GPU-hour. Current market rates for on-demand H100s have dropped from $4.00/hour in 2023 to $2.00-2.50/hour in early 2025, driven by oversupply. CoreWeave's 2024 gross margin was just 18%, compared to AWS's estimated 50%+ on compute services.

| GPU Rental Cost Breakdown | CoreWeave (est.) | AWS P5 (est.) |
|---|---|---|
| Hardware Depreciation (3yr) | $1.20/hr | $0.80/hr (volume discount) |
| Power & Cooling | $0.40/hr | $0.35/hr |
| Networking & Colo | $0.30/hr | $0.20/hr |
| Software & Support | $0.10/hr | $0.50/hr (includes SageMaker) |
| Total Cost | $2.00/hr | $1.85/hr |
| Market Rental Price | $2.20/hr | $3.50/hr |
| Gross Margin | ~9% | ~47% |

Data Takeaway: CoreWeave's cost structure is inherently higher per GPU than hyperscalers due to lack of volume discounts on hardware and a less efficient power/cooling footprint. Yet it must price lower to compete, crushing margins. The hyperscalers can subsidize GPU rentals with high-margin software services; CoreWeave cannot.
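The arithmetic behind the table can be checked directly. The sketch below uses only the article's own rough estimates (the $30,000 H100 price, 3-year depreciation, 80% utilization, and the per-hour cost and price figures from the table), not any reported financials:

```python
# Back-of-the-envelope GPU rental economics, using the article's estimates.
# All inputs are rough figures from the analysis above, not audited numbers.

def hourly_depreciation(purchase_price, years, utilization):
    """Hardware cost per *billed* GPU-hour: idle hours still depreciate,
    so lower utilization raises the cost of every hour actually sold."""
    total_hours = years * 365 * 24
    return purchase_price / (total_hours * utilization)

def gross_margin(rental_price, total_cost):
    """Gross margin as a fraction of the rental price."""
    return (rental_price - total_cost) / rental_price

h100_hw = hourly_depreciation(30_000, years=3, utilization=0.8)
print(f"H100 hardware cost: ${h100_hw:.2f}/billed hour")   # ≈ $1.43

# Table estimates: CoreWeave all-in cost $2.00/hr vs $2.20/hr market price;
# AWS all-in cost $1.85/hr vs $3.50/hr list price.
print(f"CoreWeave margin: {gross_margin(2.20, 2.00):.0%}")  # ≈ 9%
print(f"AWS margin: {gross_margin(3.50, 1.85):.0%}")        # ≈ 47%
```

The utilization term is what makes the model so fragile: dropping from 80% to 60% utilization pushes the hardware cost per billed hour from about $1.43 to about $1.90, wiping out the entire margin at a $2.20/hour market price.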

A relevant open-source project to watch is Nvidia's GPU Operator (GitHub: NVIDIA/gpu-operator, 5k+ stars), which automates GPU management in Kubernetes clusters. CoreWeave contributes to it, but it is a commodity: every cloud provider uses it. The company's own open-source tool, the CoreWeave Cloud CLI, has only about 200 stars, indicating limited community traction.

Key Players & Case Studies

The Hyperscaler Squeeze: The three major cloud providers—AWS, Google Cloud, and Microsoft Azure—are CoreWeave's direct competitors and also its suppliers (they lease data center space to CoreWeave in some cases). Each has a distinct strategy:

- AWS: Offers the P5 instance family with H100s, but more importantly, bundles them with SageMaker, Bedrock, and proprietary chips (Trainium2). AWS's 2024 GPU revenue was estimated at $15B, with margins above 40%.
- Google Cloud: Uses its TPU v5p and H100s, but its real weapon is the deep integration with Gemini and Vertex AI. Google also offers committed-use discounts of up to 50% for 3-year contracts, locking customers in.
- Microsoft Azure: The largest investor in CoreWeave (via a $5B compute deal in 2023), but also its biggest competitor. Azure's ND H100 v5 series is priced aggressively, and Microsoft bundles it with OpenAI services and Copilot.

The Nvidia Dynamic: Nvidia is both CoreWeave's lifeline and its master. Nvidia prioritizes CoreWeave for early access to new chips (like B200) because CoreWeave buys in huge volumes and pays upfront. However, Nvidia also sells directly to hyperscalers and emerging competitors like Lambda Labs and Vultr. Nvidia's 2024 data center revenue hit $47.5B, and it has no incentive to protect CoreWeave's margins—it wants maximum GPU sales volume.

| Competitor Strategy Comparison | CoreWeave | AWS | Lambda Labs |
|---|---|---|---|
| GPU Access | Priority (Nvidia partner) | Volume-based | Secondary |
| Software Ecosystem | Minimal (Kubernetes only) | SageMaker, Bedrock | Jupyter, basic ML tools |
| Pricing Model | On-demand + reserved | On-demand + reserved + spot | On-demand only |
| 2024 Est. GPU Instances | ~50,000 H100 | ~500,000 H100 | ~15,000 H100 |
| Gross Margin | ~18% | ~50% | ~25% |

Data Takeaway: CoreWeave's 'priority access' to Nvidia chips is its only real moat, but that advantage is eroding as Nvidia increases production. Meanwhile, Lambda Labs, with a leaner operation, achieves better margins despite having worse GPU access. This suggests CoreWeave's cost structure is bloated.

Industry Impact & Market Dynamics

The GPU cloud market is projected to grow from $10B in 2024 to $40B by 2028 (CAGR 32%), but this masks a brutal bifurcation. The high-margin segment—managed AI services (model serving, fine-tuning, RAG pipelines)—will grow fastest, while pure GPU rental becomes a low-margin commodity. CoreWeave is trapped in the latter.

The Price War: In Q1 2025, H100 on-demand prices dropped 40% year-over-year. Spot instances on AWS now cost as little as $1.20/hour. CoreWeave cannot match this without bleeding cash. The company's debt load is staggering: it raised $12B in debt financing in 2024 alone, secured against its GPU inventory. At a 7% interest rate, that's $840M in annual interest payments—more than its entire 2024 revenue.
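The debt-service claim is simple arithmetic on the article's figures ($12B of 2024 debt at a 7% rate, against revenue of "over $500 million"); a minimal check, treating $500M as the revenue lower bound:

```python
# Debt-service check using the article's estimates: $12B of debt raised in
# 2024 at ~7%, against 2024 revenue of "over $500 million" (lower bound).

def annual_interest(principal, rate):
    """Simple annual interest, ignoring amortization and fees."""
    return principal * rate

debt = 12_000_000_000
interest = annual_interest(debt, 0.07)
revenue_2024 = 500_000_000  # article's stated lower bound

print(f"Annual interest: ${interest / 1e6:.0f}M")            # $840M
print(f"Interest / revenue: {interest / revenue_2024:.1f}x")  # ~1.7x
```

Even before operating costs, every dollar of 2024 revenue is outweighed by roughly $1.70 of interest obligations under these assumptions, which is why the debt load, not the price war alone, is the existential risk.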

The 'Second-Hand' Threat: A new dynamic is emerging: companies that bought H100s during the 2023 shortage are now reselling compute on secondary markets. Startups like RunPod and JarvisLabs offer H100s at $1.50/hour by using idle capacity. This further depresses prices.

| Market Segment | 2024 Size | 2028 Projected | CAGR | Margin Profile |
|---|---|---|---|---|
| Pure GPU Rental | $4.5B | $12B | 22% | Low (10-20%) |
| Managed AI Services | $3.0B | $18B | 43% | High (40-60%) |
| Custom Chip Rental (TPU, Trainium) | $2.5B | $10B | 32% | Medium (25-35%) |

Data Takeaway: The market is shifting toward managed services, where CoreWeave has zero presence. Without building a software layer, it will be relegated to the slowest-growing, lowest-margin segment.
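One caveat on the growth figures: the quoted CAGRs (22%, 43%, 32%) match the dollar endpoints only over five compounding periods, so a 2023 baseline (or inclusive year counting) appears to be assumed. A quick verification:

```python
# Implied CAGR from the table's endpoints. The quoted rates (22%, 43%, 32%)
# line up with the dollar values only over five compounding periods, so a
# five-year window (e.g. a 2023 base year) is assumed here.

def cagr(start, end, periods):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / periods) - 1

segments = {
    "Pure GPU Rental":     (4.5, 12.0),   # $B, 2024 -> 2028
    "Managed AI Services":  (3.0, 18.0),
    "Custom Chip Rental":   (2.5, 10.0),
}
for name, (start, end) in segments.items():
    print(f"{name}: {cagr(start, end, periods=5):.0%}")
```

Under a strict four-period 2024-to-2028 reading the rates would be noticeably higher (e.g. ~41% for the market as a whole), so the conclusion, that managed services grow roughly twice as fast as pure rental, holds either way.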

Risks, Limitations & Open Questions

Debt Trap: CoreWeave's debt-to-equity ratio is estimated at 8:1 (vs. AWS's 0.5:1). If GPU demand softens even 10%, the company could face a liquidity crisis. The 2024 financials show negative free cash flow of $1.2B.

Nvidia's B200 Transition: The B200 draws roughly 1,000W per chip, versus about 700W for an H100, forcing a move to liquid cooling and new power infrastructure. CoreWeave must raise more debt to upgrade its fleet, even as existing H100s depreciate faster than expected.

Customer Concentration: A single customer—an unnamed AI startup—accounted for 40% of CoreWeave's 2024 revenue. If that customer builds its own compute or switches to a hyperscaler, CoreWeave's revenue could collapse.

The 'AI Winter' Risk: If AI model training demand plateaus (as some researchers predict post-2026), the GPU oversupply will intensify. CoreWeave's long-term contracts with Nvidia obligate it to buy chips regardless of demand.

AINews Verdict & Predictions

CoreWeave is a cautionary tale of the AI infrastructure gold rush. It has mastered the art of raising capital and buying GPUs, but it has failed to build a defensible business. Our editorial judgment is clear:

1. CoreWeave will be acquired within 18 months. The most likely buyer is a hyperscaler (Microsoft or Google) seeking to absorb its GPU inventory and customer contracts. A secondary possibility is a private equity firm that will restructure the debt and run it as a cash-flow business.

2. The 'pure GPU rental' model is dead. By 2027, every major cloud provider will offer GPU compute only as part of a broader AI platform. Standalone GPU rental will be a niche for small, low-cost providers like Lambda Labs.

3. Nvidia will not rescue CoreWeave. Nvidia's interest is in selling chips, not propping up middlemen. If CoreWeave fails, Nvidia will simply sell those GPUs to hyperscalers or new entrants.

4. Watch for the 'software pivot.' CoreWeave's last hope is to acquire or build a managed AI service layer. Acquiring a company like Replicate (model serving) or OctoML (optimization) could add 20 points to gross margins. But time is running out.

The bottom line: CoreWeave proves that in the AI gold rush, selling shovels is not enough—you need to sell the maps, the guides, and the claim-staking services too. Pure compute is a race to the bottom, and CoreWeave is already losing.

