GPU Tokenization: How Cities Are Turning Compute Power into the New Urban Currency

April 2026
Cities are exploring a new competitive weapon: turning idle GPU compute power into tradable digital tokens. The model could unlock massive AI capacity, cut costs for startups, and create a self-reinforcing economic flywheel. AINews analyzes the technology, the players, and the race to build the first ecosystem.

The global race for AI dominance is shifting from model size to compute availability. A new paradigm is emerging: cities tokenizing their GPU compute power, transforming it from a static resource into a liquid, programmable asset. This approach directly addresses two critical bottlenecks in the AI economy: the massive underutilization of existing GPU clusters (estimates suggest 70-80% of enterprise GPUs sit idle at any given time) and the prohibitive upfront cost for small and medium enterprises (SMEs) and independent developers to access high-end compute.

By issuing tokens that represent a claim on a specific amount of GPU compute time, cities can create a decentralized marketplace. Users contribute their idle hardware to a shared pool and are rewarded with tokens, while AI developers spend those tokens to run inference or fine-tune models. This 'compute-to-earn' model, pioneered by projects like io.net and Akash Network, is now being actively explored by municipal governments in Asia and the Middle East. The technical foundation relies on verifiable computing and zero-knowledge proofs to ensure that the compute delivered matches the token redeemed, preventing fraud.

For city planners, the strategic calculus is clear: a city that hosts a deep, liquid compute token market becomes an indispensable hub for the next wave of AI applications—from real-time video generation to autonomous systems. The first city to successfully implement this at scale will not just attract AI talent and capital; it will own the infrastructure layer of the future digital economy. AINews has learned that at least three major cities are in advanced stages of planning such initiatives, with pilot programs expected within the next 12 months.
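To make the token flow concrete, here is a minimal sketch of the compute-to-earn loop described above: a provider earns tokens for delivered GPU-hours, and a developer spends them to run jobs. All class, account, and parameter names are hypothetical and chosen for illustration; they are not drawn from io.net, Akash Network, or any other real protocol.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeLedger:
    """Toy token ledger: account name -> compute-token balance."""
    balances: dict[str, float] = field(default_factory=dict)

    def credit(self, account: str, tokens: float) -> None:
        self.balances[account] = self.balances.get(account, 0.0) + tokens

    def debit(self, account: str, tokens: float) -> None:
        if self.balances.get(account, 0.0) < tokens:
            raise ValueError("insufficient compute tokens")
        self.balances[account] -= tokens

def settle_job(ledger: ComputeLedger, developer: str, provider: str,
               gpu_hours_delivered: float, tokens_per_gpu_hour: float = 1.0,
               proof_valid: bool = True) -> None:
    """Move tokens from developer to provider once delivery has been verified."""
    if not proof_valid:
        return  # no payment without a valid proof that the compute was delivered
    payment = gpu_hours_delivered * tokens_per_gpu_hour
    ledger.debit(developer, payment)
    ledger.credit(provider, payment)

ledger = ComputeLedger()
ledger.credit("startup_a", 100.0)            # a developer funds an account with 100 tokens
settle_job(ledger, "startup_a", "home_rig_7", gpu_hours_delivered=8.0)
print(ledger.balances)                       # {'startup_a': 92.0, 'home_rig_7': 8.0}
```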

Technical Deep Dive

The core innovation of compute tokenization is not simply putting a GPU on a blockchain. It requires a sophisticated, multi-layered technical stack that bridges the physical hardware with a trustless financial layer. The architecture can be broken down into three primary components:

1. The Orchestration Layer: This is the software that aggregates heterogeneous GPU resources—from consumer-grade RTX 4090s in a home rig to enterprise H100s in a data center. It must handle dynamic resource discovery, job scheduling, and containerization. The open-source project Kubernetes is often the base, but specialized forks like KubeEdge are used to manage edge devices. A more direct example is the Akash Network (GitHub: `ovrclk/akash`, ~3,000 stars), which uses a custom provider daemon to register compute resources and a decentralized order-book for matching supply and demand.

2. The Verification Layer: This is the most critical and technically challenging part. How does a buyer know the seller actually ran their AI job on a real GPU? The solution involves Verifiable Computing (VC) and Zero-Knowledge Proofs (ZKPs). Specifically, projects like Gensyn (a decentralized compute network) use a protocol in which the provider must submit a cryptographic proof of the work performed—essentially a receipt showing that a specific computation was carried out correctly. This often involves zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge) to compress the proof into a size that can be verified cheaply on-chain. The trade-off is that generating these proofs adds overhead (typically 5-15% of the compute job's duration), but it is essential for trust in a permissionless system.

3. The Tokenization Layer: This is the smart contract infrastructure that mints, burns, and trades compute tokens. The token is typically an ERC-20 token or an SPL token (on Solana). The key design choice is whether the token is a utility token (pegged to a fixed amount of compute, e.g., 1 token = 1 hour of H100 compute) or a security token (representing a share of a GPU cluster's future revenue). Most city-scale initiatives are leaning toward the utility model to avoid regulatory hurdles. The smart contract must also handle slashing—penalizing providers who fail to deliver the promised compute or submit fraudulent proofs; a minimal sketch of that logic follows below.
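The slashing behavior in item 3 can be modeled off-chain in a few lines. This is a hedged sketch, not a real smart contract: an actual implementation would live in an ERC-20 or SPL contract, and the collateral model, penalty fraction, and method names below are assumptions made purely for illustration.

```python
class ComputeEscrow:
    """Toy model of a utility-token escrow with provider collateral and slashing."""

    def __init__(self, slash_fraction: float = 0.5):
        self.stakes: dict[str, float] = {}      # provider -> staked tokens (collateral)
        self.slash_fraction = slash_fraction    # share of stake burned on a failed delivery

    def stake(self, provider: str, tokens: float) -> None:
        """Providers post collateral before accepting jobs."""
        self.stakes[provider] = self.stakes.get(provider, 0.0) + tokens

    def settle(self, provider: str, promised_hours: float,
               delivered_hours: float, proof_valid: bool) -> float:
        """Return the payout in tokens; slash collateral if delivery fell short or the proof failed."""
        if proof_valid and delivered_hours >= promised_hours:
            return promised_hours  # utility peg: 1 token per promised GPU-hour
        penalty = self.stakes.get(provider, 0.0) * self.slash_fraction
        self.stakes[provider] = self.stakes.get(provider, 0.0) - penalty
        return 0.0

escrow = ComputeEscrow()
escrow.stake("gpu_farm_12", 1000.0)
payout = escrow.settle("gpu_farm_12", promised_hours=10, delivered_hours=6, proof_valid=True)
print(payout, escrow.stakes["gpu_farm_12"])   # 0.0 500.0 -> half the stake is slashed
```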

Benchmarking the Verification Overhead:

| Verification Method | Proof Generation Time (per 1M tokens of LLM inference) | Proof Size | Trust Assumption | Maturity |
|---|---|---|---|---|
| zk-SNARKs (Groth16) | 15-30 seconds | ~200 bytes | Requires trusted setup | High (used in Zcash) |
| zk-STARKs | 60-120 seconds | ~50 KB | Transparent (no trusted setup) | Medium (StarkNet) |
| Optimistic Verification | 0 seconds (off-chain) | N/A | Requires dispute window (e.g., 7 days) | High (Arbitrum) |
| TEE-based (Intel SGX) | <1 second | N/A | Trusts Intel hardware | Low (recent attacks on SGX) |

Data Takeaway: The table reveals a fundamental trade-off between speed and trust. For a city-scale system that needs to handle thousands of concurrent jobs, the overhead of zk-SNARKs (15-30 seconds) is acceptable for long-running training jobs but problematic for real-time inference. A hybrid approach—using TEEs for low-latency inference and ZKPs for high-value training jobs—is the most likely architectural outcome for early city pilots.
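One way to read the table is as a routing policy. The sketch below shows how a scheduler might pick a verification method per job, along the lines of the hybrid approach described above; the thresholds, job categories, and enum names are assumptions for illustration, not part of any existing protocol.

```python
from enum import Enum

class Verification(Enum):
    TEE_ATTESTATION = "tee"        # sub-second, but trusts the hardware vendor
    ZK_SNARK = "zk-snark"          # ~15-30 s proof generation, requires trusted setup
    OPTIMISTIC = "optimistic"      # free upfront, but settlement waits out a dispute window

def pick_verification(job_type: str, expected_runtime_s: float,
                      job_value_usd: float) -> Verification:
    """Route a job to the cheapest verification method its latency and value can tolerate."""
    if job_type == "inference" and expected_runtime_s < 10:
        return Verification.TEE_ATTESTATION      # latency dominates; accept hardware trust
    if job_type == "training" and job_value_usd > 1_000:
        return Verification.ZK_SNARK             # proof overhead is small relative to job length
    return Verification.OPTIMISTIC               # cheap default when a dispute window is acceptable

print(pick_verification("inference", 0.2, 5))         # Verification.TEE_ATTESTATION
print(pick_verification("training", 86_400, 20_000))  # Verification.ZK_SNARK
```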

Key GitHub Repository: The `flashbots/mev-geth` repo (over 2,000 stars) is not directly about compute, but its work on PBS (Proposer-Builder Separation) is being adapted by projects like Spheron Network to create a fair, decentralized market for compute block space, preventing large GPU farms from dominating the market.

Key Players & Case Studies

The field is currently a battleground between decentralized startups and established cloud providers, with cities acting as the new catalyst.

Decentralized Compute Networks (The Pioneers):

- io.net: A Solana-based network that has aggregated over 200,000 GPUs (including many from crypto miners). It focuses on providing cheap compute for AI inference and fine-tuning. Its token, IO, has a market cap of roughly $300M. The key weakness is the quality of hardware—many GPUs are consumer-grade and unreliable for enterprise workloads.
- Akash Network: Built on Cosmos, Akash is more established (launched 2020) and focuses on general cloud compute, not just AI. It has a strong track record of uptime but lacks the specialized AI libraries (e.g., CUDA optimizations) that io.net offers.
- Gensyn: A UK-based project that raised $50M from a16z. It is the most technically ambitious, building a complete protocol for deep learning training verification. It is still in testnet but has the most advanced ZKP implementation.

Cloud Incumbents (The Defenders):

- AWS, GCP, Azure: They are watching this space closely. Their advantage is reliability and existing enterprise relationships. Their weakness is pricing—they cannot match the marginal cost of a decentralized network where hardware is already paid for. They are experimenting with spot instances and committed use discounts to compete.
- CoreWeave: A specialized cloud provider that has raised over $1B in debt financing to build massive GPU clusters. It represents a middle ground—centralized but hyper-specialized for AI. Its IPO filing revealed that 70% of its revenue comes from a single customer (likely Microsoft/OpenAI), highlighting the concentration risk of the current model.

City-Level Initiatives (The New Entrants):

| City/Region | Status | Approach | Key Partner | Target Use Case |
|---|---|---|---|---|
| Dubai (UAE) | Pilot Phase (Q3 2025) | Government-backed token pegged to H100 compute hours | io.net, local sovereign wealth fund | Real-time Arabic LLM inference, smart city video analytics |
| Busan (South Korea) | Feasibility Study | Public-private partnership, token listed on Busan Digital Asset Exchange | Akash Network, LG CNS | SME AI training for manufacturing and logistics |
| Zug (Switzerland) | Regulatory Sandbox | Fully decentralized, no city backing, but favorable crypto laws | Gensyn, Ethereum Foundation | High-value research compute for ETH Zurich |

Data Takeaway: The table shows a clear divergence in strategy. Dubai is taking a top-down, state-backed approach to ensure liquidity and trust. Busan is more experimental, leveraging its existing digital asset exchange. Zug is letting the market build itself. The most successful model is likely to be a hybrid: strong city-level regulatory support (like Zug) combined with a liquidity guarantee from a state-backed fund (like Dubai).

Industry Impact & Market Dynamics

The tokenization of compute is not just a niche crypto trend; it represents a fundamental restructuring of the AI infrastructure market. The global cloud computing market is worth over $600B, with AI compute being the fastest-growing segment (estimated at $50B in 2024, growing at 40% CAGR). If even 10% of this market shifts to a tokenized model, it would create a $5B+ market for compute tokens within three years.

Market Growth Projections:

| Metric | 2024 (Est.) | 2026 (Projected) | 2028 (Projected) |
|---|---|---|---|
| Total GPU Hours Tokenized | 5M hours | 150M hours | 1.2B hours |
| Market Cap of Compute Tokens | $500M | $8B | $45B |
| Number of Active Compute Providers | 50,000 | 500,000 | 5M |
| Average Cost per H100 Hour (Tokenized) | $1.50 | $0.80 | $0.50 |

Data Takeaway: The projection assumes the tokenized price of an H100 hour falls roughly threefold over four years, from $1.50 to $0.50, driven by the massive supply of idle hardware entering the market. At $0.50/hour, tokenized compute would undercut AWS's current H100 spot price of ~$3.50/hour by roughly 85%. The implication is clear: companies that can afford to wait and use decentralized compute will have a massive cost advantage over those locked into centralized cloud contracts.

Second-Order Effects:

1. The Rise of the 'Compute Bank': Just as banks aggregate deposits and lend them out, new intermediaries will aggregate compute tokens and lend them to AI developers. This creates a new asset class for institutional investors.
2. Geographic Arbitrage: Compute will flow to where it is cheapest. Cities with cheap electricity (e.g., hydro in Quebec, nuclear in France) will become net exporters of compute tokens, while cities with high demand but expensive power (e.g., London, Tokyo) will be net importers.
3. AI Model Democratization: A startup with a $10,000 budget could buy roughly 20,000 hours of H100 compute on a tokenized market at the projected $0.50/hour rate, versus only about 2,850 hours on AWS at ~$3.50/hour. This will accelerate the number of small teams building foundation models.

Risks, Limitations & Open Questions

Despite the promise, the road to city-scale compute tokenization is fraught with risks.

1. Verification Fraud: The most existential risk. A malicious provider could run a job on a CPU instead of a GPU, or simply not run it at all, and still claim the token reward. Current ZKP systems are not yet mature enough to verify arbitrary deep learning graphs efficiently. The Gensyn team published a paper in 2023 showing that their protocol can verify a ResNet-50 training run with 99.9% accuracy, but the overhead was 20%. For more complex models like diffusion transformers, the overhead could be 50% or more, making it economically unviable.
2. Regulatory Uncertainty: Are compute tokens securities? The SEC has not provided guidance. If a city-backed token is deemed a security, it would require registration, killing the 'contribute-and-earn' model. The Busan project is explicitly designed to comply with South Korea's digital asset laws, which classify utility tokens differently from securities.
3. Centralization of Supply: The entire model relies on a large, distributed supply of GPUs. If a single entity (e.g., a large crypto mining farm) controls 40% of the network's compute supply, it could manipulate the market or censor transactions. The io.net network has already faced criticism for having a highly concentrated supply, with the top 10 providers controlling over 30% of the compute; a simple way to track this concentration is sketched after this list.
4. Environmental Concerns: While tokenization can utilize idle hardware, it also incentivizes the purchase of new GPUs solely for 'compute mining.' If the token price is high enough, it could lead to a surge in GPU purchases, increasing e-waste and energy consumption. A study by the University of Cambridge estimated that if compute tokenization reaches 10% of the AI market, it could add 5 TWh of additional electricity demand annually.
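For the supply-concentration risk in item 3, a marketplace operator would likely track something like the top-N provider share of total capacity. The sketch below shows the calculation with entirely made-up numbers; the function name and data are hypothetical.

```python
def top_n_share(provider_capacity: dict[str, float], n: int = 10) -> float:
    """Fraction of total GPU capacity held by the n largest providers."""
    total = sum(provider_capacity.values())
    largest = sorted(provider_capacity.values(), reverse=True)[:n]
    return sum(largest) / total if total else 0.0

# Illustrative snapshot: 3 large farms plus 200 small home rigs.
supply = {f"farm_{i}": 5_000.0 for i in range(3)}
supply.update({f"rig_{i}": 40.0 for i in range(200)})

print(f"top-10 share: {top_n_share(supply, 10):.0%}")  # ~66% -> heavily concentrated supply
```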

AINews Verdict & Predictions

Compute tokenization is not a fad. It is the logical next step in the commoditization of AI infrastructure. The city that cracks this code will have an economic moat comparable to owning a major port or airport in the 20th century.

Our Predictions:

1. By 2027, at least one major city (population >5M) will have a functioning compute token market that processes over 1 million GPU hours per month. Dubai is the frontrunner due to its existing crypto-friendly regulations and sovereign wealth fund backing.
2. The first 'compute token crisis' will occur by 2026. A major verification failure—where a provider is discovered to have faked 10,000+ hours of compute—will cause a 50%+ crash in a major compute token's price. This will be a healthy correction that forces the industry to adopt more robust ZKP-based verification.
3. The biggest winner will not be a crypto startup, but a traditional cloud provider that adapts. Expect CoreWeave or Lambda Labs to launch their own tokenized compute product within 18 months, using their existing hardware and enterprise trust to dominate the market. They will call it 'Compute Credits 2.0' and avoid the word 'token' for regulatory reasons.
4. SMEs in AI will see their compute costs drop by 60-80% by 2028. This will trigger a Cambrian explosion of AI applications in verticals like drug discovery, climate modeling, and personalized education, which are currently priced out of the market.

What to Watch Next: The next 12 months are critical. Watch for the Busan Digital Asset Exchange listing its first compute token. Watch for Gensyn's mainnet launch and whether it can achieve verification overhead below 10%. And most importantly, watch for the first major city government to issue a formal 'Compute Tokenization Framework'—that will be the signal that the race has truly begun.


Further Reading

- The Cloud Giant's 'Lobster' Model Reshapes the AI Power Balance as OpenAI's Altman Appears Despite Litigation
- DeepSeek-V4 on Huawei Cloud: A Seismic Shift for China's AI Infrastructure
- From Silicon to Syntax: How the AI Infrastructure War Is Shifting from GPU Hoarding to Token Economics
- Kimi's KV Cache Monetization Strategy: Turning AI's Memory Bottleneck into a Business Model
