Data Over Architecture: Why Memory Is AI's True Bottleneck

April 2026
Princeton researcher Liu Zhuang has sparked a storm by declaring that model architecture innovation matters little compared with data quality and memory mechanisms. He argues that current agent frameworks are merely stopgaps for a fundamental memory deficiency, challenging the industry's conventional wisdom.

In a widely circulated interview, Princeton researcher Liu Zhuang delivered a blunt critique of the AI industry's priorities. He argues that the community has been chasing architectural innovations—Transformer variants, mixture-of-experts, state-space models—while neglecting the true bottleneck: data quality and the model's ability to remember. Liu contends that high-quality, structured, causally rich training data determines model performance far more than clever attention mechanisms. More provocatively, he dismisses the current agent frenzy as a superficial fix for memory failures. Retrieval-augmented generation (RAG) and external tool use, he says, are like pumping water out of a leaking boat instead of patching the hull. The industry, in his view, needs to pivot from architecture competitions to fundamental breakthroughs in data engineering, persistent memory modules, and lifelong learning.

This perspective has resonated deeply because it names an uncomfortable truth: after two years of relentless architectural churn, progress on core capabilities like long-context understanding, knowledge updating, and cross-session memory has been painfully slow. The debate is forcing companies to reconsider where to allocate their R&D dollars—whether to keep scaling models or to invest in the less glamorous but more impactful work of data curation and memory systems.

Technical Deep Dive

Liu Zhuang's argument rests on a technical observation that many practitioners privately acknowledge but rarely voice publicly: the diminishing returns of architectural innovation. Over the past two years, we have seen dozens of proposed alternatives to the standard Transformer—Mamba (state-space models), Mixture-of-Experts (MoE) variants like Mixtral 8x7B, RWKV (linear attention), and various long-context extensions. Yet the empirical gains on core benchmarks have been marginal.

| Architecture | Max Context Length | MMLU Score | LongBench Score | Training Cost (relative to GPT-3) |
|---|---|---|---|---|
| GPT-4 (Transformer) | 128K | 86.4 | 42.3 | 1.0x (baseline) |
| Claude 3.5 Sonnet | 200K | 88.3 | 44.1 | ~0.8x |
| Mixtral 8x7B (MoE) | 32K | 70.6 | 33.5 | ~0.3x |
| Mamba-2.8B (SSM) | 256K | 35.4 | 18.9 | ~0.05x |
| RWKV-14B | 32K | 49.2 | 22.7 | ~0.1x |

Data Takeaway: While SSMs and linear attention models reduce computational cost, they consistently underperform on knowledge-intensive tasks. The best results still come from scaled Transformers with massive, high-quality training data. Architecture alone does not bridge the gap.

Liu's core technical claim is that the model's memory—its ability to store, retrieve, and update factual knowledge—is fundamentally a data problem, not an architecture problem. The Transformer's attention mechanism is remarkably efficient at retrieving information that is well-represented in the training distribution. The failure modes occur when the training data is noisy, sparsely covers certain domains, or contains conflicting information. He points to the "lost in the middle" phenomenon in long-context models: even with 128K-token windows, models often fail to recall information buried in the middle of the context, favoring the beginning and the end. This is not an architecture limitation per se, but a data distribution issue—the training data rarely requires the model to attend to mid-context information, so it never learns to do so effectively.
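The mid-context claim can be probed with a simple needle-in-a-haystack test. The sketch below is illustrative only: `make_haystack` and the commented-out `llm` call are hypothetical, and real evaluations such as the "lost in the middle" studies use curated QA pairs rather than synthetic filler.

```python
def make_haystack(n_facts: int, needle: str, needle_pos: float) -> str:
    """Build a synthetic context with one 'needle' fact inserted at a
    relative depth (0.0 = start, 1.0 = end)."""
    filler = [f"Fact {i}: the code for item {i} is {1000 + i}."
              for i in range(n_facts)]
    idx = int(needle_pos * n_facts)
    filler.insert(idx, needle)
    return "\n".join(filler)

# Probe recall at several depths. In "lost in the middle" style studies,
# accuracy dips when the needle sits near depth 0.5.
needle = "Fact X: the secret passphrase is 'aurora'."
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    context = make_haystack(200, needle, depth)
    prompt = context + "\n\nQuestion: what is the secret passphrase?"
    # answer = llm(prompt)  # hypothetical model call; score by exact match
```

Sweeping `depth` while holding everything else fixed isolates position as the only variable, which is exactly the evidence needed to separate a data-distribution effect from an attention-capacity effect.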

A relevant open-source project is the MemGPT (now Letta) repository on GitHub, which has garnered over 12,000 stars. MemGPT introduces a hierarchical memory system that allows LLMs to manage their own memory, deciding what to store in short-term vs. long-term storage. This is precisely the kind of memory-first approach Liu advocates. Another important repo is Memoripy (3,200 stars), which implements persistent memory layers for conversational agents. These projects are still experimental, but they represent a shift in thinking from "better architecture" to "better memory management."
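To make the hierarchy concrete, here is a minimal two-tier memory in the spirit of MemGPT. This is an illustrative sketch, not the actual Letta API: the class name and methods are invented, and the keyword search stands in for real vector retrieval.

```python
from collections import deque

class HierarchicalMemory:
    """Two-tier memory sketch: a bounded short-term buffer that evicts
    its oldest entries into a searchable long-term store (MemGPT-style,
    but hypothetical names and a toy search)."""

    def __init__(self, short_term_size: int = 4):
        self.short_term = deque(maxlen=short_term_size)
        self.long_term = []

    def remember(self, item: str) -> None:
        if len(self.short_term) == self.short_term.maxlen:
            # Eviction: the oldest short-term entry is archived, not lost.
            self.long_term.append(self.short_term[0])
        self.short_term.append(item)

    def recall(self, query: str) -> list:
        # Naive keyword match stands in for vector similarity search.
        hits = [m for m in self.long_term if query.lower() in m.lower()]
        return list(self.short_term) + hits
```

The key design point is that eviction is a policy decision, not data loss; MemGPT's contribution is letting the LLM itself decide what to archive and when to page it back in.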

Key Players & Case Studies

The debate has immediate implications for how companies are positioning their products. The clearest case study is the divergence between OpenAI and Anthropic. OpenAI has invested heavily in agent frameworks—Code Interpreter, GPTs with actions, and the upcoming Operator—essentially building external scaffolding to compensate for model memory limits. Anthropic, by contrast, has focused on long-context windows (200K tokens in Claude 3.5) and constitutional AI, implicitly betting that better data filtering and longer context will solve memory issues natively.

| Company | Approach | Key Product | Memory Strategy | Recent Funding/Revenue |
|---|---|---|---|---|
| OpenAI | Agent + RAG | GPT-4 Turbo, Code Interpreter | External tool use, vector DB retrieval | $13B revenue (2024 est.) |
| Anthropic | Long-context + data quality | Claude 3.5 Sonnet | 200K context window, constitutional training | $8.5B raised |
| Google DeepMind | Hybrid | Gemini 1.5 Pro | 1M token context + MoE architecture | $2B revenue (est.) |
| Cohere | Data-centric | Command R+ | RAG-native design, data flywheel | $500M raised |
| Mistral | MoE + open-weight | Mixtral 8x22B | Sparse activation for efficiency | $600M raised |

Data Takeaway: Companies pursuing long-context windows (Anthropic, Google) are implicitly validating Liu's data-centric thesis—they believe that giving the model more relevant data at inference time is more effective than architectural tricks. Meanwhile, agent-first companies (OpenAI) are betting that external memory is a necessary crutch.

A notable counterpoint is Google's Infini-Attention paper, which proposes a compressed memory mechanism within the attention layer itself. This is a genuine architectural innovation aimed at memory, but it has not yet been deployed in production. Liu would argue that such architectural solutions are premature—that the industry should first fix the data pipeline before adding complexity.
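The paper's compressive memory can be sketched in a few lines of linear-attention math. The version below is simplified: it omits Infini-Attention's delta-rule update and the learned gate that mixes local attention with memory reads, and the shapes are assumptions for illustration.

```python
import numpy as np

def elu1(x):
    # ELU(x) + 1, the positive nonlinearity used for linear-attention memories.
    return np.where(x > 0, x + 1.0, np.exp(x))

def memory_update(M, z, K, V):
    """Fold one segment's keys/values into a fixed-size memory.
    Simplified from Infini-Attention (no delta rule):
    M <- M + sigma(K)^T V,  z <- z + sum_t sigma(k_t)."""
    sK = elu1(K)                       # (seq, d_k)
    return M + sK.T @ V, z + sK.sum(axis=0)

def memory_retrieve(M, z, Q):
    """Read from the compressed memory: A = sigma(Q) M / (sigma(Q) z)."""
    sQ = elu1(Q)                       # (seq, d_k)
    return (sQ @ M) / (sQ @ z)[:, None]
```

The appeal is that `M` has fixed size `(d_k, d_v)` regardless of how many segments are folded in, so memory cost is constant in sequence length; the open question, which Liu's critique highlights, is how much is lost in that compression.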

Industry Impact & Market Dynamics

Liu's critique arrives at a critical juncture. The AI industry is projected to spend over $100 billion on compute in 2024, with a significant portion going toward training ever-larger models. If Liu is correct, much of this spending is misallocated. The real bottleneck is not compute but data quality and memory persistence.

| Year | Global AI Training Spend ($B) | Data Engineering Spend ($B) | Memory/Storage Innovation Spend ($B) |
|---|---|---|---|
| 2022 | 45 | 8 | 2 |
| 2023 | 72 | 12 | 3 |
| 2024 (est.) | 105 | 18 | 5 |
| 2025 (projected) | 140 | 30 | 12 |

Data Takeaway: Data engineering spend is growing, but it still lags far behind training compute. Liu's argument suggests this ratio should shift dramatically—perhaps to 1:1 within three years. Companies that invest early in data infrastructure and memory systems could gain a structural advantage.

The shift would have major implications for the startup ecosystem. Companies like Weaviate, Pinecone, and Chroma (vector databases) are already positioned to benefit from a memory-first approach. But Liu's critique goes deeper: he argues that current RAG systems are too shallow. They retrieve chunks of text but do not integrate knowledge into the model's internal representations. The next wave of startups may focus on "memory-as-a-service"—persistent, updatable knowledge stores that models can query natively, rather than through clunky retrieval pipelines.
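The "shallow retrieval" point is easy to see in a toy RAG pipeline: ranked chunks are pasted into the prompt, and nothing is written back into the model's weights. The bag-of-words embedding below is a deliberate stand-in for a real dense-vector encoder.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use dense vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list, k: int = 2) -> list:
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "The Transformer uses self-attention over tokens.",
    "Mamba is a state-space model with a linear scan.",
    "Attention cost grows quadratically with context length.",
]
top = retrieve("why is attention expensive for long context", chunks)
# The chunks would now be concatenated into the prompt; the model's
# internal representations are untouched, which is Liu's objection.
```

Liu's "memory-as-a-service" alternative would instead persist retrieved knowledge in a store the model queries natively, so facts survive across sessions rather than living only in a single prompt.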

Risks, Limitations & Open Questions

Liu's thesis is compelling but not without risks. The most obvious counterargument is that architectural innovation has enabled the scaling laws that made modern LLMs possible. Without the Transformer, we would not have GPT-4. Dismissing architecture entirely is ahistorical.

Second, data quality is notoriously difficult to define and measure. What constitutes "high-quality, causally rich data"? Liu's prescription risks becoming a tautology—the best data is whatever makes the model perform best, which is circular. Without concrete metrics, companies may waste resources on data curation that yields no improvement.

Third, memory mechanisms have their own scaling challenges. Persistent memory requires storage, retrieval latency, and consistency guarantees. If every user interaction requires querying a massive memory store, inference costs could skyrocket. The trade-off between memory depth and inference speed is real.
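The trade-off can be sketched with a back-of-envelope cost model; every number below is illustrative, not a measured price.

```python
def inference_cost_per_query(base_tokens: int, memory_chunks: int,
                             tokens_per_chunk: int,
                             cost_per_1k_tokens: float,
                             retrieval_fee: float) -> float:
    """Toy cost model: each retrieved memory chunk is prepended to the
    prompt, so memory depth translates directly into prompt-token cost
    plus a fixed per-query retrieval fee. All parameters are assumed."""
    prompt_tokens = base_tokens + memory_chunks * tokens_per_chunk
    return prompt_tokens / 1000 * cost_per_1k_tokens + retrieval_fee

# Shallow memory (2 chunks) vs. deep memory (50 chunks) per query.
shallow = inference_cost_per_query(500, 2, 300, 0.01, 0.0001)
deep = inference_cost_per_query(500, 50, 300, 0.01, 0.0001)
```

Under these assumed numbers the deep-memory query costs roughly an order of magnitude more than the shallow one, which is the core of the depth-versus-speed tension: unless memory is compressed or cached, depth is paid for on every single query.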

Ethically, a model with perfect memory raises privacy concerns. If a model remembers every interaction, it could leak sensitive information or be used for surveillance. Liu's vision of lifelong learning must grapple with data retention policies, user consent, and the right to be forgotten.

Finally, there is the question of whether memory alone is sufficient for reasoning. Liu seems to imply that if the model remembers enough, reasoning will emerge naturally. But the experience with retrieval-augmented models shows that simply providing more facts does not guarantee correct reasoning—the model must also learn how to combine facts logically. This may require architectural innovations after all.

AINews Verdict & Predictions

Liu Zhuang has done the industry a service by forcing a conversation about priorities. His core insight—that data quality and memory are the real bottlenecks—is correct in spirit, even if overstated for effect. The AI community has been guilty of fetishizing architecture while neglecting the mundane but essential work of data curation.

Our predictions:

1. Within 12 months, at least two major foundation model companies will announce "memory-first" architectures that decouple knowledge storage from inference, inspired by projects like MemGPT. These will not replace Transformers but augment them with persistent memory layers.

2. Data engineering budgets will grow 3x faster than compute budgets over the next two years. Companies will realize that a 10% improvement in data quality yields more performance gain than a 10% increase in model parameters.

3. The agent hype will cool as practitioners internalize Liu's critique. We will see a shift from "build more agents" to "build better memory." The most successful agent frameworks will be those that integrate persistent memory natively, not those that pile on more tools.

4. A new category of "memory infrastructure" startups will emerge, offering persistent, updatable, and privacy-preserving memory stores for LLMs. These will be as essential as vector databases are today.

5. The architecture race will not end, but it will become more targeted. Instead of general-purpose architecture improvements, we will see specialized architectures for memory compression, retrieval, and update—essentially, hardware-software co-design for memory.

The industry's response to Liu's critique will be a litmus test. Those who dismiss it as contrarian posturing will miss the signal. Those who take it seriously—and invest accordingly—will build the next generation of truly capable AI systems.
