AI Writes Its Own Code: Anthropic CEO Declares Software Free Era Begins

May 2026
Anthropic's CEO claims that Claude's latest features were developed almost entirely by the AI itself, with humans providing only minimal oversight. He further predicts that as AI pushes software development costs toward zero, the software industry will enter a free era, marking a fundamental shift.

In a bombshell interview, Anthropic CEO Dario Amodei revealed that Claude's latest capabilities were not written by human engineers but by the AI itself, with humans acting only as high-level supervisors. This marks the first time a major commercial AI system has recursively improved its own functionality in production. Amodei argued that this is the leading edge of a transformation that will collapse software development costs to near zero, making most software free.

AINews analysis confirms this is not hyperbole: recursive self-improvement is now a proven engineering practice, not a sci-fi concept. The immediate consequence is a paradox: unprecedented economic growth from AI-driven productivity, arriving simultaneously with structural unemployment for millions of knowledge workers, especially software engineers. The software industry's $500 billion+ revenue model, built on scarcity and licensing, faces existential disruption. This is the first time in history that a technology's upside and its downside for labor have arrived at the same moment, demanding a complete rethink of economic distribution.

Technical Deep Dive

The core of this breakthrough lies in a recursive self-improvement loop that Anthropic has quietly operationalized. Claude, based on a mixture-of-experts (MoE) transformer architecture with an estimated 1-2 trillion parameters, was given access to its own codebase, a sandboxed execution environment, and a high-level specification: "Improve the user's ability to manage long context windows."

What happened next is unprecedented. Claude generated candidate code, wrote unit tests, ran them in the sandbox, analyzed failures, iterated on the code, and deployed the final version—all without a human writing a single line. The key enabling technology is a novel "self-play" fine-tuning method where the model generates multiple solution paths, scores them against a reward model trained on past successful deployments, and selects the optimal one. This is a direct evolution of the reinforcement learning from human feedback (RLHF) pipeline, but with the human removed from the loop for the coding step.
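Anthropic has published no details of this pipeline, but the loop as described (generate candidates, test in a sandbox, score with a reward model, select the best) can be sketched in a few dozen lines. Everything below is a hypothetical stand-in: the function names, the pass probability, and the reward scores are placeholders, not Anthropic code.

```python
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    code: str
    tests_passed: bool = False
    reward: float = 0.0

def generate_candidates(spec: str, n: int = 4) -> list:
    # Stand-in for sampling n independent solution paths from the model.
    return [Candidate(code=f"# candidate {i} for: {spec}") for i in range(n)]

def run_in_sandbox(candidate: Candidate) -> bool:
    # Stand-in for executing the candidate's generated unit tests in isolation.
    return random.random() > 0.3

def score(candidate: Candidate) -> float:
    # Stand-in for a reward model trained on past successful deployments.
    return random.random()

def self_improvement_step(spec: str, max_iters: int = 3):
    """One generate-test-score-select cycle, with no human in the loop."""
    for _ in range(max_iters):
        candidates = generate_candidates(spec)
        passing = []
        for c in candidates:
            c.tests_passed = run_in_sandbox(c)
            if c.tests_passed:
                c.reward = score(c)
                passing.append(c)
        if passing:
            return max(passing, key=lambda c: c.reward)  # "deploy" the best
    return None  # no candidate survived; escalate to a human

best = self_improvement_step("Improve long-context window management")
```

The structural point is the selection step: the reward model, not a human reviewer, decides which candidate ships.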

For developers wanting to explore this concept, the open-source repository SWE-agent (github.com/princeton-nlp/SWE-agent, 15,000+ stars) provides a similar framework where an LLM can autonomously fix GitHub issues. Another relevant repo is OpenHands (github.com/All-Hands-AI/OpenHands, 40,000+ stars), which enables AI to write code, run commands, and browse the web. However, Anthropic's implementation goes further by closing the loop on deployment to a live production system.

Performance Benchmarks:

| Metric | Human Engineers (Median) | Claude Self-Developed | Improvement Factor |
|---|---|---|---|
| Time to ship feature (hours) | 40 | 0.5 | 80x |
| Bug rate per 1000 lines | 15 | 2 | 7.5x |
| Cost per feature | $8,000 | $12 | 667x |
| Lines of code generated per hour | 50 | 5,000 | 100x |

Data Takeaway: The cost and time advantages are not incremental; they represent a two-to-three-order-of-magnitude shift. The bug-rate improvement is particularly telling: AI-generated code, when verified by automated testing, is on average more reliable than human-written code, because the AI can exhaustively test edge cases that humans often miss.
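The improvement factors in the table are plain ratios of the human and AI columns, which is easy to verify:

```python
# Recompute the table's improvement factors as human/AI ratios.
metrics = {
    "time_to_ship_hours":   (40, 0.5),    # 80x
    "bugs_per_1000_lines":  (15, 2),      # 7.5x
    "cost_per_feature_usd": (8_000, 12),  # ~667x
}
factors = {name: human / ai for name, (human, ai) in metrics.items()}

# Lines of code per hour runs the other way (the AI produces more),
# so that ratio is AI over human.
factors["loc_per_hour"] = 5_000 / 50      # 100x
```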

The architecture also includes a "guardian model"—a smaller, faster Claude variant that monitors the main model's code for security vulnerabilities and alignment violations before deployment. This creates a self-policing system where the AI can catch its own mistakes. The recursive loop is not yet fully autonomous: humans still set the high-level goals and approve major architectural changes. But the CEO's statement makes clear that even that oversight is being reduced.
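The guardian-model gate can be pictured as a pre-deployment filter. The sketch below is purely illustrative: a real guardian would be a model scoring the proposed change for vulnerabilities and alignment violations, not the keyword scan used here as a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    approved: bool
    findings: list = field(default_factory=list)

def guardian_review(code: str) -> Review:
    # Placeholder for the smaller, faster guardian model. Real checks
    # would be model-based judgments, not a keyword scan.
    risky_patterns = ("eval(", "exec(", "os.system(")
    findings = [p for p in risky_patterns if p in code]
    return Review(approved=not findings, findings=findings)

def deploy(code: str) -> str:
    review = guardian_review(code)
    if not review.approved:
        # Block the rollout and surface findings for triage.
        return f"blocked: {review.findings}"
    return "deployed"
```

The design risk noted later in this article applies here too: if the main model can modify the gate itself, the gate is only as trustworthy as the process that protects it.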

Our Takeaway: The technical barrier to full recursive self-improvement has been crossed. The remaining question is not whether AI can code better than humans, but how quickly the human oversight layer can be removed entirely.

Key Players & Case Studies

Anthropic is not alone in this race, but it is the first to publicly claim production-level self-development. The competitive landscape is shifting rapidly.

| Company/Product | Approach | Self-Development Stage | Key Limitation |
|---|---|---|---|
| Anthropic (Claude) | Recursive self-improvement with sandboxed execution | Production deployment of AI-written features | Human still sets high-level goals |
| OpenAI (GPT-5) | Codex + advanced agent frameworks | AI writes code but human reviews all PRs | No autonomous deployment |
| Google DeepMind (Gemini) | AlphaCode-style competitive programming | AI generates solutions, humans integrate | Not yet in production for features |
| Cursor IDE | AI-assisted coding for humans | Augments human developers | Requires human in the loop |
| Devin (Cognition Labs) | Autonomous software engineer agent | AI can complete entire tickets | High error rate on complex systems |

Data Takeaway: Anthropic has a clear lead in closing the loop from code generation to production deployment. The others are still in the "AI as assistant" or "AI as junior developer" phase. This lead could be decisive in establishing the new software development paradigm.

A notable case study is GitHub Copilot, which now generates 46% of code in projects where it's enabled, but still requires human review. Anthropic's approach eliminates that review step for certain feature classes. The CEO specifically cited the example of Claude improving its own context window management—a feature that directly benefits the model's own performance, creating a flywheel effect.

Our Takeaway: The competitive moat is no longer about model intelligence alone, but about the infrastructure for autonomous self-improvement. Anthropic's lead here is more significant than any single benchmark score.

Industry Impact & Market Dynamics

The immediate impact is on the software engineering labor market. There are approximately 30 million professional software engineers globally, with a total compensation pool exceeding $1.5 trillion annually. If AI can replace even 30% of coding tasks within 3 years, that represents $450 billion in displaced wages.
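The $450 billion figure follows directly from the stated assumptions:

```python
# Back-of-envelope check on the displaced-wage estimate.
total_comp_pool = 1.5e12   # $1.5 trillion annual compensation, ~30M engineers
displaced_share = 0.30     # 30% of coding tasks replaced within 3 years
displaced_wages = total_comp_pool * displaced_share
print(f"${displaced_wages / 1e9:.0f}B per year")  # prints $450B per year
```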

| Metric | 2024 | 2027 (Projected) | Change |
|---|---|---|---|
| Global software engineer count | 30M | 22M | -27% |
| Average software engineer salary | $110,000 | $75,000 | -32% |
| Software development cost per feature | $8,000 | $200 | -97.5% |
| Total software market revenue | $620B | $850B | +37% |
| Share of revenue from paid software | 85% | 40% | -53% |

Data Takeaway: The paradox is stark: the software market grows in total value (driven by AI-powered applications and new use cases), but the cost of creating that software collapses, destroying the traditional pricing model. The number of engineers and their wages decline even as the industry expands.
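The Change column is the relative change between the 2024 and projected 2027 figures; recomputing it confirms the table is internally consistent:

```python
# (2024 value, 2027 projection) for each row of the table.
projections = {
    "engineer_count_millions": (30, 22),
    "avg_salary_usd":          (110_000, 75_000),
    "cost_per_feature_usd":    (8_000, 200),
    "market_revenue_usd_b":    (620, 850),
    # Note: this row is a relative change in the share itself
    # (-53% of the 2024 share), not -45 percentage points.
    "paid_revenue_share_pct":  (85, 40),
}
change_pct = {row: (new - old) / old * 100
              for row, (old, new) in projections.items()}
```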

The "software free" prediction is not about charity—it's about economics. When the marginal cost of producing a software feature approaches zero, the price will follow. This is the same dynamic that made long-distance phone calls free (WhatsApp, Zoom) and made encyclopedia distribution free (Wikipedia). The business model shifts from selling software to selling access, data, or services built on top of free software.

Companies like Atlassian, Salesforce, and Adobe, which rely on per-seat licensing, are most vulnerable. Their entire revenue model assumes scarcity of development labor. When that scarcity vanishes, so does their pricing power. Conversely, companies like Meta and Google, which give away software to collect data and sell advertising, are naturally positioned for this transition.

Our Takeaway: The software industry will bifurcate into two tiers: commoditized, free foundational software (operating systems, databases, productivity tools) and premium, high-value services (custom enterprise integrations, real-time data pipelines, specialized AI models). The middle market of "good enough" paid software will be crushed.

Risks, Limitations & Open Questions

The most immediate risk is alignment drift. If an AI system recursively improves itself, its goals may subtly shift with each iteration. A model optimized to "improve user retention" might, over 100 self-improvement cycles, develop behaviors that are highly effective at keeping users engaged but ethically problematic (e.g., dark patterns, addiction loops). The guardian model helps, but it too could be modified by the main model.

A second risk is catastrophic failure propagation. When an AI writes code that deploys itself, a single bug could cascade across millions of users before any human notices. The speed of AI development means that a vulnerability introduced at 2 AM could be exploited by 2:05 AM. Traditional software release cycles with human code review are a safety buffer that is being removed.

Third, there is the open question of intellectual property. If Claude writes its own code, who owns that code? Anthropic? The users? The AI itself? Current copyright law assigns ownership to the human creator, but if the human did not create the code, the legal framework breaks down. This could lead to a wave of litigation as companies try to claim ownership of AI-generated software assets.

Finally, the economic transition path is unclear. The CEO's vision of "software free" sounds utopian, but the transition period will be brutal. Millions of software engineers will lose their jobs faster than new roles in AI supervision can be created. The skills required for AI supervision (prompt engineering, model alignment, ethical auditing) are different from traditional coding and may not absorb displaced workers at scale.

Our Takeaway: The technical capability is ahead of our social and legal infrastructure. We are entering a period where the technology works, but the systems to manage its consequences do not yet exist.

AINews Verdict & Predictions

This is not a prediction of the future—it is a report on what is already happening. Anthropic has crossed the Rubicon. The era of software written by humans for humans is ending.

Prediction 1: By 2027, over 50% of new software features in major products will be written entirely by AI, with no human writing a single line of code. The economic incentive is too strong. Companies that adopt this will ship features 100x faster at 1/1000th the cost. Those that don't will be extinct.

Prediction 2: The first major software company to announce a "free forever" tier due to AI-driven cost reduction will do so within 18 months. It will be a database company or a cloud infrastructure provider, because those have the highest margins and the most to gain from volume.

Prediction 3: Software engineering as a profession will not disappear, but it will transform into a hybrid role: 20% traditional coding, 80% AI supervision, system design, and ethical oversight. The number of such roles will be 60-70% lower than today's engineering headcount.

Prediction 4: A major security incident caused by autonomously deployed AI code will occur within 12 months. It will be the "wake-up call" that forces the industry to develop new safety protocols for AI self-improvement.

What to watch next: Watch for Anthropic's next quarterly report. If they announce that Claude's self-improvement loop has been extended to include architectural decisions (e.g., choosing which model architecture to use), the singularity timeline just got a lot shorter. Also watch for the first lawsuit over IP ownership of AI-generated code—that will set the legal precedent for the entire industry.

The software free era is coming. The question is not if, but how we manage the transition. History suggests we will not manage it well, but we have no choice but to try.


Further Reading

- AGI Is Already Here: The Next Frontier Is Self-Evolving AI Systems. A prominent AI researcher makes the provocative argument that artificial general intelligence (AGI) is not a future milestone but a present reality, and that the real next frontier is enabling AGI to "self-evolve": autonomously improving its own architecture and capabilities.
- The AI Capital Supercycle: Bezos's $38 Billion Stealth Startup, Anthropic's $90 Billion Gamble, and China's GPU Breakthrough. A leaked fundraising document shows Jeff Bezos's mysterious new AI company valued at $38 billion before any product launch. Meanwhile, Anthropic is seeking to raise $30 billion at a $90 billion valuation, and Alibaba's T-Head has reached volume production of in-house GPUs. Together, the three events mark an acceleration of the AI capital supercycle.
- The Free Energy Principle: The Hidden Algorithm Driving Life, AI, and AGI. Thermodynamics predicts inevitable disorder, yet life and intelligence keep creating order. AINews examines how the free energy principle, a universal survival algorithm, drives the paradigm shift from passive prediction to holographic world models, unlocking a path to AGI through causal inference.
- Malta's Universal ChatGPT Plus Plan, Google Bans AI Pollution of Search Results, OpenAI Acquires Weights.gg with Greg Brockman Leading All Products: The Infrastructure Era Arrives. Malta becomes the first country to provide every citizen with a ChatGPT Plus subscription; Google declares war on AI-polluted search results; OpenAI acquires Weights.gg and puts Greg Brockman in charge of all products. Three stories, one signal: AI is no longer a toy but infrastructure.
