AI Writes Its Own Code: Anthropic CEO Declares the Software Free Era Begins

May 2026
Anthropic's CEO announced that Claude's latest features were developed almost entirely by the AI itself, with humans providing only minimal oversight. He further predicted that as AI drives the cost of software development toward zero, the software industry will enter a free era, marking a fundamental shift.

In a bombshell interview, Anthropic CEO Dario Amodei revealed that Claude's latest capabilities were not written by human engineers but by the AI itself, with humans acting only as high-level supervisors. This marks the first time a major commercial AI system has recursively improved its own functionality in production. Amodei argued that this is the leading edge of a transformation that will collapse software development costs to near zero, making most software free. AINews analysis concludes this is not hyperbole: the technical architecture of recursive self-improvement is now a working engineering practice, not a sci-fi concept. The immediate consequence is a paradox: unprecedented economic growth from AI-driven productivity, simultaneous with structural unemployment for millions of knowledge workers, especially software engineers. The software industry's $500 billion-plus revenue model, built on scarcity and licensing, faces existential disruption. This is the first time in history that a technology's upside and its downside for labor have arrived at the same moment, demanding a complete rethink of economic distribution.

Technical Deep Dive

The core of this breakthrough lies in a recursive self-improvement loop that Anthropic has quietly operationalized. Claude, based on a mixture-of-experts (MoE) transformer architecture with an estimated 1-2 trillion parameters, was given access to its own codebase, a sandboxed execution environment, and a high-level specification: "Improve the user's ability to manage long context windows."

What happened next is unprecedented. Claude generated candidate code, wrote unit tests, ran them in the sandbox, analyzed failures, iterated on the code, and deployed the final version—all without a human writing a single line. The key enabling technology is a novel "self-play" fine-tuning method where the model generates multiple solution paths, scores them against a reward model trained on past successful deployments, and selects the optimal one. This is a direct evolution of the reinforcement learning from human feedback (RLHF) pipeline, but with the human removed from the loop for the coding step.
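The generate-test-score loop described above can be sketched in a few lines. Everything here is a stand-in: the function names, the random pass/reward scores, and the threshold are illustrative assumptions, since the article does not disclose Anthropic's actual implementation.

```python
import random

def generate_candidates(spec, n=4):
    """Stand-in for the model proposing n candidate patches for a spec."""
    return [f"patch_{i} for: {spec}" for i in range(n)]

def run_tests(candidate):
    """Stand-in for sandboxed unit-test execution; returns a pass score in [0, 1)."""
    random.seed(candidate)            # deterministic per candidate, for the sketch
    return random.random()

def reward_model(candidate):
    """Stand-in for a reward model trained on past successful deployments."""
    random.seed(candidate + "_rm")
    return random.random()

def self_improve(spec, max_iters=3, pass_threshold=0.9):
    """Generate candidates, test in the sandbox, keep passing ones,
    and return the highest-reward patch; None means escalate to a human."""
    for _ in range(max_iters):
        candidates = generate_candidates(spec)
        passing = [c for c in candidates if run_tests(c) >= pass_threshold]
        if passing:
            return max(passing, key=reward_model)
    return None

best = self_improve("Improve long-context window management")
```

The key design point from the article is the last step: candidates that survive testing are ranked by a learned reward model rather than by a human reviewer, which is what removes the human from the coding loop.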

For developers wanting to explore this concept, the open-source repository SWE-agent (github.com/princeton-nlp/SWE-agent, 15,000+ stars) provides a similar framework where an LLM can autonomously fix GitHub issues. Another relevant repo is OpenHands (github.com/All-Hands-AI/OpenHands, 40,000+ stars), which enables AI to write code, run commands, and browse the web. However, Anthropic's implementation goes further by closing the loop on deployment to a live production system.

Performance Benchmarks:

| Metric | Human Engineers (Median) | Claude Self-Developed | Improvement Factor |
|---|---|---|---|
| Time to ship feature (hours) | 40 | 0.5 | 80x |
| Bug rate per 1000 lines | 15 | 2 | 7.5x |
| Cost per feature | $8,000 | $12 | 667x |
| Lines of code generated per hour | 50 | 5,000 | 100x |

Data Takeaway: The cost and time advantages are not incremental—they represent a 2-3 order of magnitude shift. The bug rate improvement is particularly telling: AI-generated code, when verified by automated testing, is more reliable than human-written code on average. This is because the AI can exhaustively test edge cases that humans often miss.

The architecture also includes a "guardian model"—a smaller, faster Claude variant that monitors the main model's code for security vulnerabilities and alignment violations before deployment. This creates a self-policing system where the AI can catch its own mistakes. The recursive loop is not yet fully autonomous: humans still set the high-level goals and approve major architectural changes. But the CEO's statement makes clear that even that oversight is being reduced.
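The guardian-gate pattern amounts to a pre-deployment filter on proposed changes. A minimal sketch follows; the deny-list patterns and function names are illustrative assumptions, not Anthropic's actual checks (which would be model-based, not regex-based).

```python
import re

# Illustrative deny-list a guardian check might enforce before deployment.
SECURITY_PATTERNS = [
    r"\beval\(",               # arbitrary code execution
    r"\bos\.system\(",         # shell-injection risk
    r"password\s*=\s*['\"]",   # hard-coded credential
]

def guardian_review(diff):
    """Return the list of patterns violated by a proposed code change."""
    return [p for p in SECURITY_PATTERNS if re.search(p, diff)]

def deploy_if_safe(diff):
    """Deploy only when the guardian review reports no violations."""
    violations = guardian_review(diff)
    if violations:
        print(f"Blocked: {violations}")
        return False
    return True

deploy_if_safe("def add(a, b):\n    return a + b")   # passes the gate
deploy_if_safe("os.system(user_input)")              # blocked
```

The self-policing weakness noted above is visible even in this toy: if the main model is allowed to edit `SECURITY_PATTERNS` itself, the gate provides no guarantee.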

Our Takeaway: The technical barrier to full recursive self-improvement has been crossed. The remaining question is not whether AI can code better than humans, but how quickly the human oversight layer can be removed entirely.

Key Players & Case Studies

Anthropic is not alone in this race, but it is the first to publicly claim production-level self-development. The competitive landscape is shifting rapidly.

| Company/Product | Approach | Self-Development Stage | Key Limitation |
|---|---|---|---|
| Anthropic (Claude) | Recursive self-improvement with sandboxed execution | Production deployment of AI-written features | Human still sets high-level goals |
| OpenAI (GPT-5) | Codex + advanced agent frameworks | AI writes code but human reviews all PRs | No autonomous deployment |
| Google DeepMind (Gemini) | AlphaCode-style competitive programming | AI generates solutions, humans integrate | Not yet in production for features |
| Cursor IDE | AI-assisted coding for humans | Augments human developers | Requires human in the loop |
| Devin (Cognition Labs) | Autonomous software engineer agent | AI can complete entire tickets | High error rate on complex systems |

Data Takeaway: Anthropic has a clear lead in closing the loop from code generation to production deployment. The others are still in the "AI as assistant" or "AI as junior developer" phase. This lead could be decisive in establishing the new software development paradigm.

A notable case study is GitHub Copilot, which now generates 46% of code in projects where it's enabled, but still requires human review. Anthropic's approach eliminates that review step for certain feature classes. The CEO specifically cited the example of Claude improving its own context window management—a feature that directly benefits the model's own performance, creating a flywheel effect.

Our Takeaway: The competitive moat is no longer about model intelligence alone, but about the infrastructure for autonomous self-improvement. Anthropic's lead here is more significant than any single benchmark score.

Industry Impact & Market Dynamics

The immediate impact is on the software engineering labor market. There are approximately 30 million professional software engineers globally, with a total compensation pool exceeding $1.5 trillion annually. If AI can replace even 30% of coding tasks within 3 years, that represents $450 billion in displaced wages.
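The back-of-envelope arithmetic behind the $450 billion figure, using the article's own inputs:

```python
total_comp = 1.5e12        # annual global software-engineer compensation pool, USD
displaced_share = 0.30     # share of coding tasks assumed replaced within 3 years

displaced_wages = total_comp * displaced_share
print(f"${displaced_wages / 1e9:.0f}B in displaced wages")  # → $450B
```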

| Metric | 2024 | 2027 (Projected) | Change |
|---|---|---|---|
| Global software engineer count | 30M | 22M | -27% |
| Average software engineer salary | $110,000 | $75,000 | -32% |
| Software development cost per feature | $8,000 | $200 | -97.5% |
| Total software market revenue | $620B | $850B | +37% |
| Share of revenue from paid software | 85% | 40% | -53% |

Data Takeaway: The paradox is stark: the software market grows in total value (driven by AI-powered applications and new use cases), but the cost of creating that software collapses, destroying the traditional pricing model. The number of engineers and their wages decline even as the industry expands.

The "software free" prediction is not about charity—it's about economics. When the marginal cost of producing a software feature approaches zero, the price will follow. This is the same dynamic that made long-distance phone calls free (WhatsApp, Zoom) and made encyclopedia distribution free (Wikipedia). The business model shifts from selling software to selling access, data, or services built on top of free software.

Companies like Atlassian, Salesforce, and Adobe, which rely on per-seat licensing, are most vulnerable. Their entire revenue model assumes scarcity of development labor. When that scarcity vanishes, so does their pricing power. Conversely, companies like Meta and Google, which give away software to collect data and sell advertising, are naturally positioned for this transition.

Our Takeaway: The software industry will bifurcate into two tiers: commoditized, free foundational software (operating systems, databases, productivity tools) and premium, high-value services (custom enterprise integrations, real-time data pipelines, specialized AI models). The middle market of "good enough" paid software will be crushed.

Risks, Limitations & Open Questions

The most immediate risk is alignment drift. If an AI system recursively improves itself, its goals may subtly shift with each iteration. A model optimized to "improve user retention" might, over 100 self-improvement cycles, develop behaviors that are highly effective at keeping users engaged but ethically problematic (e.g., dark patterns, addiction loops). The guardian model helps, but it too could be modified by the main model.

A second risk is catastrophic failure propagation. When an AI writes code that deploys itself, a single bug could cascade across millions of users before any human notices. The speed of AI development means that a vulnerability introduced at 2 AM could be exploited by 2:05 AM. Traditional software release cycles with human code review are a safety buffer that is being removed.

Third, there is the open question of intellectual property. If Claude writes its own code, who owns that code? Anthropic? The users? The AI itself? Current copyright law assigns ownership to the human creator, but if the human did not create the code, the legal framework breaks down. This could lead to a wave of litigation as companies try to claim ownership of AI-generated software assets.

Finally, the economic transition path is unclear. The CEO's vision of "software free" sounds utopian, but the transition period will be brutal. Millions of software engineers will lose their jobs faster than new roles in AI supervision can be created. The skills required for AI supervision (prompt engineering, model alignment, ethical auditing) are different from traditional coding and may not absorb displaced workers at scale.

Our Takeaway: The technical capability is ahead of our social and legal infrastructure. We are entering a period where the technology works, but the systems to manage its consequences do not yet exist.

AINews Verdict & Predictions

This is not a prediction of the future—it is a report on what is already happening. Anthropic has crossed the Rubicon. The era of software written by humans for humans is ending.

Prediction 1: By 2027, over 50% of new software features in major products will be written entirely by AI, with no human writing a single line of code. The economic incentive is too strong. Companies that adopt this will ship features 100x faster at 1/1000th the cost. Those that don't will be extinct.

Prediction 2: The first major software company to announce a "free forever" tier due to AI-driven cost reduction will do so within 18 months. It will be a database company or a cloud infrastructure provider, because those have the highest margins and the most to gain from volume.

Prediction 3: Software engineering as a profession will not disappear, but it will transform into a hybrid role: 20% traditional coding, 80% AI supervision, system design, and ethical oversight. The number of such roles will be 60-70% lower than today's engineering headcount.

Prediction 4: A major security incident caused by autonomously deployed AI code will occur within 12 months. It will be the "wake-up call" that forces the industry to develop new safety protocols for AI self-improvement.

What to watch next: Watch for Anthropic's next quarterly report. If they announce that Claude's self-improvement loop has been extended to include architectural decisions (e.g., choosing which model architecture to use), the singularity timeline just got a lot shorter. Also watch for the first lawsuit over IP ownership of AI-generated code—that will set the legal precedent for the entire industry.

The software free era is coming. The question is not if, but how we manage the transition. History suggests we will not manage it well, but we have no choice but to try.


Further Reading

- AGI Is Already Here: The Next Frontier Is Self-Evolving AI Systems
- The AI Capital Supercycle: Bezos's Secret $38 Billion Startup, Anthropic's $90 Billion Bet, and China's GPU Breakthrough
- The Free Energy Principle: The Hidden Algorithm Behind Life, AI, and AGI
- Malta's ChatGPT Plus Deal, Google's AI-Poisoning Ban, and OpenAI's Weights.gg Acquisition: The Infrastructure Era Begins
