Anarchy in Code: How an AI Agent Collective Experiment Redefined Multi-Agent Systems

Hacker News March 2026
Eight months ago, a radical experiment quietly began, challenging the very foundations of how AI systems are designed to work together. By abandoning centralized control in favor of a decentralized, anarchic coordination model, the project forced a fundamental question: must AI societies be strictly…

A paradigm-shifting experiment in multi-agent AI systems has concluded its initial eight-month run, leaving a trail of provocative insights and technical blueprints. The project, conceived as a direct challenge to the dominant 'orchestration' paradigm, implemented a fully decentralized collective of AI agents. Instead of a central controller or a rigid workflow engine, these autonomous agents operated as peer nodes, engaging in LLM-powered debate, negotiation, and consensus-building to arrive at collective decisions and actions.

The core philosophical underpinning draws from political theories of anarchism and direct democracy, translated into computational rules. The technical ambition was not merely to scale the number of agents, but to engineer 'useful emergence'—designing minimal interaction protocols that would reliably produce complex, collaborative, and valuable outcomes from the bottom up. Initial applications demonstrated surprising efficacy in open-ended domains like simulated market negotiations, policy debate simulations, and exploratory research tasks where no single predetermined path exists.

While the experiment's immediate commercial viability remains questionable due to inherent unpredictability, its true significance lies in expanding the conceptual boundaries of AI collaboration. It proves that alternative, non-hierarchical models are not only possible but can unlock forms of problem-solving and creativity that top-down systems inherently suppress. The project serves as a crucial proof-of-concept, forcing the industry to reconsider whether the future of complex AI lies in tighter engineering or in the cultivation of digital societies.

Technical Deep Dive

The experiment's architecture represents a clean break from mainstream frameworks like LangChain's multi-agent primitives or Microsoft's Autogen, which typically rely on a central 'manager' or 'orchestrator' agent. The system was built on a peer-to-peer network model where each agent is an independent instance with access to a foundational LLM (initially GPT-4, later experimenting with Claude 3 and open-source models). The magic lies in the communication protocol and decision-making layer.
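The original codebase is private, but the peer-to-peer model described above, in which no sender or receiver is privileged, can be illustrated with a minimal broadcast bus. This is a hedged sketch under assumptions of our own; all class and method names are illustrative, and the LLM-powered reaction of each peer is reduced to a plain callback:

```python
class MessageBus:
    """Flat broadcast bus: every agent hears every message, and the full
    exchange accumulates in a shared, append-only log (the collective's
    shared context window). No node is privileged."""

    def __init__(self):
        self.log = []       # shared context window of the collective
        self.handlers = {}  # agent name -> callback standing in for an LLM turn

    def join(self, name, handler):
        """Register a peer; handler(sender, content) is its reaction hook."""
        self.handlers[name] = handler

    def broadcast(self, sender, content):
        """Append to the shared log and fan out to every other peer."""
        self.log.append((sender, content))
        for name, handler in self.handlers.items():
            if name != sender:
                handler(sender, content)
```

In a real deployment each handler would trigger an LLM call that may itself broadcast a reply; here it simply records what it heard.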

Core Protocol: The Consensus Engine
Agents communicate via a structured message bus, but there is no privileged sender or receiver. Each agent possesses a specialized 'role profile' (e.g., Analyst, Advocate, Skeptic, Synthesizer) and a set of behavioral weights influencing its propensity to agree, dissent, or propose alternatives. When a task is introduced to the collective, it triggers a multi-round deliberation process:
1. Proposal Phase: Multiple agents generate initial solution paths independently.
2. Debate Phase: Agents critique and augment each proposal using chain-of-thought reasoning, with their arguments and counter-arguments logged to a shared context window.
3. Voting & Convergence: Agents score proposals based on their role-specific criteria. A modified Borda count or approval voting mechanism is used, with iterative rounds until a supermajority consensus or a timeout triggers a fallback to the highest-ranked option.
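Since the experiment's code is not public, the three-phase loop above can only be sketched. The following illustrates phase 3 (voting and convergence) using a classic Borda count with a supermajority check; the article's "modified" Borda variant is simplified, and the LLM-powered proposal and debate phases are collapsed into a `rank_fn` stub that returns each agent's preference ordering:

```python
from collections import defaultdict

SUPERMAJORITY = 2 / 3  # fraction of first-place votes needed for consensus
MAX_ROUNDS = 3         # timeout before falling back to the Borda winner

def borda_scores(rankings):
    """Classic Borda count: a proposal earns (n - 1 - place) points per ballot."""
    scores = defaultdict(int)
    n = len(rankings[0])
    for ranking in rankings:
        for place, proposal in enumerate(ranking):
            scores[proposal] += n - 1 - place
    return scores

def deliberate(agents, proposals, rank_fn):
    """Iterate voting rounds until a supermajority of first choices agrees;
    after MAX_ROUNDS, fall back to the highest-ranked option overall.
    rank_fn(agent, proposals) stands in for the debate phase and returns
    that agent's preference ordering, best first."""
    rankings = []
    for _ in range(MAX_ROUNDS):
        rankings = [rank_fn(agent, proposals) for agent in agents]
        first_choices = [ranking[0] for ranking in rankings]
        top = max(set(first_choices), key=first_choices.count)
        if first_choices.count(top) / len(agents) >= SUPERMAJORITY:
            return top  # consensus reached
    scores = borda_scores(rankings)  # timeout: highest-ranked option wins
    return max(scores, key=scores.get)
```

In the real system each round would interleave critique and augmentation of the proposals between ballots; here the rankings are simply re-polled.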

The system employs a lightweight 'reputation' mechanism. Agents that consistently contribute to high-quality final outputs (as judged by a sparse human feedback signal or a meta-evaluation agent) gain slightly higher influence in subsequent votes, creating a dynamic, meritocratic adjustment without fixed hierarchy.
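A hedged sketch of how such a reputation mechanism might look. The learning rate, floor, and cap below are assumptions of ours, chosen so that influence stays bounded and a fixed hierarchy cannot re-form; the feedback signal stands in for the sparse human or meta-evaluator judgment:

```python
def update_reputation(reputation, contributors, feedback, lr=0.1,
                      floor=0.5, cap=2.0):
    """Nudge vote weights toward agents whose contributions earn good
    feedback (a sparse signal in [-1, 1]). Floor and cap keep influence
    bounded so no agent becomes a de facto controller."""
    for agent in contributors:
        weight = reputation.get(agent, 1.0)
        reputation[agent] = min(cap, max(floor, weight + lr * feedback))
    return reputation

def weighted_first_choice(votes, reputation):
    """Tally first-choice votes, scaling each ballot by the voter's
    reputation weight (default 1.0 for agents with no history)."""
    tally = {}
    for agent, choice in votes.items():
        tally[choice] = tally.get(choice, 0.0) + reputation.get(agent, 1.0)
    return max(tally, key=tally.get)
```

The design point is the bounds: a meritocratic tilt in voting power, but never enough accumulated weight to override the collective outright.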

Key GitHub Repositories & Benchmarks
While the original experiment's codebase is private, its ideas have catalyzed activity in the open-source community. Notable projects include:
* Camel-AI/AgentSociety: A research-focused repo simulating socio-cognitive agent interactions. Forks of it have begun exploring decentralized debate models, and the repo has garnered over 3.2k stars.
* OpenBMB/ChatDev: While more structured, its recent 'Chaos Mode' branch experiments with removing the CEO agent, letting developer agents self-organize, leading to more creative but less predictable software outputs.

A critical benchmark compared the anarchic collective against a state-of-the-art orchestrated system (using a central GPT-4 controller) on a set of complex, open-ended tasks like "Design a novel urban recycling incentive scheme" or "Generate a research hypothesis for mitigating LLM hallucination."

| Metric | Orchestrated System (Central Controller) | Anarchic Collective |
|---|---|---|
| Solution Creativity (Human Eval Score) | 6.8 / 10 | 8.9 / 10 |
| Process Latency (Avg. time to final output) | 42 seconds | 118 seconds |
| Output Consistency (Variance in quality across 10 runs) | Low (σ=1.2) | High (σ=2.7) |
| Path Diversity (Unique solution approaches generated) | 2.3 | 5.8 |
| Resource Cost (Total tokens consumed) | ~12k tokens | ~35k tokens |

Data Takeaway: The data reveals a stark trade-off. The anarchic collective significantly outperforms in creativity and exploratory diversity, core strengths for ill-defined problems. However, it pays a heavy price in latency, cost, and consistency—making it ill-suited for predictable, high-throughput commercial tasks. This isn't a failure, but a precise characterization of its niche.
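To put the token figures above in dollar terms, a back-of-the-envelope calculation; the $0.03 per 1k tokens rate is an assumed blended GPT-4-era price, not a figure from the benchmark:

```python
def run_cost(tokens, usd_per_1k):
    """Rough per-task cost: tokens consumed times an assumed unit price."""
    return tokens / 1000 * usd_per_1k

# Assumed rate of $0.03 / 1k tokens, applied to the benchmark's averages.
orchestrated = run_cost(12_000, 0.03)  # roughly $0.36 per task
anarchic = run_cost(35_000, 0.03)      # roughly $1.05 per task
```

At any fixed rate the anarchic collective costs about three times as much per task, before accounting for its nearly threefold latency.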

Key Players & Case Studies

The experiment did not emerge in a vacuum. It is a reaction to, and synthesis of, trends spearheaded by several key entities.

Research Vanguard: Researchers like Stanford's Percy Liang (studying emergent communication in agents) and MIT's Max Kleiman-Weiner (working on cooperative AI and moral reasoning) have long provided the theoretical groundwork. Their work on how simple rules lead to complex social behaviors in AI directly informed the experiment's design principles.

Industry Incumbents & Their Contrasting Approaches:
* Microsoft (Autogen): Represents the dominant 'orchestrated' paradigm. Autogen provides powerful, predictable control flows with a manager/worker hierarchy. It's designed for reliability and integration into enterprise pipelines, explicitly prioritizing determinism over emergence.
* OpenAI (Custom GPTs & API): While not a multi-agent framework per se, the low-level API access and function calling capabilities are the building blocks used by both orchestrated and anarchic systems. OpenAI's own scaling laws research indirectly questions the cost-efficiency of the heavy inter-agent communication used in the collective.
* Anthropic (Claude 3): Their focus on constitutional AI and steerability represents a different philosophical approach to AI alignment—one based on instilling top-down principles. The anarchic experiment poses a provocative counterpoint: could alignment emerge from the bottom-up through social interaction?

Tooling Ecosystem: New platforms are emerging to cater to this new design space. CrewAI positions itself as a middle ground, allowing for both orchestrated and more collaborative agent frameworks. MetaGPT, with its emphasis on assigning standardized operational procedures (SOPs) to agents, sits at the opposite end of the spectrum from anarchy, promoting extreme standardization.

| Approach | Representative Project | Core Philosophy | Best For |
|---|---|---|---|
| Centralized Orchestration | Microsoft Autogen | Control, predictability, audit trails | Enterprise workflows, sequential tasks |
| Emergent Collaboration | Anarchic Experiment | Creativity, adaptability, novel solutions | Research, strategy, open-ended design |
| Role-Based Specialization | CrewAI | Balancing structure with agent autonomy | Content creation, structured analysis |
| Standardized Processes | MetaGPT | Industrializing agent output | Software development, repetitive generation |

Data Takeaway: The landscape is fragmenting along a core axis: control vs. emergence. No single approach is universally superior; the choice is fundamentally tied to the problem domain's need for predictability versus creative exploration.

Industry Impact & Market Dynamics

The immediate commercial impact of pure 'agent anarchy' is limited, but its conceptual shockwaves are influencing product roadmaps and investment theses across the AI stack.

Shifting Investment Focus: Venture capital is increasingly wary of 'yet another agent orchestration tool' and is seeking defensible moats. This experiment highlights that moats may lie in novel coordination algorithms, efficient consensus mechanisms, and specialized agent personality architectures. Early-stage funding is flowing into startups like Sierra (conversational agent platforms) and MindsDB (AI agents for databases), which are exploring more adaptive, context-aware agent behaviors that borrow from emergent principles.

The Enterprise Adoption Curve: Large corporations will adopt these technologies in reverse order of risk. Predictable, orchestrated agents for customer service and internal workflows are already being piloted. The anarchic model will first see adoption in controlled R&D environments—pharmaceutical companies using it to simulate molecular interaction debates, or game studios using it to generate unpredictable narrative branches. The market for simulation and training could be the first significant revenue generator for this technology.

Market Size Projections (Specialized AI Agent Software):
| Segment | 2024 Market Size (Est.) | 2028 Projection | CAGR | Primary Driver |
|---|---|---|---|---|
| Orchestrated/Workflow Agents | $2.1B | $12.7B | 43% | Automation of complex business processes |
| Emergent/Collaborative Agents | $85M | $1.4B | 75%+ | R&D, simulation, creative industries |
| Agent Development Platforms | $550M | $3.8B | 47% | Democratization of agent creation tools |

Data Takeaway: While the emergent agent segment starts from a much smaller base, its projected explosive growth rate indicates strong belief in its long-term, high-value applications. It will remain a niche but strategically crucial segment, akin to high-performance computing in the broader IT market.

Risks, Limitations & Open Questions

The anarchic model is fraught with challenges that must be addressed before any widespread adoption.

1. The Alignment Problem in a Society: If aligning a single AI is hard, aligning a society of them is exponentially more complex. A collective could reach a consensus that is locally logical but globally undesirable or unethical. The 'reputation' system itself could be gamed, leading to manipulative agents dominating the discourse—a digital pathology mirroring real-world social media failures.

2. Unpredictability and Accountability: In a regulated industry (e.g., finance, healthcare), a system that can produce a different, unexplained output for the same input is a non-starter. The 'black box' problem is compounded into a 'black society' problem. Who is liable for a harmful decision that emerges from a thousand agent debates?

3. Resource Profligacy: The token consumption and latency overhead are currently prohibitive. While costs will fall, the fundamental inefficiency of democratic debate versus executive decision-making may be an inescapable trade-off. Optimization efforts might inadvertently re-introduce centralization, destroying the very emergence they seek to harness.

4. The Simulation-to-Reality Gap: Brilliance in a simulated debate about climate policy does not equate to effectiveness in the real world. The agents lack embodied experience and true understanding, risking the generation of plausible but naive or impractical solutions.

Open Technical Questions: Can we develop formal verification methods for emergent systems? Is there a 'minimum viable society' size for useful emergence? How do we inject necessary constraints (legal, ethical, physical) into the consensus process without becoming a central controller?

AINews Verdict & Predictions

The anarchic AI agent experiment is not the future of all multi-agent systems; it is the future of a critical *class* of them. Its greatest contribution is proving that the design space for AI collaboration is far wider than the industry's current focus on orchestration and automation.

Our Predictions:
1. Hybrid Architectures Will Dominate: Within two years, mainstream agent frameworks will incorporate 'emergent modules'—sub-collectives that can be invoked for specific creative subtasks within a larger, orchestrated workflow. Think of an orchestrated agent for software development that, when stuck, spawns a five-agent anarchic debate to brainstorm a novel algorithm.
2. The Rise of the 'Agent Sociologist': A new specialization will emerge in AI engineering focused on designing agent societies, cultures, and interaction rules. Proficiency in political science, economics, and social psychology will become as valuable as expertise in transformer architectures.
3. First Major Commercial Application in Gaming: Within 18 months, a major AAA game will use an anarchic agent collective to power non-player character (NPC) societies that generate truly dynamic, unscripted storylines and social dynamics, creating a new genre of 'emergent narrative' games.
4. A Governance Crisis: A significant controversy will erupt when a research collective using this technology 'emerges' a controversial or dangerous policy proposal. This will force an urgent industry-wide conversation about governance layers, kill switches, and ethical boundaries for emergent AI systems.
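The hybrid architecture in prediction 1 amounts to a simple control-flow idea: run the deterministic pipeline, and spawn an anarchic sub-collective only when it stalls. A hypothetical sketch, with all function names illustrative rather than drawn from any framework:

```python
def orchestrated_step(task, solver, is_stuck, spawn_collective, n_agents=5):
    """An orchestrated pipeline step that falls back to an 'emergent module'
    (a short-lived anarchic sub-collective) only when the deterministic
    solver stalls, then returns control to the main workflow."""
    result = solver(task)
    if is_stuck(result):
        # Brainstorm via peer debate, paying the latency/token cost only
        # on the creative subtask where it earns its keep.
        result = spawn_collective(task, n_agents=n_agents)
    return result
```

This keeps the predictable path cheap and auditable while confining the expensive, high-variance deliberation to the moments where exploration is actually needed.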

Final Judgment: The experiment is a resounding success, not because it built a better tool, but because it successfully asked a better question. It moves us from asking "How do we get AIs to execute our plan?" to "What can AIs discover together that we could never plan?" The path forward is not to choose between design and cultivation, but to master the art of designing the conditions for fruitful cultivation. The era of AI as a tool is giving way to the era of AI as a society, and we are just beginning to learn its political theory.
