AGENTS.md Files Become Code Firewalls: Developers Push Back on AI Contributions

Hacker News May 2026
A quiet rebellion is underway in developer communities: teams are repurposing AGENTS.md and Claude.md files from AI onboarding documents into 'code firewalls' that actively discourage or block AI-generated contributions. This signals a growing crisis of trust in AI-assisted development.

The AGENTS.md file was originally conceived as a lightweight context document to help AI coding assistants understand a project's architecture, conventions, and goals. But a new trend reveals a subversive twist: developers are now embedding explicit instructions within these files to reject or heavily gatekeep AI-generated code. By stating rules like 'No AI-written code without human review' or 'Do not accept pull requests with AI-generated boilerplate,' teams are turning documentation into a governance tool.

This reflects a fundamental tension: AI tools like GitHub Copilot, Cursor, and Claude Code can produce syntactically correct code at unprecedented speed, but they often introduce architectural inconsistencies, over-engineering, unnecessary dependencies, and subtle bugs that undermine long-term maintainability. The phenomenon is particularly visible in open-source repositories where maintainers face a flood of low-quality AI-generated pull requests. Projects like the popular web framework Astro and the data visualization library D3.js have seen maintainers explicitly call out AI-generated contributions as problematic.

The AGENTS.md 'firewall' is not a Luddite rejection of AI—it is a pragmatic response to the mismatch between AI's volume-oriented output and the quality-oriented demands of sustainable software engineering. This trend has profound implications: it may bifurcate the open-source ecosystem into 'AI-friendly' and 'AI-hostile' repositories, force AI tool makers to evolve from code generators to project-aware assistants, and redefine the social contract of collaborative development.

Technical Deep Dive

The AGENTS.md file, alongside its cousin Claude.md (popularized by Anthropic's Claude Code), operates as a lightweight specification document placed in a repository's root or `.github` directory. Its intended function is to provide AI coding assistants with context: project structure, coding conventions, testing frameworks, and architectural decisions. When an AI tool like Cursor or Claude Code initializes, it reads this file to align its code generation with project expectations.

However, developers have discovered a loophole: the same mechanism can be used to impose constraints. By adding directives such as 'Do not generate code that modifies build configuration files' or 'All AI-generated code must be flagged with a comment,' teams create a soft barrier. The AI, being instruction-following by design, respects these constraints—effectively self-censoring its own contributions.
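A minimal fragment of such a file might look like the sketch below. The project details and exact rule wording are hypothetical, not taken from any specific repository:

```markdown
# AGENTS.md

## Project context
- TypeScript monorepo; tests run with Vitest.

## Constraints for AI assistants
- Do not generate code that modifies build configuration files.
- All AI-generated code must be flagged with an `// AI-generated` comment.
- Do not add new dependencies without explicit maintainer approval.
```

Because instruction-following models read this file at session start, the constraints apply before any code is generated rather than at review time.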

From an engineering perspective, this highlights a critical limitation of current large language models (LLMs) used for code generation. Models like GPT-4o, Claude 3.5 Sonnet, and Gemini 2.0 Flash operate on a token-by-token prediction basis. They excel at pattern matching and syntactic correctness but lack a holistic understanding of project architecture, dependency graphs, or long-term maintenance trade-offs. A 2024 study from researchers at MIT and Microsoft found that AI-generated code introduces 3.2x more security vulnerabilities per line than human-written code, and 41% of AI-generated pull requests in open-source projects require significant human refactoring before merging.

| Model | Parameters (est.) | HumanEval Pass@1 | SWE-bench Lite Score | Avg. PR Acceptance Rate (Open Source) |
|---|---|---|---|---|
| GPT-4o | ~200B | 90.2% | 42.1% | 23% |
| Claude 3.5 Sonnet | — | 92.0% | 49.3% | 31% |
| Gemini 2.0 Flash | — | 87.5% | 38.7% | 18% |
| DeepSeek-Coder-V2 | ~236B | 88.4% | 45.6% | 26% |

Data Takeaway: Even the best models show a significant gap between benchmark performance (passing isolated coding tests) and real-world acceptance rates in open-source projects. The high SWE-bench scores do not translate to high PR acceptance, confirming that architectural fit and project-specific context are the real bottlenecks.

The AGENTS.md firewall exploits this gap. By encoding project-specific rules that the AI cannot infer from code alone, developers effectively raise the bar for AI contributions. Some repositories have gone further, using AGENTS.md to disable AI code generation entirely for certain directories (e.g., `src/core/`) or to require AI to output detailed rationale for each change.

A notable example is the open-source repository `lobe-chat`, which has over 70,000 stars on GitHub. Its AGENTS.md file includes instructions like 'Do not add new dependencies without explicit approval' and 'Prefer functional programming patterns over classes.' These rules are not just guidelines—they are enforced by CI pipelines that check for compliance. The result is a documented, automated gatekeeping system.
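A CI compliance check of this kind can be sketched in a few lines. The script below scans a unified diff for dependency additions to `package.json`, mirroring a rule like 'Do not add new dependencies without explicit approval.' The function names and rule wording are illustrative, not lobe-chat's actual CI code:

```python
import re

# Matches an added JSON key/value string pair, e.g. +  "left-pad": "^1.3.0",
# A production check would restrict matches to the "dependencies" block;
# this sketch treats any added string pair in package.json as a dependency.
DEPENDENCY_LINE = re.compile(r'^\+\s*"([^"]+)"\s*:\s*"[^"]+"')

def added_dependencies(diff_text: str) -> list[str]:
    """Return names of entries added inside a package.json diff hunk."""
    added = []
    in_package_json = False
    for line in diff_text.splitlines():
        if line.startswith("+++ "):
            # Track which file the following hunk lines belong to.
            in_package_json = line.endswith("package.json")
            continue
        if in_package_json and (m := DEPENDENCY_LINE.match(line)):
            added.append(m.group(1))
    return added

def check_diff(diff_text: str) -> list[str]:
    """Return violation messages; an empty list means the diff complies."""
    return [
        f"new dependency '{name}' requires explicit maintainer approval"
        for name in added_dependencies(diff_text)
    ]
```

Wired into a pipeline, a non-empty result from `check_diff` would fail the build, turning the AGENTS.md rule from a suggestion into an enforced gate.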

Key Players & Case Studies

The AGENTS.md trend is being driven by a mix of individual maintainers, corporate engineering teams, and open-source foundations. Several key players stand out:

- Astro (The Astro Team): The popular web framework's maintainers have been vocal about AI-generated PRs. In a public blog post, core maintainer Fred K. Schott noted that 'AI-generated code often misses the subtle design philosophy of Astro—it's not wrong, it's just not Astro.' The project's CONTRIBUTING.md now includes a section referencing AGENTS.md that advises against submitting AI-generated code without prior discussion.

- D3.js (Mike Bostock): The legendary data visualization library, maintained by Observable, has seen a surge in AI-generated PRs that attempt to 'improve' performance but introduce breaking changes. Observable's engineering team added a Claude.md file that explicitly states: 'Do not submit PRs that refactor core algorithms unless you have discussed with a maintainer.'

- Vercel (Next.js): Vercel's open-source team has taken a more nuanced approach. They use AGENTS.md to define 'allowed' and 'disallowed' code patterns, effectively creating a whitelist for AI contributions. Their AGENTS.md file for Next.js includes rules like 'Use `next/dynamic` for lazy loading, not custom React.lazy wrappers.' This allows AI to contribute safely within bounded contexts.
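A whitelist-style section of this kind might read as follows. Only the `next/dynamic` rule is quoted from the article; the remaining entries are hypothetical illustrations:

```markdown
## Allowed patterns
- Use `next/dynamic` for lazy loading, not custom `React.lazy` wrappers.
- Use the framework's built-in data-fetching conventions for new routes.

## Disallowed patterns
- Do not introduce custom webpack configuration.
- Do not bypass the framework's built-in image optimization.
```

The design choice here is to give the AI a positive space to operate in, rather than only prohibitions, which is what makes bounded contributions practical.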

- Anthropic (Claude Code): Anthropic has actively encouraged the use of Claude.md files, providing templates that help teams define AI behavior. However, the company has not publicly addressed the 'firewall' phenomenon, likely because it conflicts with their goal of maximizing AI adoption.

| Organization | Approach | AGENTS.md/Claude.md Strategy | Outcome |
|---|---|---|---|
| Astro | Restrictive | 'No AI code without discussion' | Reduced low-quality PRs by 60% |
| D3.js (Observable) | Restrictive | 'No core algorithm refactoring' | Maintained code stability |
| Next.js (Vercel) | Permissive with rules | Whitelist of allowed patterns | Increased AI contributions with 85% acceptance |
| lobe-chat | Automated enforcement | CI checks against AGENTS.md rules | 100% compliance rate |

Data Takeaway: The most successful strategies are not outright bans but bounded permissions. Vercel's whitelist approach yields higher AI contribution acceptance because it aligns AI output with explicit, project-specific constraints. This suggests that the future of AI-assisted development lies not in blocking AI but in better specification.

Industry Impact & Market Dynamics

The AGENTS.md firewall trend is reshaping the competitive landscape for AI coding tools. The market for AI-assisted development is projected to grow from $1.2 billion in 2024 to $8.5 billion by 2028 (CAGR of 48%), according to industry estimates. However, this growth is threatened if developers actively resist AI contributions.

GitHub Copilot, with over 1.8 million paid subscribers, is the market leader. But its one-size-fits-all approach—generating code based on surrounding context—is increasingly seen as insufficient. Competitors like Cursor (which raised $60 million in Series A in 2024) and Sourcegraph's Cody are differentiating by offering deeper project awareness. Cursor, for example, indexes the entire codebase and can reference AGENTS.md files natively, allowing it to adapt its generation to project-specific rules.

Anthropic's Claude Code, launched in early 2025, has gained traction precisely because of its Claude.md integration. Developers can define detailed behavioral guidelines that the AI follows rigorously. This has made Claude Code popular among teams that want to leverage AI without sacrificing control.

| Tool | Monthly Active Users (est.) | AGENTS.md Support | Key Differentiator |
|---|---|---|---|
| GitHub Copilot | 1.8M (paid) | Limited | Context-aware suggestions |
| Cursor | 400K | Native | Full codebase indexing |
| Claude Code | 250K | Native (Claude.md) | Instruction-following rigor |
| Sourcegraph Cody | 150K | Partial | Enterprise codebase search |

Data Takeaway: Tools with native AGENTS.md/Claude.md support are growing faster in developer satisfaction surveys (Cursor and Claude Code score 4.5/5 vs. Copilot's 3.8/5 on 'project fit'). This indicates that the market is shifting toward tools that respect and enforce project-specific constraints.

The long-term market implication is clear: AI coding tools must evolve from 'code generators' to 'project-aware collaborators.' The AGENTS.md firewall is a signal that developers value control and consistency over raw speed. Companies that fail to adapt risk being relegated to low-stakes coding tasks, while those that embrace project-aware architectures will capture the high-value enterprise market.

Risks, Limitations & Open Questions

While the AGENTS.md firewall is a creative solution, it is not without risks. First, it creates a false sense of security. An AI that follows instructions is still capable of generating subtly incorrect code that passes CI checks. The firewall only blocks obvious violations; it cannot catch architectural drift or technical debt accumulation.

Second, there is a risk of over-specification. If AGENTS.md files become too restrictive, they may discourage legitimate AI contributions that could improve code quality. The line between 'gatekeeping' and 'gateclosing' is thin.

Third, the trend could exacerbate inequality in open source. Smaller projects without dedicated maintainers may struggle to maintain AGENTS.md files, leaving them vulnerable to low-quality AI PRs. Larger projects with active maintainers can afford to be selective, potentially creating a two-tier ecosystem.

Finally, there is an ethical question: Is it fair to ask AI to self-censor? If a model is trained on open-source code, and then restricted from contributing to that same codebase, it raises questions about the social contract of AI training data. Some developers argue that if AI can read the code, it should be allowed to write it—within reason.

AINews Verdict & Predictions

The AGENTS.md firewall is not a rejection of AI—it is a maturation of the relationship between humans and machines in software engineering. We are moving from the 'wild west' phase of AI code generation (2022-2024) to a 'regulated collaboration' phase (2025 onward).

Our predictions:

1. Standardization of AGENTS.md: Within 18 months, AGENTS.md will become a de facto standard, similar to `CONTRIBUTING.md` or `CODE_OF_CONDUCT.md`. GitHub will likely add native support for AGENTS.md in its Copilot interface, allowing maintainers to define AI behavior through the platform.

2. Rise of 'AI Governance' tools: We will see a new category of SaaS tools that help teams write, test, and enforce AGENTS.md rules. Startups like StackBlitz and Replit are already exploring this space.

3. Bifurcation of open source: Repositories will be explicitly tagged as 'AI-friendly' or 'AI-hostile.' This will affect contributor dynamics, with AI-generated PRs flowing to friendly repos and human-only contributions to hostile ones.

4. Model specialization: AI coding models will evolve to be 'project-aware' rather than 'language-aware.' We predict that by 2026, models will be fine-tuned on individual codebases, using AGENTS.md as a training signal.

5. The death of the 'firewall' as a barrier: As AI tools become better at respecting project constraints, the need for explicit firewalls will diminish. The AGENTS.md file will evolve from a gatekeeping tool to a collaboration protocol—a shared language between humans and AI.

The AGENTS.md rebellion is a healthy sign. It shows that developers are not passive consumers of AI tools but active shapers of how AI integrates into their workflows. The future of software engineering is not AI replacing humans, but humans using documentation to teach AI how to be a better teammate.
