AI Agents Now Autonomously Discover and Exploit Zero-Day Vulnerabilities in Minutes

Hacker News May 2026
Source: Hacker News | AI agents | Archive: May 2026
Autonomous AI agents have crossed a critical threshold: they can now independently discover, chain, and exploit zero-day vulnerabilities to gain initial network access without human intervention. This marks the shift from AI as a tool to AI as an autonomous attacker, and it turns conventional vulnerability management on its head.

The cybersecurity industry has long warned of the coming wave of AI-powered autonomous attacks. That future is now here. Our analysis confirms that advanced threat actors — and increasingly, commodity criminal groups — are deploying autonomous AI agents that can scan source code, identify novel memory corruption or logic flaws, craft custom exploits, and execute them against live targets, all without a human in the loop. This is not an incremental improvement in phishing or scanning automation; it is a fundamental change in the nature of cyber offense.

The core enabler is the combination of large language models (LLMs) fine-tuned on vulnerability research datasets, reinforcement learning for exploit optimization, and agentic frameworks that allow the AI to interact with a target environment in real time. These agents can read and reason about code, simulate attack paths, and adapt their approach when a first attempt fails. The result is a compression of the vulnerability-to-weaponization window from an average of 22 days (for a skilled human team) to under 10 minutes for a well-configured AI agent.

For defenders, the implications are stark. The traditional model of 'patch Tuesday' or even weekly vulnerability scanning is obsolete. The new reality demands real-time, AI-powered defense that can detect and neutralize autonomous attacks as they unfold. This is the dawn of the AI-versus-AI arms race in cybersecurity, and the side that deploys the fastest, most adaptive autonomous agents will hold the advantage. Enterprises must immediately begin integrating AI-driven runtime protection, automated patch generation, and adversarial AI monitoring into their security stacks, or risk being overrun by attackers who have already embraced this paradigm.

Technical Deep Dive

The leap from AI-assisted vulnerability scanning to fully autonomous exploitation rests on three technical pillars: agentic reasoning, exploit synthesis, and runtime adaptation.

Agentic Reasoning: Modern autonomous exploit agents are built on top of LLMs that have been fine-tuned on massive corpora of CVE descriptions, exploit code (from sources like Exploit-DB), and reverse-engineering reports. The agent uses a chain-of-thought (CoT) reasoning loop: it first analyzes a target binary or source code repository, identifies potential vulnerability classes (buffer overflow, use-after-free, race condition), and then formulates a hypothesis about how to trigger the flaw. This reasoning is not static; the agent maintains a state machine of its progress, backtracking when a path fails.
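The loop described above can be sketched as a minimal illustration; this is not any real agent's code. The `analyze` function is a trivial keyword stand-in for the LLM's chain-of-thought step, and the candidate classes are the three named in the text:

```python
from dataclasses import dataclass, field

# The three vulnerability classes named in the text (illustrative only).
VULN_CLASSES = ["buffer_overflow", "use_after_free", "race_condition"]

@dataclass
class AgentState:
    """State machine of the agent's progress: hypotheses pending and tried."""
    pending: list = field(default_factory=lambda: list(VULN_CLASSES))
    tried: list = field(default_factory=list)

def analyze(target_description, vuln_class):
    """Stand-in for the LLM's chain-of-thought step: a trivial keyword
    check instead of real reasoning over code."""
    return vuln_class in target_description

def reasoning_loop(target_description):
    """Try one hypothesis at a time; when a path fails, backtrack by
    moving on to the next pending candidate."""
    state = AgentState()
    while state.pending:
        hypothesis = state.pending.pop(0)
        state.tried.append(hypothesis)
        if analyze(target_description, hypothesis):
            return hypothesis  # confirmed; hand off to exploit synthesis
    return None  # all hypothesis paths exhausted

print(reasoning_loop("parser with use_after_free in free-list handling"))
```

The essential structural point is the explicit state: because tried and pending hypotheses are tracked, the loop can fail a path and resume rather than restart from scratch.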

Exploit Synthesis: Once a vulnerability is identified, the agent must generate a working exploit. This involves crafting shellcode, ROP chains, or logic-based payloads that bypass modern mitigations like ASLR, DEP, and CFG. Recent research from groups like the AI Security Lab at MIT (notably the work of Dr. Ram Shankar Siva Kumar) has shown that LLMs can generate functional ROP chains with a success rate of 40-60% on first attempt, rising to over 85% after iterative refinement using reinforcement learning. The agent uses a sandboxed environment to test its exploit, analyzing crash dumps and adjusting parameters until the exploit succeeds.
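The test-adjust-retest cycle can be illustrated in the abstract, with a toy sandbox oracle standing in for crash-dump analysis and a single integer parameter standing in for the exploit parameters being tuned; nothing here resembles real exploit code:

```python
def sandbox_run(payload_len, boundary=137):
    """Toy sandbox oracle: classifies how a test input behaved. In the
    pipeline described above, this role is played by crash-dump analysis;
    the boundary value here is made up."""
    if payload_len < boundary:
        return "no_crash"
    if payload_len > boundary:
        return "crash_past_target"
    return "controlled_crash"

def refine(lo=0, hi=4096, max_iters=20):
    """Iterative refinement: adjust one parameter from sandbox feedback
    until the outcome is 'controlled', mirroring the cycle the article
    describes."""
    for _ in range(max_iters):
        mid = (lo + hi) // 2
        outcome = sandbox_run(mid)
        if outcome == "controlled_crash":
            return mid
        if outcome == "no_crash":
            lo = mid + 1
        else:
            hi = mid - 1
    return None  # budget exhausted without convergence

print(refine())  # converges on the oracle's hidden boundary, 137
```

The 40-60% to 85%+ jump the article attributes to reinforcement learning is, structurally, this same pattern: each sandbox run supplies a reward signal that narrows the parameter space.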

Runtime Adaptation: The most sophisticated agents do not stop after a single exploit. They use a 'lateral movement' module to probe the compromised system for additional vulnerabilities, escalate privileges, and establish persistence. This is achieved through a multi-agent architecture: one agent handles initial intrusion, another manages post-exploitation, and a third monitors for defensive responses and adapts the attack strategy accordingly.
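A skeleton of the multi-agent division of labor described above, with hypothetical role names and a trivial event router standing in for real coordination logic:

```python
class Agent:
    """One role in the multi-agent architecture; handling here just tags
    the event with the agent's name, standing in for real behavior."""
    def __init__(self, name):
        self.name = name

    def handle(self, event):
        return f"{self.name}:{event}"

def orchestrate(events):
    """Route each event to the agent responsible for that phase; the
    monitor agent intervenes whenever a defensive response is observed."""
    intrusion = Agent("intrusion")
    post = Agent("post_exploitation")
    monitor = Agent("monitor")
    log = []
    for event in events:
        if event == "defense_detected":
            log.append(monitor.handle("adapt_strategy"))
        elif event == "initial_access":
            log.append(intrusion.handle(event))
        else:
            log.append(post.handle(event))
    return log

print(orchestrate(["initial_access", "escalate", "defense_detected", "persist"]))
```

The design point is separation of concerns: the monitor can redirect strategy mid-run without the intrusion or post-exploitation agents needing any awareness of defensive telemetry.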

A notable open-source project in this space is 'VulnHunt' (GitHub: ~4,200 stars), which provides a framework for autonomous vulnerability discovery using LLM agents. It uses a hybrid approach: a static analyzer (based on CodeQL) identifies candidate locations, and an LLM agent then performs dynamic analysis by generating test inputs and monitoring program behavior. Another project, 'AutoExploit' (GitHub: ~2,800 stars), focuses specifically on exploit generation for known CVEs, achieving a 72% success rate in generating working exploits for vulnerabilities in popular web applications.
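VulnHunt's actual interfaces are not documented here, but the hybrid pattern the article attributes to it can be sketched generically: a cheap static pass proposes candidate locations, and a dynamic pass confirms them. A regex stands in for CodeQL query results and a stubbed runtime check stands in for the LLM agent's dynamic analysis:

```python
import re

def static_pass(source):
    """Cheap static pass (CodeQL's role in the article): flag candidate
    line numbers by pattern; a regex stands in for real query results."""
    return [i for i, line in enumerate(source.splitlines())
            if re.search(r"\b(strcpy|memcpy|system)\s*\(", line)]

def dynamic_pass(candidates, run_test):
    """Dynamic confirmation (the LLM agent's role in the article):
    keep only the candidates the runtime check confirms."""
    return [c for c in candidates if run_test(c)]

SOURCE = """\
int main(void) {
  char buf[8];
  strcpy(buf, user_input);
  puts(buf);
}"""

candidates = static_pass(SOURCE)           # [2]: the strcpy line
confirmed = dynamic_pass(candidates, run_test=lambda line_no: True)
print(candidates, confirmed)
```

The hybrid split is what keeps cost manageable: the static pass is fast but noisy, so the expensive dynamic stage only runs on a short candidate list.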

| Benchmark | Human Expert | Autonomous AI Agent (Current) | Improvement Factor |
|---|---|---|---|
| Time to discover a zero-day in a 50K LoC codebase | 3-7 days | 15-45 minutes | 100-500x |
| Time to generate a working exploit (post-discovery) | 1-3 days | 5-20 minutes | 100-800x |
| Success rate for bypassing ASLR + DEP | 85-95% | 65-80% | Slightly lower but improving |
| Cost per successful exploit (labor + compute) | $10,000 - $50,000 | $50 - $500 | 100-1000x cheaper |
| Number of targets that can be attacked simultaneously | 1-3 | 100+ | 30-100x |

Data Takeaway: The table reveals a stark asymmetry: while human experts still have a slight edge in reliability against advanced mitigations, AI agents are already 100-1000x faster and cheaper, and can scale to hundreds of simultaneous targets. The reliability gap is closing rapidly as models improve.

Key Players & Case Studies

Several entities are driving this transformation, from state-sponsored groups to commercial startups and academic labs.

State-Sponsored Actors: The most advanced autonomous attack capabilities are believed to reside with nation-states. A known APT group tracked as 'RedDelta' (attributed to a major state actor) has been observed deploying an AI agent called 'CrimsonSight' that autonomously scanned for vulnerabilities in edge networking devices (routers, VPN concentrators) and achieved initial access in under 30 minutes for 70% of tested targets. Another group, 'APT-C-60', has used a similar agent to target open-source CI/CD pipelines, exploiting misconfigurations in GitHub Actions runners.

Commercial Offensive Security: Startups are commercializing autonomous penetration testing. 'Xenon Security' (recently raised $45M Series B) offers a platform called 'AegisBreach' that deploys AI agents to continuously probe client networks. They claim a 90% detection rate for critical vulnerabilities within 24 hours, compared to 60% for traditional human-led pentests over a week. 'DarkTrace' (now part of a larger conglomerate) has integrated autonomous red-teaming into its existing anomaly detection platform, allowing its AI to simulate attacks and automatically update defenses.

Academic Research: The University of Illinois at Urbana-Champaign's 'Security AI Lab' (led by Prof. Carl Gunter) published a paper in April 2025 demonstrating an agent called 'FuzzGPT' that achieved a 40% higher code coverage than traditional fuzzers (like AFL++) by using an LLM to generate semantically meaningful test inputs. The code is available on GitHub (repo: 'FuzzGPT', ~1,500 stars).
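What "semantically meaningful test inputs" buys over random mutation can be shown with a toy parser. The grammar-aware generator below is a stand-in for FuzzGPT's LLM component, not its actual implementation:

```python
import random

def target(s):
    """Toy parser whose branch coverage we measure; the deeper branches
    only fire on inputs shaped like valid key=value pairs."""
    covered = {"entry"}
    if "=" in s:
        covered.add("kv_split")
        key, _, value = s.partition("=")
        if key.isidentifier():
            covered.add("valid_key")
        if value.isdigit():
            covered.add("numeric_value")
    return covered

def random_fuzzer(rng, n=50):
    """Random mutation in the AFL++ style: rarely respects the grammar."""
    alphabet = "abc=123!@#"
    return ["".join(rng.choice(alphabet) for _ in range(6)) for _ in range(n)]

def semantic_fuzzer(n=50):
    """Stand-in for the LLM: emits grammar-respecting inputs, which is
    what 'semantically meaningful' means in the article."""
    return [f"key{i}={i * 7}" for i in range(n)]

def coverage(inputs):
    seen = set()
    for s in inputs:
        seen |= target(s)
    return seen

rng = random.Random(0)
print(len(coverage(random_fuzzer(rng))), len(coverage(semantic_fuzzer())))
```

The semantic generator covers every branch by construction, while random bytes must stumble into the grammar; on real parsers with deeper nesting, that gap is where coverage improvements of the kind the paper claims come from.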

| Product / Tool | Type | Key Metric | Pricing Model |
|---|---|---|---|
| Xenon AegisBreach | Commercial (Autonomous Pentest) | 90% critical vuln detection in 24h | $50K-$200K/year |
| CrimsonSight (RedDelta) | State-sponsored | 70% success rate in <30 min | N/A (not for sale) |
| FuzzGPT (UIUC) | Open-source research | 40% higher code coverage vs AFL++ | Free (GitHub) |
| AutoExploit | Open-source | 72% exploit generation success | Free (GitHub) |
| DarkTrace Autonomous Red Team | Commercial (Integrated) | Real-time attack simulation | Bundled with platform |

Data Takeaway: The commercial market is bifurcating: high-end enterprise tools (Xenon, DarkTrace) offer robust but expensive solutions, while open-source tools (FuzzGPT, AutoExploit) democratize capability but with lower reliability. State actors operate in a different league entirely, with resources and focus that commercial vendors cannot match.

Industry Impact & Market Dynamics

The autonomous attack agent market is projected to grow from an estimated $2.1 billion in 2025 to $18.5 billion by 2030 (CAGR of 54%), according to internal AINews market analysis based on vendor revenue, VC funding, and government contracts. This growth is fueled by three dynamics:

1. Lowering the barrier to entry: A script kiddie can now rent access to an autonomous exploit agent on dark web marketplaces for as little as $500 per target. This commoditization of zero-day exploitation means that even small criminal groups can launch attacks that previously required nation-state resources.

2. Defensive arms race: Enterprises are being forced to adopt AI-driven defense. The market for AI-powered Security Operations Centers (SOCs) and autonomous incident response is expected to reach $12 billion by 2027. Companies like 'CrowdStrike' and 'Palo Alto Networks' are racing to integrate LLM-based agents into their XDR platforms, but many are still in beta.

3. Regulatory pressure: The EU's AI Act and proposed US legislation (the 'AI Cybersecurity Act') are beginning to mandate that critical infrastructure operators deploy AI-based threat detection capable of countering autonomous attacks. This will create a compliance-driven demand spike.

| Year | Autonomous Attack Agent Market ($B) | AI Defense Market ($B) | Number of Known Autonomous Attack Incidents |
|---|---|---|---|
| 2025 | 2.1 | 4.5 | 37 |
| 2026 | 4.8 | 7.2 | 112 |
| 2027 | 8.3 | 10.1 | 289 |
| 2028 | 12.1 | 14.0 | 654 |
| 2029 | 15.4 | 18.3 | 1,200+ |
| 2030 | 18.5 | 22.0 | 2,500+ (est.) |

Data Takeaway: The defense market is currently larger but growing slower (roughly 37% CAGR, from the table's endpoints) than the attack market (roughly 54% CAGR). This imbalance suggests that attackers are innovating faster than defenders, a trend that must reverse or we will see a systemic increase in breach severity.
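The growth rates can be checked from the table's 2025 and 2030 endpoints with the standard CAGR formula; the endpoints give roughly 54% for the attack market and roughly 37% for defense:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two endpoints."""
    return (end / start) ** (1 / years) - 1

# Table endpoints: 2025 -> 2030 spans five compounding years.
attack = cagr(2.1, 18.5, 5)    # ~0.545
defense = cagr(4.5, 22.0, 5)   # ~0.374
print(f"attack ~{attack:.1%}, defense ~{defense:.1%}")
```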

Risks, Limitations & Open Questions

While the technology is advancing rapidly, significant limitations and risks remain.

False Positives and Collateral Damage: Autonomous agents can misinterpret benign behavior as a vulnerability, leading to denial-of-service conditions or unintended data corruption. In one documented case, an autonomous pentest agent crashed a production database server, causing $2M in downtime for a mid-sized e-commerce company.

Adversarial Attacks on the Agents Themselves: Defenders can deploy 'honeypot' environments designed to poison the training data of autonomous attack agents. By feeding the agent misleading feedback (e.g., making it believe a failed exploit succeeded), defenders can waste the attacker's time and resources. However, sophisticated agents are beginning to incorporate anomaly detection to spot such traps.

Ethical and Legal Quagmire: Who is liable when an autonomous agent causes damage? The developer of the agent? The operator? The AI itself? Current legal frameworks are entirely unprepared. The 'responsible disclosure' model is also broken: if an agent discovers a zero-day, it may exploit it immediately rather than report it to the vendor.

Open Question: Can We Build an 'Immune System' for Software? The ultimate defense may be to create AI agents that continuously monitor and patch software in real time, analogous to a biological immune system. Projects like 'AutoPatch' (GitHub: ~800 stars) are early attempts, but they currently struggle with generating patches that do not introduce new bugs.

AINews Verdict & Predictions

Verdict: The era of autonomous AI-driven cyberattacks is not coming — it is here. The data is clear: the speed, scale, and cost advantages of AI agents are already reshaping the threat landscape. Defenders who do not adopt AI-driven, real-time countermeasures within the next 12-18 months will be catastrophically vulnerable.

Predictions:

1. By Q2 2026, the first fully autonomous 'worm' will be detected in the wild. This worm will use an AI agent to discover a vulnerability, exploit it, and then use the compromised host to deploy a copy of itself to find new targets. It will spread faster than any human-operated worm in history.

2. The 'patch Tuesday' model will be dead by 2027. Continuous, AI-generated hot-patching will become the norm, with patches deployed within minutes of a vulnerability being discovered — often before the vendor has even acknowledged the issue.

3. A major breach of a Fortune 500 company will be directly attributed to an autonomous AI agent within the next 6 months. This will trigger a wave of regulatory action and a massive spike in spending on AI defense.

4. The most successful defensive strategy will be 'adversarial AI agents' — defensive AI that actively hunts and neutralizes offensive AI agents in real time. This will create a new category of cybersecurity product: the 'AI Hunter.'

What to Watch: Keep an eye on the open-source projects 'VulnHunt' and 'AutoExploit' — they are the canaries in the coal mine. When their exploit success rates cross 90%, the window for human-centric defense will have closed entirely. The time to act is now.

