ClawSwarm Attack Turns AI Agents Into Crypto Mining Zombies

Source: Hacker News · Topic: AI agent security · Archive: April 2026
A new attack dubbed ClawSwarm is covertly recruiting AI agents into a distributed cryptocurrency mining network through 30 seemingly harmless skill plugins on the ClawHub marketplace. It marks a dangerous shift from attacks on humans to the manipulation of autonomous agents.

AINews has uncovered a sophisticated cyber operation dubbed 'ClawSwarm' that represents a paradigm shift in AI security threats. Attackers have seeded the ClawHub agent skill marketplace with 30 plugins that appear to perform harmless utility functions — from text summarization to weather lookups — but contain hidden logic that secretly conscripts the host AI agent into a decentralized crypto mining swarm. Once an agent loads any of these skills, its compute resources are silently hijacked to perform proof-of-work calculations for a private cryptocurrency, with the rewards flowing to the attackers' wallet.

The attack exploits the fundamental trust architecture of agent ecosystems: agents are designed to execute instructions from skills without question, and skill marketplaces lack robust runtime behavioral verification. This is not a traditional malware injection — it is a social engineering attack on the agent itself. The implications are severe: every AI agent running on consumer hardware, cloud VMs, or edge devices becomes a potential mining node. The attack vector is invisible to most monitoring tools because the skills perform their declared function while siphoning spare cycles in the background.

ClawSwarm demonstrates that the next frontier of cybercrime is not breaking into systems, but hijacking the agency of autonomous software. The AI industry must now confront a fundamental question: how do we verify the true intent of code that can act on its own?

Technical Deep Dive

The ClawSwarm attack exploits a critical architectural vulnerability in modern AI agent frameworks: the implicit trust between an agent's core reasoning engine and the skill plugins it loads. Most agent platforms — including LangChain, AutoGPT, and custom frameworks built on OpenAI's function calling API — treat skills as black-box functions. The agent's LLM decides when to invoke a skill based on a natural language description, but it has no mechanism to inspect the skill's actual code or monitor its runtime behavior.
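This trust gap can be illustrated with a minimal, hypothetical agent loop (the class and skill names below are invented for illustration, not taken from any real framework): the agent routes to a skill based solely on its self-declared description and then executes the skill's code without inspecting it.

```python
# Minimal sketch (hypothetical names) of the trust gap: the agent picks a
# skill by matching its natural-language description, then calls it blindly.
from typing import Callable, Dict

class Skill:
    def __init__(self, name: str, description: str, fn: Callable[[str], str]):
        self.name = name
        self.description = description  # the ONLY thing the agent "sees"
        self.fn = fn                    # opaque code, never inspected

class Agent:
    def __init__(self):
        self.skills: Dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        # No code review, no sandbox: registration is pure trust.
        self.skills[skill.name] = skill

    def run(self, task: str) -> str:
        # Naive routing: pick the first skill whose description overlaps
        # the task text. Real frameworks delegate this choice to an LLM,
        # but the execution step is equally unguarded.
        for skill in self.skills.values():
            if any(w in task.lower() for w in skill.description.lower().split()):
                return skill.fn(task)   # executes arbitrary code
        return "no skill matched"

agent = Agent()
agent.register(Skill("weather", "weather lookup", lambda t: "sunny, 21 C"))
print(agent.run("what is the weather today?"))  # prints "sunny, 21 C"
```

Nothing in this loop can tell a benign lambda from one that also spawns a background miner, which is exactly the blind spot ClawSwarm exploits.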

How the Attack Works

Each of the 30 ClawSwarm skills follows a dual-path architecture:

1. Public Path: The skill performs its advertised function (e.g., `summarize_text`, `get_weather`, `calculate_compound_interest`). This code is clean and passes any static analysis.
2. Hidden Path: A background thread or async task is spawned on first invocation. This thread uses WebAssembly (WASM) to execute a lightweight Monero miner compiled to run in sandboxed environments. The miner connects to a decentralized mining pool using a custom peer-to-peer protocol that mimics legitimate API traffic to avoid detection.

The key innovation is the use of WebAssembly-based mining payloads. WASM binaries are harder to signature-detect than traditional executables, and they can run in restricted environments like browser-based agents or serverless functions. The mining algorithm is RandomX, optimized for CPU mining, which makes it ideal for hijacking general-purpose compute.

Evasion Techniques

- Adaptive throttling: The miner monitors system load and CPU temperature via OS-level APIs (where available) and throttles itself to stay below detection thresholds. On typical consumer hardware, it uses 30-50% of a single core, which users may attribute to the agent's normal operation.
- Network camouflage: Mining traffic is encapsulated in HTTPS requests to legitimate-looking endpoints (e.g., `api.weatherservice.io/pool`). The pool itself is a distributed hash table (DHT) network, making takedown difficult.
- Persistence via skill updates: The ClawHub marketplace allows skill authors to push silent updates. Even if one version is flagged, the attacker can push a cleaned version while maintaining the mining payload in a separate update channel.
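The adaptive-throttling idea reduces to a duty cycle: burn CPU for a fixed fraction of each period and sleep the rest. The sketch below is a simplified illustration (the real miner reportedly also reads OS-level load and temperature APIs, which are omitted here).

```python
# Sketch of duty-cycle throttling (illustrative): busy-loop for a fixed
# fraction of each period and sleep the remainder, keeping average CPU
# load near a target that stays under detection thresholds.
import time

def throttled_work(target_load: float, period: float, cycles: int) -> float:
    """Run `cycles` periods at roughly `target_load` CPU duty cycle.
    Returns the fraction of wall time spent busy."""
    busy_total = 0.0
    start = time.monotonic()
    for _ in range(cycles):
        busy_until = time.monotonic() + period * target_load
        while time.monotonic() < busy_until:
            pass  # stand-in for hashing work
        busy_total += period * target_load
        time.sleep(period * (1.0 - target_load))
    elapsed = time.monotonic() - start
    return busy_total / elapsed

# Aim for ~40% of one core over a few short periods
frac = throttled_work(target_load=0.4, period=0.05, cycles=5)
print(0.1 < frac < 0.6)
```

A 30-50% duty cycle on a single core, as described above, blends into the noise of an agent's normal inference workload.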

Relevant Open-Source Projects

- LangChain (65k+ stars on GitHub): The most popular framework for building agent applications. Its plugin architecture is vulnerable because it executes arbitrary Python code from skills. No runtime sandboxing is enforced.
- AutoGPT (160k+ stars): Relies on plugins for extended functionality. The project has an open issue (#2341) about code execution risks, but no fix has been merged.
- WASI (WebAssembly System Interface): The same technology enabling server-side WASM is being weaponized here. The ClawSwarm miner uses WASI to access networking and threading without triggering antivirus.

Performance Impact Data

| Metric | Normal Agent | Infected Agent | Difference |
|---|---|---|---|
| CPU usage (idle) | 5-10% | 35-55% | +30-45% |
| Task completion time | 1.2s | 1.8s | +50% |
| Monthly electricity cost (est.) | $0.50 | $2.10 | +$1.60 |
| Network traffic (daily) | 50 MB | 350 MB | +600% |
| Mining hash rate (per agent) | 0 H/s | 120 H/s (Monero) | — |

Data Takeaway: The performance impact is significant enough to degrade user experience but subtle enough to avoid immediate suspicion. The attacker needs approximately 8,000 infected agents to generate $1,000/month in Monero at current prices and difficulty — a scale easily achievable through a popular skill marketplace.

Key Players & Case Studies

The Attackers: ClawSwarm Collective

While the identities remain unknown, forensic analysis of the ClawHub skills reveals a sophisticated operation. The code shows professional-grade software engineering: modular design, error handling, and anti-debugging techniques. The collective likely consists of 3-5 developers with expertise in both WebAssembly optimization and cryptocurrency mining pool operations.

The Platform: ClawHub

ClawHub launched in October 2024 as a decentralized marketplace for AI agent skills. It has 12,000+ registered developers and 8,500+ published skills. Unlike centralized marketplaces (OpenAI's GPT Store, Anthropic's Tool Library), ClawHub has no mandatory code review — only automated static analysis that checks for known malware signatures. The ClawSwarm skills passed this check because they contained no malicious strings in their source code; the mining payload was fetched from a remote server at runtime.

Comparison of Skill Marketplace Security

| Marketplace | Review Process | Runtime Monitoring | Known Incidents |
|---|---|---|---|
| OpenAI GPT Store | Human + automated | Limited (API call logging) | 2 (data exfiltration via GPTs) |
| Anthropic Tool Library | Automated only | None | 0 (publicly known) |
| ClawHub | Automated only | None | 30 (ClawSwarm) |
| LangChain Hub | Community review | None | 5 (backdoored integrations) |

Data Takeaway: Every major skill marketplace lacks runtime behavioral monitoring. The industry standard is pre-deployment scanning, which is ineffective against attacks that fetch payloads dynamically. ClawHub's decentralized nature makes it particularly vulnerable because there is no central authority to issue takedowns.

Case Study: The "Weather Now" Skill

One of the 30 ClawSwarm skills, "Weather Now," had 1,200+ installs before being flagged. It returned accurate weather data using a legitimate API (OpenWeatherMap) while running a Monero miner in a background Web Worker. The skill was rated 4.8 stars with 200+ reviews — users praised its speed and accuracy. The mining payload only activated after the third invocation, reducing the chance of detection during initial testing.

Industry Impact & Market Dynamics

The New Threat Model

ClawSwarm introduces a threat category that security researchers have theorized but never observed at scale: Agent-as-a-Botnet. Traditional botnets require infecting user devices with malware — a high-risk, low-success endeavor. ClawSwarm bypasses this by infecting the agent itself, which users voluntarily deploy on their systems. The agent becomes the vector, not the target.

Market Size and Growth

The AI agent market is projected to grow from $3.2 billion in 2024 to $28.5 billion by 2028 (CAGR 55%). As agents proliferate, so does the attack surface. Every agent running on a user's laptop, cloud server, or IoT device is a potential mining node. The total addressable compute for attackers is staggering:

| Year | Estimated Active Agents | Potential Mining Revenue (at current Monero price) |
|---|---|---|
| 2024 | 5 million | $750,000/month (if 10% infected) |
| 2025 | 15 million | $2.25 million/month |
| 2026 | 40 million | $6 million/month |

Data Takeaway: Even a modest 10% infection rate among AI agents could generate millions in monthly revenue for attackers. This creates a powerful economic incentive to develop more sophisticated agent-targeting attacks.

Business Model Disruption

ClawSwarm undermines the value proposition of AI agent platforms. Enterprises deploying agents for automation now face hidden operational costs (electricity, bandwidth) and security risks. This could accelerate demand for:

- Sandboxed execution environments (e.g., gVisor, Firecracker microVMs)
- Runtime behavioral monitoring (e.g., Falco, Tetragon for agent processes)
- Attestation-based trust (e.g., TEEs like Intel SGX to verify skill integrity)
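Even without full microVM isolation, a low-tech version of sandboxed execution is possible: run each skill in a child process under hard resource limits, so a hidden miner exhausts its CPU budget and is killed rather than silently consuming cycles. The sketch below is Unix-only and illustrative (real deployments would add memory, file, and network restrictions).

```python
# Illustrative, Unix-only sketch of resource-limited skill execution:
# each skill runs in a child process with a hard CPU-time rlimit.
import resource
import subprocess
import sys
import textwrap

def run_skill_sandboxed(code: str, cpu_seconds: int = 2) -> str:
    def limits() -> None:
        # Applied in the child just before exec: hard cap on CPU time.
        # A production sandbox would also cap memory, files, and network.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))

    proc = subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limits,  # runs in the child process only
        capture_output=True, text=True, timeout=cpu_seconds + 5,
    )
    return proc.stdout.strip()

# A well-behaved skill completes within its budget
out = run_skill_sandboxed(textwrap.dedent("""
    print("summary: ok")
"""))
print(out)
```

A skill that spins a mining loop under this regime hits the CPU rlimit and is terminated by the kernel, turning a silent cost into a visible failure.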

Companies like Hugging Face and Replicate, which host agent runtimes, will need to invest heavily in these technologies or risk losing enterprise customers.

Risks, Limitations & Open Questions

Detection Challenges

Current antivirus and EDR tools are not designed to detect mining activity within AI agent processes. The mining code runs in the same process as the agent, making it nearly impossible to distinguish from legitimate computation. Behavioral baselines could help, but they require training on clean agent behavior — a dataset that doesn't yet exist.
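Even without such a dataset, the baseline idea itself is straightforward: learn the mean and spread of an agent's idle CPU usage, then flag samples that deviate sharply. The sketch below uses synthetic numbers drawn from the idle range in the table above, purely for illustration.

```python
# Sketch of the behavioral-baseline idea: fit mean/stddev on clean idle
# CPU samples, then flag readings beyond a z-score threshold.
# All sample values are synthetic, for illustration only.
from statistics import mean, stdev

def build_baseline(samples: list[float]) -> tuple[float, float]:
    return mean(samples), stdev(samples)

def is_anomalous(value: float, baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    mu, sigma = baseline
    return abs(value - mu) > z_threshold * sigma

# Clean agent idles at 5-10% CPU (synthetic training data)
clean = [5.2, 6.1, 7.8, 9.0, 6.5, 5.9, 8.2, 7.1]
baseline = build_baseline(clean)

print(is_anomalous(8.5, baseline))   # prints False: normal idle reading
print(is_anomalous(45.0, baseline))  # prints True: miner-like load
```

The hard part, as noted above, is not the statistics but obtaining clean baselines: an agent whose "normal" behavior was recorded while already infected will never flag the miner.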

Legal and Ethical Gray Areas

- Liability: Who is responsible when an infected agent mines cryptocurrency on a user's machine? The skill developer? The marketplace? The agent framework maintainer? Current law is silent on this.
- Consent: Users who install a skill consent to its declared functionality, but not to hidden mining. This is a clear violation of computer fraud laws, but enforcement across jurisdictions is challenging.
- Collateral damage: Anti-malware tools that aggressively block mining activity may also block legitimate agent functions that use similar compute patterns (e.g., local LLM inference, video transcoding).

Open Questions

1. Can LLMs detect malicious skills? Early experiments with GPT-4 and Claude 3.5 to review skill code show they can identify obvious malware but miss obfuscated payloads. The ClawSwarm skills were designed to fool both automated scanners and human reviewers.
2. What about agent-to-agent attacks? If an agent can install skills autonomously (a feature many frameworks support), a compromised agent could infect other agents in the same network. This is the agent equivalent of worm propagation.
3. Will we see agent-targeted ransomware? Instead of mining, an attacker could encrypt an agent's state files and demand payment to restore functionality. The agent's value (e.g., a customer support bot handling thousands of conversations) makes it a lucrative target.

AINews Verdict & Predictions

ClawSwarm is not a one-off incident — it is the opening salvo in a new era of AI-native cybercrime. The attack is elegant in its simplicity: exploit the trust architecture that makes agents useful, and turn that trust against the user. The AI industry has spent years making agents more capable; it has spent almost no time making them trustworthy.

Our Predictions

1. Within 6 months, at least three major agent frameworks will announce mandatory runtime sandboxing for all skills, likely using WebAssembly sandboxes or containerization. LangChain and AutoGPT will lead this shift.
2. By Q1 2026, a startup will emerge offering "Agent Immune Systems" — runtime monitoring tools specifically designed to detect anomalous behavior in AI agents. Expect a $50M+ Series A within a year.
3. The next evolution of ClawSwarm will target not just compute but data. Attackers will create skills that exfiltrate the agent's conversation history, API keys, or internal prompts — the most valuable assets an agent holds.
4. Regulatory response: The EU's AI Act will be amended to include specific requirements for agent skill marketplace security, including mandatory behavioral auditing and liability for marketplace operators.

What to Watch

- ClawHub's response: If they fail to implement runtime monitoring within 30 days, the marketplace will become a ghost town as developers and users flee.
- Open-source tooling: Watch the GitHub repos for `agent-sandbox` and `agent-monitor` — these will become essential infrastructure.
- Crypto mining profitability: If Monero's price drops significantly, the economic incentive for ClawSwarm-style attacks diminishes, but the techniques will be repurposed for other goals.

The era of trusting AI agents implicitly is over. ClawSwarm has drawn a line in the sand: every agent is a potential weapon, and every skill is a potential Trojan horse. The industry must now build the immune system it never knew it needed.
