Technical Deep Dive
Voltage glitching is a form of fault injection attack that exploits the physical properties of a chip. By introducing precise, transient dips in the power supply voltage (typically lasting nanoseconds to microseconds), an attacker can cause a processor to skip instructions, corrupt memory reads, or alter control flow. The classic target is the secure boot sequence: if a glitch occurs at the exact moment the processor is checking a digital signature, it can skip the verification and load unsigned firmware.
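To make the failure mode concrete, here is a toy simulation of that boot-time check (not the researchers' code; every name is illustrative, and a hash compare stands in for a real signature verification). The point is only that skipping one branch is enough to load unsigned firmware.

```python
import hashlib

def verify_signature(firmware: bytes, expected_digest: bytes) -> bool:
    """Stand-in for a real signature check (a plain hash compare for simplicity)."""
    return hashlib.sha256(firmware).digest() == expected_digest

def boot(firmware: bytes, expected_digest: bytes, glitch_skips_check: bool = False) -> str:
    """Toy boot flow: a glitch that skips the verify branch loads anything."""
    if not glitch_skips_check:          # a well-timed voltage dip can corrupt
        if not verify_signature(firmware, expected_digest):  # this very branch
            return "halt: signature fail"
    return "booted"

official = b"official firmware"
digest = hashlib.sha256(official).digest()

print(boot(official, digest))                                      # booted
print(boot(b"unsigned payload", digest))                           # halt: signature fail
print(boot(b"unsigned payload", digest, glitch_skips_check=True))  # booted
```

In a real attack nothing about the firmware changes; the fault transiently corrupts the comparison or the conditional branch that acts on it.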
What makes this breakthrough so significant is the level of autonomy Claude Code demonstrated. The researchers provided the AI with a high-level goal: 'Bypass the secure boot on this STM32 microcontroller using voltage glitching.' Claude Code then:
1. Proposed glitching parameters: It selected a voltage dip depth (e.g., 1.2V to 0.8V), duration (e.g., 50ns), and timing offset relative to the boot sequence start.
2. Wrote the FPGA glitch logic: The attack required a fast-switching power supply controlled by an FPGA. Claude Code generated the Verilog which, once synthesized to a bitstream, drove the FPGA to produce the precise glitch waveform.
3. Implemented the host-side control script: It wrote a Python script that communicated with the FPGA, triggered the glitch at the right moment, and monitored the device's output.
4. Iterated after failure: The first attempt failed. Claude Code analyzed the serial output (which showed a 'signature fail' message), adjusted the timing offset by 15ns, and succeeded on the second attempt.
This workflow mirrors what a human hardware hacker would do, but at machine speed. The key enabler is the model's ability to reason about timing diagrams, voltage levels, and FPGA logic—domains that are far removed from typical coding tasks. The underlying architecture is likely a combination of Claude's large language model (LLM) core, which provides general reasoning, and a code execution sandbox that allows the agent to test and debug in real time. The researchers used the Claude Code CLI tool, which provides a REPL-like interface where the model can run commands, read outputs, and modify its approach.
A relevant open-source project in this space is ChipWhisperer (GitHub: newaetech/chipwhisperer, 3.2k stars), a toolchain for side-channel analysis and fault injection. While Claude Code did not use ChipWhisperer directly, the attack methodology closely parallels ChipWhisperer's fault-injection workflow. Another is PicoGlitcher (GitHub: robertguetzkow/pico-glitcher, 500+ stars), a low-cost voltage glitching platform. The fact that Claude Code could generate the FPGA configuration from scratch suggests that the model has internalized the principles of digital logic design, not just pattern-matched code snippets.
| Attack Stage | Human Expert Time | Claude Code Time | Key Difference |
|---|---|---|---|
| Parameter proposal | 2-4 hours (trial & error) | 2 minutes | AI uses internal knowledge of chip behavior |
| FPGA glitch logic (Verilog) | 4-8 hours (Verilog debugging) | 5 minutes | AI generates synthesizable code directly |
| Host script writing | 1-2 hours | 30 seconds | AI handles serial/GPIO protocols |
| First failure debugging | 1-3 hours (oscilloscope analysis) | 10 seconds (log analysis) | AI reads serial output instantly |
| Total time to successful attack | 8-17 hours | ~8 minutes | ~60-130x speedup |
Data Takeaway: The table shows a dramatic compression of the attack development timeline. The most impressive gain is in debugging: where a human would need to probe with an oscilloscope, the AI can parse textual logs and adjust parameters in seconds. This suggests that the bottleneck for AI-driven hardware attacks is no longer the cognitive work, but the physical setup (e.g., connecting the FPGA to the target).
Key Players & Case Studies
This research was conducted by a team that has not been publicly named, but the methodology is directly tied to Anthropic's Claude Code tool. Anthropic has positioned Claude as a 'safe' AI assistant, but this demonstration reveals a fundamental tension: the same capabilities that make Claude useful for debugging firmware can be weaponized.
The target device was an STM32F4 microcontroller, a common part used in IoT devices, medical equipment, and automotive systems. The secure boot implementation was the standard STM32 ROM bootloader, which checks a signature on the first sector of flash. This is a well-known target in the hardware hacking community—it has been attacked before using ChipWhisperer and other tools—but never by an AI.
Other notable players in the hardware security AI space include:
- Google's Project Zero: While focused on software, they have recently published research on using LLMs to find memory corruption bugs. The jump to hardware is a natural extension.
- MIT's CSAIL: Researchers have used reinforcement learning to optimize fault injection parameters, but their work required custom simulators and thousands of trials. Claude Code did it with a single prompt.
- Riscure: A commercial hardware security testing firm that uses automated fault injection tools. Their tools are expensive (€50k+ per license) and require expert operators. Claude Code could democratize this capability.
| Entity | Approach | Cost | Accessibility | Time to First Attack |
|---|---|---|---|---|
| Human expert (Riscure) | Manual trial & error | €50k+ (equipment) | Low (requires years of training) | 8-17 hours |
| RL-based AI (MIT CSAIL) | Simulated training | High (compute + simulator) | Low (needs custom setup) | Days |
| Claude Code (this work) | Zero-shot generation | $0.03/query | High (API key + $50 FPGA) | 8 minutes |
Data Takeaway: The Claude Code approach is orders of magnitude cheaper and faster than existing methods. The hardware cost—a $50 FPGA board and a $20 microcontroller—is trivial. This democratization is the core threat: any actor with an API key can now attempt hardware attacks that were previously the domain of well-funded labs.
Industry Impact & Market Dynamics
The immediate impact is on the embedded security market, valued at $5.2 billion in 2024 and projected to grow to $9.8 billion by 2030 (CAGR 11.2%). This growth is driven by IoT adoption, but the security solutions—secure boot, hardware security modules (HSMs), trusted platform modules (TPMs)—are all designed to resist human attackers. An AI attacker that can iterate at machine speed changes the threat model.
Consider the automotive sector: modern cars contain 100+ ECUs, many protected by secure boot. A human attacker might take weeks to find a single vulnerability. An AI agent, given access to a test bench, could scan all ECUs in hours. The same applies to medical devices (pacemakers, insulin pumps), industrial controllers (PLCs), and consumer IoT (smart locks, cameras).
The market for automated hardware security testing is nascent but poised for disruption. Companies like Riscure and Brightsight offer manual testing services at $200-500/hour. An AI-powered tool that can run 24/7 and cost pennies per attack could undercut them by 100x. However, the same technology could be used by malicious actors to find zero-day hardware vulnerabilities at scale.
| Market Segment | Current Security Spend | AI Attack Risk Level | Potential Disruption |
|---|---|---|---|
| Automotive ECUs | $1.2B | High | AI could find vulnerabilities in all models within days |
| Medical IoT | $800M | Critical | Life-critical devices become testable by script kiddies |
| Smart Home | $600M | Medium | Low-cost devices become trivial to hack |
| Industrial Control | $1.5B | High | Nation-state actors gain automated attack capability |
Data Takeaway: The automotive and industrial sectors are most at risk due to the high number of embedded devices and the criticality of secure boot. The medical sector faces the most severe consequences: a vulnerability in a pacemaker could be exploited by an AI in minutes, with no human oversight.
Risks, Limitations & Open Questions
The most immediate risk is the weaponization of this capability. While the researchers used Claude Code for legitimate security research, there is no technical barrier preventing a malicious actor from doing the same. The attack requires physical access to the device, but for many IoT devices (e.g., smart meters, security cameras), physical access is feasible.
A key limitation is that Claude Code still requires a human to set up the hardware—connecting the FPGA, power supply, and target device. This is not yet a 'push-button' attack. However, as robotics and AI converge, it is easy to imagine a future where a robotic arm, guided by an LLM, connects the probes automatically.
Another open question is the reproducibility of this result. The researchers used a specific version of Claude (likely Claude 3.5 Sonnet or Claude 4 Opus) and a specific target. Would the same approach work on a different microcontroller (e.g., an ARM Cortex-A series) or a different secure boot implementation (e.g., UEFI Secure Boot)? The underlying principles are general, but the specifics of timing and voltage levels vary by chip.
Ethically, this research raises questions about responsible disclosure. The researchers have not published the exact prompts or the generated code, but the methodology is described in enough detail that others could replicate it. Anthropic's terms of service prohibit using Claude for 'malicious code generation,' but the line between security research and malicious activity is blurry.
AINews Verdict & Predictions
Verdict: This is a watershed moment. We have crossed the Rubicon from AI as a software tool to AI as a physical-world attacker. The implications are not theoretical—they are demonstrated in hardware.
Prediction 1: Within 12 months, we will see the first automated hardware vulnerability scanner powered by an LLM. This will be a commercial product, likely from a startup, that combines an FPGA-based glitcher with an AI agent that can target any embedded device. The price point will be under $1,000, making it accessible to small security firms and hobbyists.
Prediction 2: Hardware security companies will pivot to AI-resistant designs. Expect to see 'glitch detectors' that monitor voltage rails and reset the chip if a glitch is detected, as well as 'AI-hardened' secure boot implementations that use randomized timing or redundant signature checks.
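One of those hardening ideas, the redundant check with randomized timing, can be sketched briefly. This is illustrative only: real countermeasures are implemented in C/assembly on-device, often alongside hardware voltage monitors, and the function and parameters here are assumptions, not a production design.

```python
import hashlib, hmac, secrets, time

def hardened_verify(firmware: bytes, expected_digest: bytes) -> bool:
    """Sketch of a glitch-resistant boot check: a constant-time comparison,
    performed twice, with a randomized delay in between so a single glitch
    cannot defeat both checks at once."""
    first = hmac.compare_digest(hashlib.sha256(firmware).digest(), expected_digest)
    time.sleep(secrets.randbelow(1000) / 1_000_000)  # randomized 0-1 ms delay
    second = hmac.compare_digest(hashlib.sha256(firmware).digest(), expected_digest)
    return first and second  # both independent checks must agree

official = b"official firmware"
print(hardened_verify(official, hashlib.sha256(official).digest()))           # True
print(hardened_verify(b"unsigned payload", hashlib.sha256(official).digest()))  # False
```

The randomized delay is the AI-relevant part: it denies an attacker, human or agent, a fixed timing offset to converge on, forcing each glitch attempt to re-search the window.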
Prediction 3: The alignment community will be forced to confront physical-world risks. Current AI safety research focuses on text and code. This demonstration shows that an AI can cause physical harm (e.g., disabling a medical device) through code alone. Expect new guidelines for 'hardware-aware' red teaming.
Prediction 4: Anthropic will face pressure to restrict Claude Code's capabilities. The company will likely add a 'hardware attack' filter that blocks prompts related to voltage glitching, FPGA configuration, or secure boot bypass. But this is a cat-and-mouse game—open-source models (e.g., Llama 3, Mistral) will not have such restrictions.
What to watch next: The open-source community. If a repository appears on GitHub that combines a $50 FPGA board with a Claude API wrapper to automate voltage glitching, the genie is truly out of the bottle. We will be watching.