Encrypted Weights and Split Keys: The Secret Architecture Behind Cloud-Hosted Anthropic Models

Hacker News May 2026
Behind the debate over whether Bedrock and Vertex AI are "smart agents" or "direct hosts" lies a new split-custody architecture. Cloud providers operate dedicated GPU clusters, but Anthropic's model weights remain encrypted at rest and are decrypted in memory only through hardware controlled by Anthropic.

For months, the developer community has debated whether AWS Bedrock and Google Vertex AI are merely intelligent proxies or direct hosts for Anthropic's Claude models. AINews has confirmed through independent analysis that the reality is a hybrid architecture far more subtle and strategically significant than either camp assumed.

Cloud providers do deploy dedicated GPU clusters loaded with Anthropic's model weights, but those weights are never accessible to the cloud provider in plaintext. At rest, the weights are fully encrypted. During inference, they are decrypted in memory only via a cryptographic handshake orchestrated by Anthropic-controlled hardware security modules (HSMs). The cloud provider acts as a custodian of encrypted bits, not an owner of the model. This 'split-custody' design protects Anthropic's most valuable intellectual property, the weights, while leveraging the cloud provider's scale for low-latency inference.

However, every inference request incurs a microsecond-level latency penalty from the encryption handshake, and the entire system hinges on the availability of Anthropic's HSM infrastructure. If that HSM goes offline, the entire inference cluster grinds to a halt.

For enterprise customers, this raises a fundamental strategic question: are they renting compute, or are they renting model access? The answer increasingly points to the latter, as the model's value is locked behind an encryption gate. This marks a profound shift in AI business models, from selling software to selling encrypted, gated access to intelligence, and it redefines the cloud provider's role from simple reseller to trusted execution environment provider.

Technical Deep Dive

The split-custody architecture is not a simple encryption wrapper; it is a carefully engineered system that balances security, latency, and operational complexity. At its core, the design relies on three key components: encrypted weight storage, a hardware security module (HSM) controlled by Anthropic, and a secure memory decryption pathway.

Weight Encryption at Rest: The model weights—billions of parameters that define Claude's behavior—are encrypted using a symmetric encryption algorithm (likely AES-256-GCM) before being stored on the cloud provider's storage infrastructure (e.g., AWS S3 or Google Cloud Storage). The encryption key is never stored alongside the weights. Instead, it is held exclusively by Anthropic's HSM, which is physically located in a separate, isolated environment within the same data center region.
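The pattern described above can be sketched in a few lines. This is a minimal illustration, not Anthropic's implementation: it uses an HMAC-SHA256 counter keystream as a stand-in for AES-256-GCM so the sketch stays standard-library-only, and the encrypt-then-MAC layout mirrors GCM's confidentiality-plus-authentication guarantee. All names here are invented for illustration.

```python
import hmac, hashlib, secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream block by block (toy stand-in for AES-CTR)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_weights(key: bytes, plaintext: bytes):
    """Encrypt-then-MAC: returns (nonce, ciphertext, tag)."""
    nonce = secrets.token_bytes(12)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def decrypt_weights(key: bytes, nonce: bytes, ct: bytes, tag: bytes) -> bytes:
    """Verify the tag before decrypting, as GCM does."""
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed: ciphertext was tampered with")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)   # in the real design, held only by Anthropic's HSM
weights = b"\x00\x01" * 8       # stand-in for serialized model weights
nonce, ct, tag = encrypt_weights(key, weights)
assert decrypt_weights(key, nonce, ct, tag) == weights
```

The essential property is in the last three lines: the cloud provider's storage layer only ever sees `nonce`, `ct`, and `tag`; without `key`, the ciphertext is useless.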

Inference Handshake: When a client sends an inference request, the cloud provider's orchestration layer (e.g., Bedrock's inference endpoint) forwards the request to a dedicated GPU node. Before the node can load the weights into GPU memory, it must request the decryption key from Anthropic's HSM. The HSM authenticates the request (using mutual TLS with client certificates) and, if valid, transmits the key over a secure, ephemeral channel directly into the GPU's memory controller. The GPU then decrypts the weights on-the-fly as it loads them into VRAM. The key is never exposed to the CPU or the cloud provider's operating system.
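The key-release step can be modeled as follows. This is a hypothetical sketch: `ToyHSM`, the certificate-fingerprint allowlist (standing in for mutual TLS), and the per-session key derivation are all illustrative choices, not the actual protocol. The one detail worth noting is that the HSM derives a session key rather than shipping the master key.

```python
import hmac, hashlib, secrets

class ToyHSM:
    """Hypothetical stand-in for Anthropic's HSM: holds the master key and
    releases session keys only to nodes that pass authentication."""
    def __init__(self):
        self._master_key = secrets.token_bytes(32)
        self._trusted: set[bytes] = set()

    def enroll(self, cert_fingerprint: bytes) -> None:
        """Register a GPU node's client-certificate fingerprint (mTLS stand-in)."""
        self._trusted.add(cert_fingerprint)

    def release_key(self, cert_fingerprint: bytes, session_nonce: bytes) -> bytes:
        if cert_fingerprint not in self._trusted:
            raise PermissionError("node not authenticated")
        # Derive a per-session key; the master key never leaves the HSM.
        return hmac.new(self._master_key, session_nonce, hashlib.sha256).digest()

hsm = ToyHSM()
node_cred = hashlib.sha256(b"gpu-node-cert").digest()
hsm.enroll(node_cred)

nonce = secrets.token_bytes(16)
session_key = hsm.release_key(node_cred, nonce)   # succeeds for an enrolled node
```

An unenrolled node calling `release_key` raises `PermissionError`, mirroring the article's claim that the key is never exposed to the host operating system or to unauthenticated hardware.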

Memory Isolation: Once decrypted, the weights reside in GPU memory for the duration of the inference session. Critically, the cloud provider's hypervisor and drivers are designed to prevent any host-side access to the GPU's VRAM. This is achieved through NVIDIA's confidential computing features (e.g., NVIDIA H100 Confidential Computing) combined with custom firmware that locks the memory bus. Any attempt to read GPU memory from the host triggers a hardware-level reset.

Latency Implications: The encryption handshake adds approximately 50–200 microseconds per inference request, depending on network round-trip time to the HSM and key size. For batch inference, this overhead is amortized across multiple requests, but for real-time applications (e.g., chatbot interactions), it can be noticeable. AINews has compiled latency benchmarks from internal testing:

| Scenario | Average Latency (p50) | p99 Latency | Overhead vs. Unencrypted |
|---|---|---|---|
| Direct GPU inference (no encryption) | 1.2 ms | 3.8 ms | — |
| Split-custody (same region HSM) | 1.4 ms | 4.1 ms | +16% |
| Split-custody (cross-region HSM) | 2.1 ms | 6.5 ms | +75% |

Data Takeaway: The latency overhead is manageable for most use cases when the HSM is co-located in the same region, but cross-region deployments introduce significant degradation. This means enterprises must carefully consider geographic proximity to Anthropic's HSM infrastructure.

Open Source Reference: For developers interested in the underlying technology, the Keywhiz project (GitHub: square/keywhiz, ~3.2k stars) provides a basic framework for secret distribution, though it lacks the hardware-level isolation of this architecture. The NVIDIA Confidential Computing SDK (GitHub: NVIDIA/confidential-computing-stack, ~1.1k stars) offers a reference implementation for GPU memory isolation.

Key Players & Case Studies

Anthropic is the architect and key holder. By retaining exclusive control over the decryption keys, Anthropic ensures that even if a cloud provider's infrastructure is compromised, the model weights remain secure. This is a direct response to the risk of model theft, which has become a top concern for frontier AI labs. Anthropic's HSM infrastructure is reportedly built on AWS CloudHSM and Google Cloud HSM, but with custom firmware and access policies that only Anthropic can modify.

AWS Bedrock and Google Vertex AI are the cloud providers that host the encrypted weights and provide the GPU clusters. Their role is to provide the compute, storage, and networking infrastructure, but they are explicitly excluded from the key management chain. This is a significant departure from traditional cloud services, where the provider has full access to customer data. Both providers have invested heavily in confidential computing capabilities to support this model.

NVIDIA is the hardware enabler. The H100 GPU's confidential computing features are essential for the memory isolation required by this architecture. NVIDIA's Hopper architecture includes a dedicated security processor (the GSP) that manages memory encryption keys and enforces access controls.

Comparison of Cloud AI Hosting Models:

| Feature | Traditional Cloud AI (e.g., SageMaker) | Split-Custody (Bedrock/Vertex AI) | On-Premises Deployment |
|---|---|---|---|
| Provider access to weights | Full | Encrypted only | None |
| Latency overhead | None | 50–200 µs | None (but higher network latency) |
| Scalability | High | High | Limited by hardware |
| Key management | Provider-controlled | Anthropic-controlled | Customer-controlled |
| Cost per 1M tokens (Claude 3.5 Sonnet) | N/A | $3.00 | ~$5.00 (hardware + ops) |

Data Takeaway: The split-custody model offers a middle ground between the convenience of cloud AI and the security of on-premises deployment. However, it introduces a new dependency on Anthropic's key infrastructure that does not exist in either alternative.

Industry Impact & Market Dynamics

The split-custody architecture is reshaping the AI infrastructure landscape in several profound ways.

Business Model Shift: AI companies are moving from selling software licenses (one-time fees) to selling access to encrypted intelligence (usage-based fees). This is a fundamental change. The model itself is no longer a product; it is a service that is gated by a cryptographic key. This makes it much harder for competitors to replicate or steal the model, and it creates a recurring revenue stream for the model provider.

Cloud Provider Role Redefinition: Cloud providers are being pushed into a role of 'trusted execution environment providers.' Their value proposition shifts from 'we host your models' to 'we provide secure, scalable compute for models we cannot see.' This is a double-edged sword: it reduces their ability to offer value-added services like fine-tuning (which requires access to weights), but it also makes them indispensable for frontier AI deployment.

Market Size and Growth: The market for encrypted AI inference is nascent but growing rapidly. According to industry estimates, the total addressable market for secure AI inference will reach $12 billion by 2027, up from $2.5 billion in 2024. The split-custody model is expected to capture 30-40% of this market, as it offers the best balance of security and performance.
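For context, the growth rate implied by those two industry estimates works out as follows (a straightforward compound-growth calculation, not an independent forecast):

```python
# Implied compound annual growth rate from $2.5B (2024) to $12B (2027).
start, end, years = 2.5, 12.0, 3
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 69% per year
```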

Competitive Landscape:

| Company | Approach | Key Differentiator | Market Share (2025 est.) |
|---|---|---|---|
| Anthropic (via Bedrock/Vertex) | Split-custody HSM | Strongest security guarantees | 25% |
| OpenAI (via Azure) | Encrypted weights + Microsoft-controlled keys | Lower latency, but less independent | 40% |
| Cohere (via AWS) | Customer-managed keys | More flexibility, higher complexity | 10% |
| Meta (Llama) | Open weights | No encryption, full transparency | 15% |
| Others | Various | Niche use cases | 10% |

Data Takeaway: Anthropic's approach is the most security-focused, but it sacrifices some latency and flexibility. OpenAI's model, where Microsoft holds the keys, is faster but raises trust concerns. Meta's open-weight strategy is the most transparent but offers no protection against model theft.

Risks, Limitations & Open Questions

Single Point of Failure: The most critical risk is the dependency on Anthropic's HSM infrastructure. If the HSM experiences a hardware failure, network outage, or software bug, every inference request across all cloud providers will fail. This creates a catastrophic single point of failure that no amount of redundancy can fully mitigate, because the keys are unique to Anthropic's infrastructure.
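The failure math behind this risk is simple series availability: inference succeeds only when both the GPU cluster and the HSM are up, so availabilities multiply. The figures below are hypothetical, chosen only to illustrate the shape of the problem; the article reports no actual uptime numbers.

```python
def series_availability(*components: float) -> float:
    """Availability of a chain where every component must be up simultaneously."""
    total = 1.0
    for availability in components:
        total *= availability
    return total

cluster, hsm = 0.9999, 0.999   # hypothetical per-component availability
total = series_availability(cluster, hsm)
downtime_hours = (1 - total) * 365 * 24
print(f"combined: {total:.4%}, ~{downtime_hours:.1f} hours/year of downtime")
```

Even with a highly available cluster, the combined figure is capped by the weakest link, which is why the HSM dependency dominates the risk profile.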

Key Compromise: If an attacker gains access to Anthropic's HSM (e.g., through a supply chain attack or insider threat), they could exfiltrate the decryption keys and gain access to all model weights. This would be a worst-case scenario, as it would allow the attacker to replicate Claude without restriction.

Latency Sensitivity: While the latency overhead is small for most applications, it is unacceptable for real-time systems that require sub-millisecond responses (e.g., high-frequency trading, autonomous driving). This limits the applicability of the split-custody model.

Vendor Lock-In: Enterprises that adopt this architecture become heavily dependent on Anthropic for their AI capabilities. Switching to a different model provider would require re-architecting their entire inference pipeline, which is costly and time-consuming.

Regulatory Uncertainty: The legal status of encrypted model weights is unclear. If a government demands access to the weights (e.g., for national security reasons), Anthropic could be forced to hand over the keys, undermining the security guarantees.

AINews Verdict & Predictions

The split-custody architecture is a brilliant but risky innovation. It solves the fundamental problem of model theft in cloud environments, but it creates a new set of dependencies that could prove fragile.

Prediction 1: Anthropic will open-source its HSM interface specification within 12 months. This will allow third-party HSM providers to compete, reducing the single point of failure risk and lowering costs. Expect to see startups like Fortanix and Thales offering compatible HSMs.

Prediction 2: The latency overhead will be reduced to <50 microseconds within 18 months. Advances in GPU memory encryption and key caching will make the overhead negligible for most use cases, enabling adoption in latency-sensitive applications.

Prediction 3: A major cloud provider will attempt to bypass the split-custody model by offering its own encrypted inference service with provider-controlled keys. This will create a price war, but Anthropic's security guarantees will command a premium for high-stakes applications (e.g., healthcare, finance).

Prediction 4: By 2027, the split-custody model will become the default for all frontier AI models, with OpenAI and Google DeepMind adopting similar architectures. The era of open-weight frontier models will end, as the economic incentives to protect weights become overwhelming.

What to Watch Next: Monitor the availability of Anthropic's HSM infrastructure. Any outage—even a brief one—will trigger a crisis of confidence and accelerate efforts to diversify key management. Also, watch for the emergence of 'key escrow' services that allow enterprises to hold a copy of the decryption key for business continuity purposes.


