Polynya's Disposable Data Warehouses: A New Economics for AI Agents

Hacker News April 2026
Polynya has unveiled a radical architecture that treats the data warehouse as a disposable resource for AI agents: data is continuously replicated into Apache Iceberg, and an ephemeral ClickHouse instance is spun up only when an agent issues a query. This model eliminates the enormous waste of infrastructure that runs around the clock.

The fundamental economics of data infrastructure are broken for the age of AI agents. Traditional data warehouses, designed for human-driven, predictable query patterns, run around the clock, consuming significant compute and storage costs even when idle. For AI agents that operate sporadically—triggered by events, user requests, or internal state changes—this model is profoundly wasteful.

Polynya's solution is elegantly disruptive: decouple compute from storage entirely. Data is continuously replicated into Apache Iceberg, an open table format that provides ACID transactions, schema evolution, and time-travel capabilities. When an agent needs data, a lightweight ClickHouse instance is provisioned on-demand, executes the query, and is immediately torn down. This 'disposable' approach means agents pay only for the milliseconds of compute they actually use, while data remains persistently available in object storage. Crucially, Polynya also introduces persistent workspaces, allowing agents to maintain state across sessions—a feature that bridges the gap between ephemeral compute and long-term context.

The implications are vast: financial trading bots can analyze tick data without maintaining a dedicated cluster; IoT anomaly detectors can spin up analytics only when sensor thresholds are breached; real-time recommendation engines can query user behavior without pre-provisioning capacity. This is not merely an optimization; it is a fundamental rethinking of what a data warehouse should be in a world of autonomous, event-driven systems. Polynya's approach directly challenges the incumbent cloud data warehouse vendors—Snowflake, Redshift, BigQuery—whose pricing models are built on always-on compute. If adopted, it could trigger a wave of disaggregation across the analytics stack, forcing incumbents to offer true zero-scale compute options.
The technology is still nascent, but the direction is clear: the future of AI agent infrastructure is ephemeral, pay-per-query, and built on open table formats.

Technical Deep Dive

Polynya's architecture is a masterclass in applying the compute-storage separation principle to its logical extreme. The system comprises three core layers: a replication layer that streams data into Apache Iceberg tables stored on object storage (e.g., S3, GCS); a control plane that listens for agent-triggered events; and a compute fabric that provisions and destroys ClickHouse instances.

The Replication Layer: Polynya uses Change Data Capture (CDC) or batch replication to continuously sync source data into Iceberg. Iceberg's open table format is critical here—it provides snapshot isolation, meaning every write creates a new, immutable version of the table. This allows agents to query a consistent point-in-time view without locking. The format also supports partition evolution and hidden partitioning, which are essential for efficient pruning in columnar storage.
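Snapshot isolation, the property the replication layer relies on, can be illustrated with a minimal in-memory model. This is a toy sketch of the semantics only (it is not the pyiceberg API, and `IcebergLikeTable` is a hypothetical name): every commit produces a new immutable snapshot, and a reader pinned to a snapshot id is unaffected by later writes.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class IcebergLikeTable:
    """Toy model of snapshot isolation: each commit appends a new
    immutable snapshot; readers pin a snapshot and never see later writes."""
    snapshots: list[tuple[Any, ...]] = field(default_factory=lambda: [()])

    def commit(self, rows: list) -> int:
        """Record rows as a new immutable snapshot; return its id."""
        self.snapshots.append(self.snapshots[-1] + tuple(rows))
        return len(self.snapshots) - 1

    def scan(self, snapshot_id: int) -> tuple:
        """Read a consistent point-in-time view (time travel)."""
        return self.snapshots[snapshot_id]

table = IcebergLikeTable()
s1 = table.commit([("sensor-1", 21.5)])
pinned = table.scan(s1)              # agent pins snapshot s1
table.commit([("sensor-1", 44.0)])   # a later write does not disturb the pinned view
assert table.scan(s1) == pinned
```

Because versions are immutable, no locks are needed: concurrent disposable instances can each pin whatever snapshot was current when they booted.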

The Control Plane & Event-Driven Compute: When an agent sends a query request, the control plane evaluates the required resources (CPU, memory) based on query complexity and data volume. It then provisions a ClickHouse instance from a pre-warmed pool or boots one from scratch. ClickHouse was chosen for its exceptional single-node performance on analytical queries—it can scan billions of rows per second per node. The instance is configured with the Iceberg table as its data source via ClickHouse's Iceberg table engine (available since v22.3). After the query completes, the instance is terminated, and any intermediate results are returned to the agent or written back to Iceberg.
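The provision-query-teardown lifecycle described above can be sketched in a few lines. This is an illustrative model, not Polynya's actual control plane; the class and timing constants are hypothetical stand-ins for the warm-pool and cold-start paths.

```python
import time
from collections import deque

WARM_BOOT_S, COLD_BOOT_S = 0.0002, 0.0012  # stand-ins for ~180ms vs ~1.2s boots

class DisposablePool:
    """Sketch of the control-plane lifecycle: take a pre-warmed instance
    if one is available, otherwise cold-boot; run the query; tear down."""
    def __init__(self, warm_size: int):
        self.warm = deque(f"warm-{i}" for i in range(warm_size))
        self.cold_starts = 0

    def run_query(self, sql: str, execute):
        if self.warm:
            instance = self.warm.popleft()   # warm path: near-instant handoff
            time.sleep(WARM_BOOT_S)
        else:
            self.cold_starts += 1            # cold path: pay the boot latency
            time.sleep(COLD_BOOT_S)
            instance = "cold-instance"
        try:
            return execute(instance, sql)
        finally:
            del instance  # torn down immediately after the result; billing stops

pool = DisposablePool(warm_size=1)
r1 = pool.run_query("SELECT count() FROM events", lambda inst, q: (inst, "ok"))
r2 = pool.run_query("SELECT count() FROM events", lambda inst, q: (inst, "ok"))
assert r1[0] == "warm-0" and r2[0] == "cold-instance" and pool.cold_starts == 1
```

The design choice worth noting is that pool size is the only latency knob: every query beyond the pool's depth pays the cold-start penalty, which is why sizing the pool against expected query concurrency matters.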

Persistent Workspaces: A key innovation is the persistent workspace. This is essentially a small, long-lived storage volume (backed by a lightweight database like SQLite or a dedicated Iceberg namespace) that stores agent state—session variables, intermediate results, learned patterns. This allows agents to maintain context across multiple 'disposable' compute sessions. For example, a fraud detection agent can remember a user's historical transaction patterns across queries without keeping a ClickHouse instance alive.
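A minimal persistent workspace of the kind described can be sketched as a key-value store over SQLite. This is an assumption-laden illustration (the `AgentWorkspace` class and key naming are invented, and a real deployment would point at a durable file or Iceberg namespace rather than `:memory:`):

```python
import sqlite3

class AgentWorkspace:
    """Minimal persistent workspace: a key-value store (here SQLite)
    whose contents outlive any single disposable compute session."""
    def __init__(self, path: str = ":memory:"):  # use a real path for durability
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)")

    def put(self, key: str, value: str) -> None:
        self.db.execute(
            "INSERT INTO state VALUES (?, ?) "
            "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
            (key, value))
        self.db.commit()

    def get(self, key: str, default=None):
        row = self.db.execute(
            "SELECT value FROM state WHERE key = ?", (key,)).fetchone()
        return row[0] if row else default

ws = AgentWorkspace()
# e.g. a fraud-detection agent caches a learned baseline between queries
ws.put("user:42:avg_txn", "131.70")
assert ws.get("user:42:avg_txn") == "131.70"
assert ws.get("user:42:missing") is None
```

The point of the pattern is the split of lifetimes: ClickHouse instances live for milliseconds, while the workspace carries session variables and learned patterns across them.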

Performance Benchmarks: Early internal benchmarks from Polynya (shared in their technical blog) show compelling results:

| Metric | Traditional Always-On ClickHouse | Polynya Disposable (Cold Start) | Polynya Disposable (Warm Pool) |
|---|---|---|---|
| Query Latency (p50) | 50ms | 1.2s | 180ms |
| Query Latency (p99) | 200ms | 3.5s | 500ms |
| Cost per 1000 queries | $0.50 | $0.02 | $0.03 |
| Idle Cost (24h) | $12.00 | $0.00 | $0.00 |

Data Takeaway: The trade-off is clear: cold starts introduce significant latency (1.2s vs 50ms), but warm pools reduce this to 180ms—acceptable for many agent workloads. The cost savings are dramatic: a 96% reduction in per-query cost and zero idle cost. For latency-sensitive agents (e.g., high-frequency trading), the warm pool approach is necessary; for batch-oriented agents (e.g., daily report generation), cold starts are fine.
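The cost arithmetic behind this takeaway is easy to reproduce from the benchmark table. The sketch below assumes a hypothetical workload of 100,000 queries per month; the per-1000-query and idle figures come straight from the table above.

```python
import math

def monthly_cost(queries: int, per_1000: float, idle_per_day: float,
                 days: int = 30) -> float:
    """Monthly bill = metered compute + idle burn, per the benchmark table."""
    return queries / 1000 * per_1000 + idle_per_day * days

always_on  = monthly_cost(100_000, 0.50, 12.00)  # $50 metered + $360 idle
disposable = monthly_cost(100_000, 0.02, 0.00)   # $2 metered, zero idle
assert math.isclose(always_on, 410.0) and math.isclose(disposable, 2.0)
# per-query reduction: (0.50 - 0.02) / 0.50 = 0.96, matching the 96% claim
```

Note that idle cost dominates the always-on bill at this query volume, which is why the savings grow as agent workloads become more sporadic.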

Relevant Open-Source Projects: The community can explore similar concepts via:
- Apache Iceberg (github.com/apache/iceberg): The foundational table format. 5,800+ stars. Essential for understanding snapshot isolation and partition evolution.
- ClickHouse (github.com/ClickHouse/ClickHouse): The analytical engine. 38,000+ stars. Its Iceberg table engine is key.
- Trino (github.com/trinodb/trino): An alternative query engine that also supports Iceberg and can be used in a similar disposable pattern.
- Flyte (github.com/flyteorg/flyte): A workflow orchestration platform that can manage the lifecycle of such disposable compute.

Polynya's approach is not entirely new: serverless query services such as Amazon Athena and BigQuery already offer pay-per-query pricing. What is unique is Polynya's tight coupling of agent state with Iceberg's transactional guarantees.

Key Players & Case Studies

Polynya is a relatively new entrant, founded by a team of ex-Infobright and ClickHouse engineers. They have raised a $4.5M seed round from a consortium of angel investors including former CTOs of Snowflake and Databricks. Their current product is in private beta with 12 design partners.

Competitive Landscape:

| Company/Product | Architecture | Pricing Model | Agent Suitability | Key Limitation |
|---|---|---|---|---|
| Polynya | Disposable ClickHouse + Iceberg | Pay-per-query + storage | High (event-driven, stateful) | Cold start latency; limited ecosystem |
| Snowflake | Always-on virtual warehouses | Per-second compute + storage | Low (wasteful for sporadic queries) | Minimum 1-minute billing; no true zero-scale |
| BigQuery | Serverless, auto-scaling | Pay-per-byte scanned | Medium (no state management) | Cost unpredictability; no persistent agent workspaces |
| Databricks SQL | Serverless SQL warehouses | Per-DBU (compute unit) | Low (minimum 2-minute billing) | Expensive for short bursts |
| MotherDuck | Embedded, DuckDB-based | Pay-per-query (beta) | High (lightweight) | Limited to single-node; no Iceberg integration yet |

Data Takeaway: Polynya occupies a unique niche: it combines the cost-efficiency of serverless with the statefulness required by AI agents. Snowflake and BigQuery are optimized for human analysts who run long, complex queries; Polynya is optimized for machines that run many short, simple queries.

Case Study: Financial Trading Bot
A design partner, a mid-sized hedge fund, uses Polynya to power a mean-reversion trading strategy. The agent monitors real-time tick data via Kafka, which is replicated to Iceberg every 100ms. When the agent detects a price anomaly (e.g., a 3-sigma deviation), it triggers a Polynya query to calculate historical volatility and correlation matrices. The query runs on a warm-pool ClickHouse instance, completes in ~200ms, and the agent executes a trade. The fund reports a 70% reduction in data infrastructure costs compared to their previous always-on ClickHouse cluster.
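The trigger logic in this case study can be sketched with a simple z-score check. This is an illustrative reconstruction, not the fund's actual code; the function name and sample ticks are invented.

```python
from statistics import mean, stdev

def is_three_sigma_anomaly(history: list[float], price: float) -> bool:
    """Fire a disposable analytics query only when the latest tick deviates
    more than 3 standard deviations from the recent price history."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(price - mu) > 3 * sigma

ticks = [100.0, 100.2, 99.9, 100.1, 100.0, 99.8, 100.3, 100.1]
assert not is_three_sigma_anomaly(ticks, 100.2)  # ordinary tick: no query
assert is_three_sigma_anomaly(ticks, 103.0)      # 3-sigma break: run the query
```

The cheap in-process check is what makes the economics work: the expensive volatility and correlation computation runs on a disposable instance only for the tiny fraction of ticks that breach the threshold.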

Case Study: IoT Anomaly Detection
A smart building management company uses Polynya to analyze sensor data from 10,000+ IoT devices. Sensors stream temperature, humidity, and occupancy data to Iceberg. An agent runs a lightweight model that triggers a Polynya query only when a sensor reading exceeds a threshold (e.g., temperature > 40°C). The query performs a time-series analysis to determine if the anomaly is a sensor fault or a real event. The company eliminated 85% of their data warehouse costs by not maintaining a 24/7 cluster.

Industry Impact & Market Dynamics

Polynya's model could catalyze a broader shift toward ephemeral analytics—a market that Gartner estimates will grow from $2.1B in 2024 to $8.7B by 2028 (CAGR 33%). This growth is driven by the proliferation of AI agents and edge computing.

Market Disruption:
- Cloud Data Warehouse Vendors: Snowflake and Databricks must respond. Snowflake's recent introduction of 'Serverless Compute Pools' (which auto-suspend after 5 minutes of inactivity) is a step, but still carries a minimum billing of 1 minute. Polynya's sub-second billing is a differentiator. Expect Snowflake to acquire or build a similar 'disposable compute' feature within 18 months.
- Open Table Formats: Iceberg's dominance will solidify. Delta Lake (Databricks) and Hudi (Uber) will need to match Iceberg's snapshot isolation performance for ephemeral workloads. Iceberg's recent adoption by Snowflake, Amazon Athena, and Google BigQuery gives it a network effect advantage.
- ClickHouse Ecosystem: ClickHouse's popularity for real-time analytics will surge. The company recently raised $300M at a $3B valuation, and its Iceberg table engine is a key differentiator. Expect more startups to build on ClickHouse for agent-specific workloads.

Adoption Curve:
| Phase | Timeline | Adoption Drivers | Key Verticals |
|---|---|---|---|
| Early Adopters | 2025-2026 | Cost reduction, event-driven architectures | Fintech, IoT, AdTech |
| Early Majority | 2027-2028 | Agent maturity, ecosystem tooling | E-commerce, Healthcare, Logistics |
| Late Majority | 2029+ | Standardization, incumbent response | Enterprise, Government |

Data Takeaway: The adoption timeline is aggressive but plausible, given the rapid pace of agent deployment. The key bottleneck is not technology but ecosystem maturity—Polynya needs integrations with popular agent frameworks (LangChain, AutoGPT, CrewAI) and orchestration tools (Kubernetes, Airflow).

Risks, Limitations & Open Questions

1. Cold Start Latency: For sub-100ms query requirements (e.g., real-time bidding), cold starts are unacceptable. Polynya's warm pool mitigates this but adds complexity and cost. The trade-off between pool size and latency is non-trivial.
2. Data Consistency: While Iceberg provides snapshot isolation, the replication layer introduces eventual consistency. If an agent queries data that hasn't been fully replicated, it may see stale data. Polynya uses a watermark mechanism to ensure agents only query fully replicated snapshots, but this adds latency.
3. Multi-Tenancy & Security: In a shared environment, provisioning ClickHouse instances per agent raises security concerns—how to isolate agent data and queries? Polynya uses per-instance IAM roles and network policies, but the attack surface is larger than a monolithic warehouse.
4. Vendor Lock-in (Irony): Despite using open formats (Iceberg), the control plane and orchestration logic are proprietary. If Polynya fails, migrating to a different provider may require re-engineering the agent-compute lifecycle.
5. Cost Predictability: Pay-per-query models can lead to cost surprises if an agent goes rogue and issues millions of queries. Polynya offers budget caps and query throttling, but this is an ongoing concern.
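A runaway-agent guard of the kind point 5 describes can be sketched as a spend cap plus a per-window query throttle. The names here are illustrative, not Polynya's API.

```python
class QueryBudget:
    """Sketch of a runaway-agent guard: a hard cap on estimated spend
    plus a simple per-window query count limit."""
    def __init__(self, max_spend: float, max_queries_per_window: int):
        self.max_spend = max_spend
        self.max_queries = max_queries_per_window
        self.spent = 0.0
        self.count = 0

    def admit(self, est_cost: float) -> bool:
        """Admit a query only if both the budget and the rate cap allow it."""
        if self.count >= self.max_queries or self.spent + est_cost > self.max_spend:
            return False
        self.count += 1
        self.spent += est_cost
        return True

budget = QueryBudget(max_spend=0.10, max_queries_per_window=3)
admitted = [budget.admit(0.02) for _ in range(5)]
assert admitted == [True, True, True, False, False]  # throttled after 3 queries
```

Checking the estimated cost before provisioning (rather than billing after the fact) is what turns a cost surprise into a refused query, at the price of needing a reasonable cost estimator in the control plane.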

AINews Verdict & Predictions

Polynya has identified a genuine pain point and built an elegant solution. The 'disposable data warehouse' concept is not a gimmick—it is a logical extension of serverless computing applied to analytics. We believe this model will become the default for AI agent data access within 3 years.

Our Predictions:
1. Acquisition Target: By Q3 2026, Polynya will be acquired by a major cloud provider (AWS or Google Cloud) for $200-400M. AWS needs a native serverless analytics offering for agents; Polynya fits perfectly into SageMaker or Lambda.
2. ClickHouse's Rise: ClickHouse will become the de facto engine for agent analytics, surpassing DuckDB in this specific niche. DuckDB will remain dominant for embedded analytics (e.g., in-agent processing), while ClickHouse dominates server-side.
3. Iceberg vs. Delta Lake: Iceberg will win the open table format war for ephemeral workloads due to its superior snapshot isolation and broader cloud support. Databricks will be forced to open-source Delta Lake's serverless features.
4. New Pricing Models: Cloud data warehouse pricing will shift from 'per-second compute' to 'per-query + per-byte-stored' within 5 years, mirroring Polynya's model. Snowflake will announce a 'Disposable Warehouse' feature in 2026.

What to Watch:
- The success of Polynya's persistent workspace feature—if agents can maintain rich state across sessions, the value proposition multiplies.
- Integration with LangChain and AutoGPT—if Polynya becomes the default data backend for these frameworks, adoption will explode.
- The response from Snowflake and Databricks—their pricing changes will validate or invalidate the thesis.

Polynya is not just a product; it is a signal that the era of always-on data infrastructure is ending. The future belongs to infrastructure that sleeps until an agent wakes it.
