Berget Code Launches in Europe with Kimi K2.6: A New Era for AI Coding Assistants

Source: Hacker News · Topic: AI programming assistant · Archive: May 2026
Berget AI has officially released Berget Code, an AI programming assistant powered by the Kimi K2.6 model, for the European market. The move marks a new phase of regional competition among AI coding assistants, challenging incumbents such as GitHub Copilot directly on the strength of long-context processing.

Berget AI's launch of Berget Code, powered by Kimi K2.6, is not merely a product update but a strategic pivot into regionalized AI coding assistance. By integrating Kimi's advanced long-context understanding and code reasoning capabilities, Berget aims to deliver a tailored experience for European developers—addressing local coding standards, compliance requirements, and multilingual support. This partnership also underscores a broader industry trend: the decoupling of application layers from foundational models. Berget avoids the immense cost of building its own large language model (LLM), instead selecting the best third-party model to create a competitive moat. If Berget Code can excel in the 'last mile' of debugging, deployment, and seamless integration, it has a real chance to disrupt the European market and emerge as a dark horse in the AI coding tool landscape.

Technical Deep Dive

Berget Code's core engine, Kimi K2.6, is a model specifically optimized for long-context understanding—a critical differentiator in AI-assisted programming. While most coding assistants, including GitHub Copilot (powered by OpenAI's GPT-4o) and Amazon CodeWhisperer, operate within a context window of 128K tokens or less, Kimi K2.6 supports a context window of up to 1 million tokens. This allows Berget Code to ingest entire codebases, including large repositories with thousands of files, and maintain coherent understanding across them.
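To build intuition for what a 1M-token window means in practice, a rough character-count heuristic can estimate whether a repository fits. The 4-characters-per-token figure below is a common rule of thumb for source code, not a measured property of Kimi's tokenizer, and the function is a sketch rather than anything Berget ships:

```python
import os

CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers vary by language and style

def estimate_repo_tokens(root, exts=(".py", ".cpp", ".java", ".ts")):
    """Walk a source tree and estimate its total token count from file sizes."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root, window=1_000_000):
    """True if the estimated repo size fits in a 1M-token context window."""
    return estimate_repo_tokens(root) <= window
```

By this estimate, a 1M-token window corresponds to roughly 4 MB of source text, which is why whole mid-sized repositories, not just open files, can be placed in context.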

Architecture and Capabilities:
- Long-Context Reasoning: Kimi K2.6 employs a sparse attention mechanism combined with a sliding window approach, enabling it to process long sequences without quadratic memory growth. This is particularly beneficial for tasks like refactoring across multiple modules or understanding complex inheritance hierarchies.
- Code-Specific Fine-Tuning: The model has been fine-tuned on a curated dataset of European coding standards (e.g., ISO C++ guidelines, PEP 8 for Python, and Java Code Conventions) and multilingual comments (English, German, French, Spanish, Italian). This reduces the need for developers to manually adapt prompts for local conventions.
- Real-Time Code Completion and Debugging: Berget Code integrates directly into popular IDEs (VS Code and the JetBrains family, including IntelliJ) via a plugin. It offers inline code suggestions, automated bug detection, and natural-language-to-code conversion. Average code-completion latency is under 300ms, comparable to Copilot.
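Kimi's exact attention scheme is not public, but the sliding-window idea mentioned above can be illustrated generically. The sketch below builds a causal sliding-window mask and shows why per-token attention cost is capped at the window size, giving linear rather than quadratic growth:

```python
def sliding_window_mask(seq_len, window):
    """Causal sliding-window attention mask: token i may attend only to
    tokens in [max(0, i - window + 1), i]. Each row has at most `window`
    allowed positions, so attention cost grows linearly with sequence
    length rather than quadratically."""
    return [
        [j <= i and j > i - window for j in range(seq_len)]
        for i in range(seq_len)
    ]

mask = sliding_window_mask(seq_len=8, window=3)
row_budgets = [sum(row) for row in mask]  # capped at `window` after warm-up rows
```

Production systems combine such local masks with a sparse set of global tokens so that distant context (e.g., a class defined 500 files away) remains reachable; this sketch shows only the local component.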

Benchmark Performance:

| Model | Context Window | HumanEval Pass@1 | MBPP Pass@1 | CodeXGLUE (Code Completion) | Latency (per suggestion) |
|---|---|---|---|---|---|
| Kimi K2.6 | 1M tokens | 82.3% | 79.1% | 91.5% | 280ms |
| GPT-4o (Copilot) | 128K tokens | 87.2% | 84.6% | 93.8% | 250ms |
| Claude 3.5 Sonnet | 200K tokens | 84.0% | 81.3% | 92.1% | 310ms |
| Code Llama 70B | 100K tokens | 67.8% | 65.2% | 85.4% | 450ms |

Data Takeaway: While Kimi K2.6 slightly trails GPT-4o on standard benchmarks like HumanEval and MBPP, its massive context window (1M tokens vs. 128K) is a game-changer for large-scale enterprise codebases. The model's 82.3% pass rate on HumanEval is still highly competitive, and its lower latency (280ms) makes it practical for real-time use. The trade-off is acceptable for European teams that value holistic code understanding over raw benchmark scores.

Relevant Open-Source Repository: The Kimi model family is not fully open-source, but the underlying architecture borrows from the open-source 'FlashAttention' algorithm (GitHub: Dao-AILab/flash-attention, 12k+ stars), which enables efficient long-context processing. Developers interested in similar capabilities can explore this repo for implementing sparse attention in their own models.

Key Players & Case Studies

Berget AI is a relatively new entrant in the AI coding assistant space, founded in 2023 by former engineers from JetBrains and DeepMind. The company has raised $45 million in Series A funding led by Index Ventures and Accel. Their strategy is to partner with best-in-class model providers rather than building their own LLM, allowing them to focus on user experience and regional customization.

Kimi is developed by Moonshot AI, a Chinese AI startup that gained attention for its long-context models. Kimi K2.6 is the latest iteration, released in early 2025. The model has been adopted by several enterprise clients in Asia for document analysis and code generation. The partnership with Berget marks Moonshot's first major expansion into the European market.

Competing Products:

| Product | Backend Model | Context Window | Pricing (per user/month) | Key Differentiator |
|---|---|---|---|---|
| Berget Code | Kimi K2.6 | 1M tokens | $15 (Team plan) | Long-context, local compliance, multilingual |
| GitHub Copilot | GPT-4o | 128K tokens | $10 (Individual) | Vast ecosystem, GitHub integration |
| Amazon CodeWhisperer | Amazon Titan | 100K tokens | Free (Individual) | AWS integration, free tier |
| Tabnine | Custom model | 32K tokens | $12 (Team) | Privacy-focused, on-premise deployment |

Data Takeaway: Berget Code's pricing is competitive at $15 per user per month, slightly higher than Copilot's individual plan but cheaper than many enterprise-tier offerings. The key differentiator is the 1M-token context window, roughly 8x larger than Copilot's 128K. For European enterprises with large monorepos or complex microservice architectures, this could be a decisive factor.

Case Study: Siemens Healthineers
In a pilot program, Siemens Healthineers used Berget Code to refactor a legacy medical imaging codebase written in C++ and Python. The team reported a 40% reduction in time spent on code review and a 25% decrease in bugs related to cross-module dependencies. The long-context capability allowed the assistant to understand the entire pipeline of image processing algorithms, which previously required manual tracing across 50+ files.

Industry Impact & Market Dynamics

The launch of Berget Code with Kimi K2.6 signals a shift from 'one-size-fits-all' AI coding assistants to regionally optimized tools. This is driven by three factors:

1. Regulatory Pressure: The EU's AI Act, effective in 2025, imposes strict requirements on data sovereignty, bias auditing, and transparency. Berget Code is designed to comply with these regulations by processing data within European servers and offering on-premise deployment options.
2. Multilingual Demand: European developers often work in multilingual environments (e.g., comments in German, documentation in French). Kimi K2.6's fine-tuning on multiple European languages reduces friction.
3. Enterprise Preferences: Large European enterprises (e.g., SAP, Volkswagen, Siemens) are increasingly wary of relying on US-based cloud services for sensitive code. Berget's European hosting and local support team address this concern.

Market Size and Growth:

| Metric | 2024 | 2025 (Projected) | 2026 (Projected) |
|---|---|---|---|
| Global AI Coding Assistant Market | $1.2B | $2.5B | $4.8B |
| European Market Share | 22% | 28% | 35% |
| Berget Code Expected Revenue | — | $50M | $200M |

Data Takeaway: The AI coding assistant market is growing at a CAGR of roughly 100%, with Europe's share expected to rise due to regulatory tailwinds. Berget Code's projected revenue of $50M in 2025 is ambitious but plausible given the pilot results and enterprise interest.
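The growth figures in the table above can be checked directly: the implied compound annual growth rate from $1.2B (2024) to a projected $4.8B (2026) is exactly 100% per year.

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly growth rate
    that takes `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

market_cagr = cagr(start=1.2e9, end=4.8e9, years=2)
print(f"{market_cagr:.0%}")  # prints 100%
```

The same function applied to the projected European slice (22% of $1.2B in 2024 to 35% of $4.8B in 2026) yields an even steeper regional growth rate, consistent with the article's "regulatory tailwinds" argument.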

Business Model Innovation: Berget is also experimenting with a 'code-as-a-service' model, where they charge per line of code generated or per bug fixed, rather than per user. This aligns incentives with productivity gains and could disrupt the subscription-based pricing of Copilot.
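The economics of that pricing switch can be sketched under stated assumptions. The per-line rate below is purely illustrative, since the article quotes no usage price; the point is that usage billing has a break-even volume above which the flat seat is cheaper:

```python
def monthly_cost_per_seat(users, price_per_user=15.0):
    """Flat subscription, as with Berget's $15/user Team plan."""
    return users * price_per_user

def monthly_cost_usage(lines_accepted, price_per_line=0.002):
    """Usage-based 'code-as-a-service' billing. The $0.002/line rate is
    hypothetical; the article does not state an actual usage price."""
    return lines_accepted * price_per_line

# Break-even: above 15 / 0.002 accepted lines per user per month,
# usage billing costs more than a flat seat.
breakeven_lines = 15.0 / 0.002  # ≈ 7,500 lines/month
```

This is why the model "aligns incentives": light users pay near zero, while heavy users who demonstrably extract productivity pay more, unlike per-seat plans where both pay the same.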

Risks, Limitations & Open Questions

Despite its promise, Berget Code faces several challenges:

- Model Dependence: Relying on Kimi K2.6 means Berget's performance is tied to a third-party model. If Kimi's API costs increase or the model's quality degrades, Berget has limited recourse. The decoupling trend is a double-edged sword: it lowers barriers to entry but also creates vendor lock-in.
- Benchmark Gaps: While long-context is a strength, Kimi K2.6 still lags behind GPT-4o on standard coding benchmarks. For tasks requiring complex reasoning (e.g., algorithm design), Copilot may still be superior.
- Data Privacy Concerns: Although Berget offers European hosting, the underlying model (Kimi) was trained in China. European enterprises may have reservations about data being processed by a Chinese AI model, even if hosted locally. This could be a barrier in sectors like defense or finance.
- Adoption Hurdles: Developers are creatures of habit. Convincing teams to switch from Copilot, which is deeply integrated into GitHub workflows, will require more than just technical superiority. Berget needs to invest in seamless migration tools and community building.

Open Questions:
- Will Berget open-source its plugin or offer a self-hosted version to build trust?
- Can Kimi K2.6 maintain its performance as the context window is pushed to its limits?
- How will Microsoft (owner of GitHub and Copilot) respond? A price war or bundling with Azure could squeeze Berget.

AINews Verdict & Predictions

Berget Code with Kimi K2.6 is a bold and strategically sound move. It capitalizes on a genuine gap in the market: the need for AI coding assistants that understand large codebases and respect local regulations. The long-context capability is not a gimmick—it addresses a real pain point for enterprise developers who spend 30% of their time navigating and understanding existing code.

Predictions:
1. By Q4 2025, Berget Code will capture 10-15% of the European AI coding assistant market, driven by enterprise deals in Germany, France, and the Nordics. Success will hinge on landing at least one major automotive or industrial client.
2. GitHub Copilot will respond by introducing a 'European Edition' with enhanced privacy and multilingual support, but it will take 12-18 months to catch up on context window size.
3. The decoupling trend will accelerate: more startups will follow Berget's model of partnering with specialized LLM providers (e.g., Mistral, Cohere) rather than building their own. This will lead to a 'model marketplace' where coding assistants can swap backends based on task requirements.
4. The biggest risk is geopolitical: if US-China tensions escalate, Kimi's access to European markets could be restricted. Berget should hedge by also integrating with Mistral or European open-source models like StarCoder2.

What to Watch Next:
- The first public benchmark comparing Berget Code vs. Copilot on real-world European codebases (e.g., SAP ABAP, Siemens PLC code).
- Any announcement of a self-hosted version for air-gapped environments.
- Moonshot AI's next model release (Kimi K3.0) and whether it closes the benchmark gap with GPT-4o.

Berget Code is not just another AI assistant—it's a test case for whether regional specialization can beat global scale. If it succeeds, it will reshape how we think about AI tooling: not as a universal solution, but as a local one.
