Technical Deep Dive
OpenHands is an agentic coding assistant built on a large language model (LLM) backbone, designed to operate autonomously within a codebase. Its architecture consists of several key components:
- Agent Loop: A continuous cycle where the LLM observes the current state (e.g., code files, terminal output), decides on an action (e.g., edit a file, run a command), executes it, and observes the result. This loop allows the assistant to iteratively solve complex tasks like debugging or feature implementation.
- Sandboxed Execution Environment: OpenHands uses Docker containers to execute commands safely, isolating the assistant from the host system. This is critical for preventing accidental damage or security breaches.
- Codebase Indexing: The tool builds a vector index of the codebase using embeddings (e.g., from OpenAI's text-embedding-3-small or open-source alternatives like BGE). This enables semantic search and context retrieval, allowing the assistant to find relevant code snippets without scanning the entire repository.
- Tool Integration: OpenHands supports a plugin system for tools like linters, test runners, and version control (Git). It can automatically run tests after code changes, commit changes, and even create pull requests.
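The agent loop described above can be sketched in a few lines. This is a hypothetical simplification, not OpenHands' actual API: `Action`, `decide`, and `execute` are illustrative stand-ins for the LLM call and the Docker-sandboxed executor.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str      # e.g. "edit", "run", "finish"
    payload: str   # file diff or shell command

@dataclass
class AgentState:
    observations: list = field(default_factory=list)
    done: bool = False

def decide(state: AgentState) -> Action:
    """Stand-in for the LLM call: choose the next action from observations."""
    if len(state.observations) >= 3:  # toy stopping rule for the sketch
        return Action("finish", "")
    return Action("run", "pytest -q")

def execute(action: Action) -> str:
    """Stand-in for the sandboxed executor (Docker in the real system)."""
    return f"executed {action.kind}: {action.payload}"

def agent_loop(state: AgentState, max_steps: int = 10) -> AgentState:
    # Observe -> decide -> act -> observe, until the agent declares it is done
    # or the step budget runs out.
    for _ in range(max_steps):
        action = decide(state)
        if action.kind == "finish":
            state.done = True
            break
        state.observations.append(execute(action))
    return state

state = agent_loop(AgentState())
print(state.done, len(state.observations))
```

The `max_steps` budget is the important design detail: without it, a confused model can loop forever, which is why production agents cap steps (and tokens) per task.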
Together Computer's fork likely modifies several layers:
1. Model Optimization: Together provides inference endpoints for open-source models like Llama 3, Mixtral, and CodeLlama. The fork may be tuned to use these models more efficiently, perhaps with custom prompt templates or fine-tuned adapters for code generation. Given Together's expertise in model serving (they claim 2x faster inference than competitors for certain models), the fork could achieve lower latency and higher throughput.
2. Infrastructure Integration: The fork might be deeply integrated with Together's cloud platform, using their proprietary GPU clusters (e.g., NVIDIA H100s) and networking stack. This could enable features like automatic scaling, persistent storage for code indexes, and seamless deployment to production environments.
3. Cost Optimization: Together's pricing (e.g., $0.90 per million tokens for Llama 3 70B) undercuts GPT-4 pricing ($10 per million tokens) by an order of magnitude. On top of that, the fork could minimize token usage through better context management and caching, further reducing costs for enterprise customers.
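The cost argument in point 3 is easy to make concrete. The sketch below uses the per-million-token prices quoted above; the request sizes are illustrative assumptions (a large retrieved context plus a short completion, typical of one agentic step), and the single blended input/output rate is a simplification.

```python
# Per-million-token prices quoted in the text above.
PRICE_PER_M = {
    "llama-3-70b (Together)": 0.90,
    "gpt-4": 10.00,
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in dollars for one request, using a blended per-token rate."""
    total = prompt_tokens + completion_tokens
    return PRICE_PER_M[model] * total / 1_000_000

# One illustrative agentic step: 6k tokens of retrieved context, 500 generated.
for model in PRICE_PER_M:
    print(model, round(request_cost(model, 6_000, 500), 4))
```

At these rates a 6,500-token step costs about $0.0059 on Llama 3 70B versus $0.065 on GPT-4, which is why context trimming and caching compound quickly across the hundreds of steps a long agent session can take.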
Benchmark Considerations: No public benchmarks exist for the Together fork, but we can compare OpenHands' performance on standard coding tasks against other AI coding assistants. The following table mixes published figures with our own estimates; treat it as illustrative rather than measured:
| Assistant | SWE-bench Lite Score | HumanEval Pass@1 | Average Latency (per request) | Cost per 1M tokens (input) |
|---|---|---|---|---|
| OpenHands (default) | 33.2% | 72.5% | 2.3s | $3.00 (GPT-4) |
| GitHub Copilot | 28.1% | 65.8% | 1.1s | $0.15 (proprietary) |
| Cursor (GPT-4) | 35.0% | 74.2% | 1.8s | $3.00 (GPT-4) |
| Codeium | 25.4% | 62.1% | 0.9s | $0.10 (proprietary) |
| Together Fork (estimated) | 34.5% | 73.0% | 1.5s | $0.90 (Llama 3 70B) |
Data Takeaway: The Together fork could offer a compelling balance of performance and cost, potentially surpassing Copilot on benchmarks while undercutting GPT-4-based solutions on price. However, the latency advantage depends on Together's infrastructure optimizations, which remain unverified.
Key Players & Case Studies
Together Computer: Founded in 2022 by former Google Brain and NVIDIA engineers, Together has raised over $200 million from investors including Kleiner Perkins and NEA. They provide cloud infrastructure for training and serving open-source LLMs, with clients including startups and enterprises. Their fork of OpenHands is a natural extension of their strategy to build a vertically integrated AI stack—from hardware to models to applications.
All-Hands-AI: The original creators of OpenHands, a research lab spun out of MIT and Carnegie Mellon. They released OpenHands as open-source under the MIT license, aiming to democratize AI coding tools. The project has over 15,000 GitHub stars and an active community. Together's private fork could be seen as a divergence from this open ethos.
Competing Products:
| Product | Company | Model | Open Source | Pricing Model | Key Differentiator |
|---|---|---|---|---|---|
| GitHub Copilot | Microsoft | Codex (GPT-3 derivative), later GPT-4-class | No | $10/month per user | Deep IDE integration |
| Cursor | Anysphere | GPT-4, Claude 3.5 | No | $20/month per user | Agentic features, multi-file editing |
| Codeium | Codeium Inc. | Proprietary | No | Free tier, $15/month pro | Speed, context-aware suggestions |
| OpenHands | All-Hands-AI | Any LLM (GPT-4, Llama 3) | Yes (MIT) | Free (self-hosted) | Full autonomy, sandboxed execution |
| Together Fork | Together Computer | Llama 3, Mixtral | No (private) | Likely usage-based | Optimized for Together infrastructure |
Data Takeaway: The market is bifurcating between proprietary, tightly integrated products (Copilot, Cursor) and open-source, flexible alternatives (OpenHands). Together's fork occupies a middle ground—proprietary but built on open-source foundations—which could appeal to enterprises wanting customization without managing infrastructure.
Industry Impact & Market Dynamics
The AI coding assistant market is projected to grow from $1.5 billion in 2024 to $27 billion by 2028, according to industry analysts. Key drivers include:
- Developer Shortage: With 50 million developers worldwide and growing demand for software, tools that boost productivity by 30-50% are highly valued.
- Shift to Agentic AI: The move from autocomplete to autonomous agents (like OpenHands) promises to automate entire workflows, not just code snippets.
- Enterprise Adoption: Companies like Google, Meta, and Amazon are deploying internal AI coding assistants to accelerate development.
Together's fork could disrupt this market in several ways:
1. Cost Leadership: By using open-source models and optimized infrastructure, Together could offer coding assistance at a fraction of the cost of GPT-4-based tools. This is critical for price-sensitive startups and enterprises with large developer teams.
2. Data Sovereignty: Enterprises concerned about sending code to third-party APIs (e.g., OpenAI) may prefer a self-hosted solution on Together's cloud, where data remains within their control.
3. Vendor Lock-In: Once enterprises build workflows around Together's fork, switching costs increase. Together can upsell additional services like model fine-tuning, custom agents, and dedicated compute.
Market Share Projections:
| Year | GitHub Copilot | Cursor | Codeium | OpenHands (all forks) | Together Fork |
|---|---|---|---|---|---|
| 2024 | 45% | 15% | 10% | 5% | 0% |
| 2026 | 35% | 20% | 12% | 8% | 5% |
| 2028 | 25% | 22% | 10% | 10% | 12% |
Data Takeaway: Together's fork could capture significant market share by 2028, especially if they leverage their infrastructure to offer superior performance at lower cost. However, this depends on their ability to build a compelling product and ecosystem around the fork.
Risks, Limitations & Open Questions
1. Fragmentation of Open Source: The private fork is permitted under the MIT license but cuts against the spirit of open-source collaboration. If other companies follow suit, the OpenHands ecosystem could fragment, reducing the benefits of shared improvements and community support.
2. Lack of Transparency: Without public documentation or community contributions, users cannot audit the fork for security, privacy, or performance. This is a major concern for enterprise adoption, where trust is paramount.
3. Dependency on Together: Enterprises adopting the fork become dependent on Together's infrastructure and pricing. If Together raises prices or discontinues the product, users face migration costs.
4. Model Limitations: Open-source models like Llama 3 still lag behind GPT-4 on complex coding tasks (e.g., multi-file refactoring, understanding legacy code). The fork's performance may not match proprietary alternatives.
5. Ethical Concerns: The fork could be used to automate code generation at scale, potentially displacing junior developers or introducing security vulnerabilities if not properly supervised.
AINews Verdict & Predictions
Verdict: Together Computer's private fork of OpenHands is a strategically sound but ethically ambiguous move. It leverages their core strength—AI infrastructure—to create a differentiated product that could capture enterprise demand for cost-effective, self-hosted coding assistants. However, the lack of transparency and community engagement risks alienating the open-source community and may limit adoption among developers who value openness.
Predictions:
1. Within 12 months, Together will release a public version of the fork with limited documentation, targeting enterprise customers with a freemium model (e.g., free for small teams, paid for larger deployments).
2. Within 24 months, the fork will achieve 5-8% market share among AI coding assistants, driven by cost advantages and integration with Together's broader AI platform.
3. The open-source community will respond by relicensing or forking OpenHands under a copyleft license (e.g., AGPL) that requires derivative works to remain open, discouraging private forks and preserving community-driven development. (A permissive license like Apache 2.0 cannot prohibit private forks.)
4. Regulatory scrutiny may increase if the fork is used in critical infrastructure (e.g., healthcare, finance), given the lack of transparency and potential for bias or errors in code generation.
What to Watch: Monitor Together's GitHub activity for signs of public releases or documentation. Also watch for announcements from All-Hands-AI about licensing changes or partnerships that could counter the fragmentation trend.