Merging ChatGPT and Codex: OpenAI's Bold Bet on a Unified AI Agent Platform

TechCrunch AI May 2026
OpenAI is planning a deep integration of ChatGPT and Codex, signaling a shift from multiple standalone products toward a single, unified AI agent platform. Greg Brockman's return to product strategy marks this change, which aims to blur the line between natural-language interaction and software development.

OpenAI co-founder Greg Brockman has reassumed control over product strategy, and internal signals point to a major integration project: merging ChatGPT with the Codex programming tool. This is not a minor feature update but a fundamental restructuring of OpenAI's product portfolio. The goal is a single, seamless AI agent that can handle everything from casual conversation to complex software engineering: writing, debugging, and even deploying code within the same chat interface.

The move reflects a broader industry trend in which AI models evolve from specialized tools into general-purpose platforms. For OpenAI, it is both a defensive measure against competitors like Anthropic and Google, who are aggressively building their own integrated ecosystems, and an offensive play to capture the consumer and developer markets at once.

The technical hurdles are immense: preserving ChatGPT's conversational fluidity while retaining Codex's precision in code generation requires novel architectures, likely involving dynamic model routing and context-aware execution environments. If successful, this could redefine the AI assistant as a true co-creator rather than a question-answerer. The stakes are high, and the outcome will shape the next phase of the AI platform wars.

Technical Deep Dive

The proposed merger of ChatGPT and Codex is far more complex than simply adding a code interpreter to a chatbot. At its core, it requires a new architectural paradigm: a unified agent that can dynamically switch between conversational reasoning and code execution modes, often within the same thread.

Architecture & Engineering Challenges

OpenAI's current stack likely separates the two products at the inference layer. ChatGPT uses a fine-tuned version of GPT-4 (or GPT-4o) optimized for dialogue, safety, and general knowledge, while Codex—now largely superseded by GPT-4's native coding capabilities—was originally a specialized model fine-tuned on public GitHub repositories. The merger implies a single model or a multi-model orchestration system that can handle both tasks without degrading performance.

One plausible approach is a mixture-of-experts (MoE) architecture with specialized 'expert' modules for conversation and coding, gated by a router that predicts the user's intent. For example, when a user says "Write a Python script to scrape this website," the router activates the coding expert; when the user follows up with "Explain how it works," it switches to the conversational expert. This mirrors the MoE design widely rumored to underpin GPT-4, but applied at the product level rather than the model level.
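A minimal sketch of such a product-level router, under stated assumptions: the keyword-based classifier and the two stand-in expert handlers below are hypothetical stand-ins, since a production system would use a learned gating model rather than keyword matching.

```python
# Sketch of a product-level intent router that gates between a conversational
# expert and a coding expert. All names are illustrative; a real router would
# be a trained classifier, not a keyword check.

CODE_HINTS = {"write", "script", "debug", "function", "scrape", "compile", "refactor"}

def route(user_message: str) -> str:
    """Predict which expert should handle this turn."""
    tokens = set(user_message.lower().split())
    return "coding" if tokens & CODE_HINTS else "conversation"

def handle(user_message: str) -> str:
    expert = route(user_message)
    if expert == "coding":
        return f"[coding expert] generating code for: {user_message!r}"
    return f"[conversational expert] replying to: {user_message!r}"

print(route("Write a Python script to scrape this website"))  # coding
print(route("Explain how it works"))                          # conversation
```

The key design point this illustrates is that routing happens per turn, so a single thread can alternate between experts without the user ever switching products.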

Another critical component is the execution environment. Codex currently runs code in a sandboxed container, returning outputs. For a unified agent, this sandbox must be persistent across turns, allowing the agent to remember variables, import libraries, and even run background processes. This introduces latency and security risks. OpenAI has already experimented with this in ChatGPT's 'Code Interpreter' plugin (now Advanced Data Analysis), but that is a limited, session-based environment. The merged product would need a full-fledged, stateful runtime.
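A toy illustration of the difference between session-based and stateful execution: the `StatefulSession` class below is a hypothetical sketch that keeps one namespace alive across turns. A real agent runtime would isolate this in a sandboxed container rather than calling `exec` in-process.

```python
# Toy model of a stateful execution session: one namespace persists across
# turns, so variables and imports defined in earlier turns remain available
# in later ones. Real runtimes would sandbox this, not exec() in-process.

class StatefulSession:
    def __init__(self):
        self.namespace: dict = {}  # survives across turns

    def run(self, code: str) -> None:
        exec(code, self.namespace)

session = StatefulSession()
session.run("import math")                 # turn 1: import a library
session.run("radius = 2.0")                # turn 2: define a variable
session.run("area = math.pi * radius**2")  # turn 3: reuse both
print(round(session.namespace["area"], 2))  # 12.57
```

This also makes the cost concern concrete: persistence means the runtime must hold per-user state between turns, which is exactly what the session-based Advanced Data Analysis environment avoids.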

Relevant Open-Source Projects

Developers exploring similar ideas can look at:
- Open Interpreter (GitHub: ~55k stars): An open-source project that lets LLMs run code (Python, JavaScript, Shell) locally. It uses a similar 'write-execute-return' loop but lacks the conversational depth of ChatGPT. Its recent updates focus on improved sandboxing and support for more languages.
- SWE-agent (GitHub: ~15k stars): Developed by Princeton researchers, this system turns LLMs into software engineering agents that can browse repositories, edit files, and run tests. It demonstrates the complexity of autonomous code generation but is still research-grade.
- Aider (GitHub: ~25k stars): A command-line tool for pair programming with LLMs, supporting multi-file edits and git integration. It shows how conversational context can be used for code refactoring.

Performance Benchmarks

The table below compares the current capabilities of ChatGPT (with Advanced Data Analysis) and Codex-style code generation (via GPT-4 Turbo), plus a hypothetical merged system:

| Capability | ChatGPT (GPT-4o) | Codex (GPT-4 Turbo) | Hypothetical Merged Agent |
|---|---|---|---|
| Conversational Fluency (MMLU) | 88.7 | 86.4 | 88.0 (estimated) |
| Code Generation (HumanEval pass@1) | 67.0% | 82.0% | 78.0% (estimated) |
| Multi-turn Code Debugging | Limited | Poor | High (target) |
| Stateful Execution | Session-only | Single-turn | Persistent |
| Latency per turn | ~1.5s | ~2.0s | ~2.5s (estimated) |

Data Takeaway: The merged agent will likely trade off a few percentage points in raw code generation accuracy for vastly improved multi-turn interaction and stateful execution. The latency increase is a concern but can be mitigated with speculative decoding and caching.

Key Players & Case Studies

Greg Brockman's Return

Brockman's return to product strategy is the most telling signal. As OpenAI's first CTO and later President, he was instrumental in shaping the company's early product vision—from the API to ChatGPT. His departure from day-to-day product management in 2023 coincided with a period of rapid, sometimes chaotic, product launches (GPT-4, plugins, GPTs, Sora). Now, he is back to impose coherence. His track record suggests a focus on simplicity and reliability, which bodes well for the integration.

Competitive Landscape

OpenAI is not alone in this race. Several competitors are pursuing similar unified agent strategies:

| Company/Product | Strategy | Current Status | Key Differentiator |
|---|---|---|---|
| Anthropic (Claude) | 'Computer use' API + artifacts | Claude can control desktop apps and generate code in a side panel | Strong safety focus, longer context |
| Google (Gemini) | Gemini Apps + Project IDX | Gemini can generate and run code in Colab-like environments | Deep integration with Google Cloud and Workspace |
| GitHub Copilot | Workspace + Copilot Chat | Integrated into VS Code, but limited to coding tasks | Best-in-class IDE integration, but not a general assistant |
| Replit | Replit Agent | Full-stack app generation from natural language | End-to-end deployment, but less conversational |

Data Takeaway: OpenAI's advantage lies in its massive user base (over 400 million monthly active users for ChatGPT) and brand recognition. However, Google and Anthropic are closing the gap by offering more specialized integrations (Google Cloud, safety features). The merged agent could be OpenAI's 'moat'—a product that is hard to replicate because it combines consumer appeal with developer utility.

Case Study: Replit Agent

Replit's Agent, launched in late 2024, is the closest existing product to OpenAI's vision. It allows users to describe an app in natural language, and the agent writes the code, sets up the environment, and deploys it. However, it lacks deep conversational abilities—you cannot ask it to explain its reasoning or engage in a philosophical debate. OpenAI's merged agent would aim to be a superset: a Replit Agent that can also write poetry, plan a vacation, and then build a travel app for it.

Industry Impact & Market Dynamics

Market Size & Growth

The AI coding assistant market was valued at approximately $2.5 billion in 2024 and is projected to grow to $12 billion by 2028 (CAGR ~37%). The broader AI agent market is even larger, estimated at $5 billion in 2024, with expectations to reach $30 billion by 2030. By merging the two, OpenAI is positioning itself to capture a disproportionate share of both markets.

Business Model Implications

Currently, OpenAI charges separately for ChatGPT Plus ($20/month) and the API (usage-based). A merged product could justify a premium tier—perhaps $50–$100/month for a 'Pro' plan that includes persistent code execution, higher usage limits, and deployment capabilities. This would directly compete with GitHub Copilot ($10–$39/month) and Replit ($25–$100/month).

Funding & Valuation Context

OpenAI's last funding round (October 2024) valued the company at $157 billion, with a focus on AGI development. Investors are increasingly demanding clear monetization paths. A unified agent platform that serves both consumers and developers is a concrete path to revenue diversification beyond API tokens and subscriptions. If the merged product captures even 10% of the AI coding market, that's $1.2 billion in additional annual revenue by 2028.

Data Table: Revenue Potential

| Scenario | ChatGPT Users (M) | Conversion to Merged Agent | ARPU ($/month) | Annual Revenue ($B) |
|---|---|---|---|---|
| Conservative | 400 | 5% | 30 | 7.2 |
| Moderate | 400 | 10% | 50 | 24.0 |
| Aggressive | 400 | 20% | 75 | 72.0 |

Data Takeaway: Even conservative estimates show significant upside. The key variable is conversion rate, which depends on how well the integration works and whether it offers clear value over separate tools.
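The scenario figures in the table follow directly from users × conversion × ARPU × 12 months. Reproducing the arithmetic:

```python
# Reproduce the revenue table: annual revenue in $B from monthly users (M),
# conversion rate to the merged agent, and ARPU in $/month.

def annual_revenue_billions(users_m: float, conversion: float, arpu: float) -> float:
    return users_m * 1e6 * conversion * arpu * 12 / 1e9

scenarios = [("Conservative", 0.05, 30), ("Moderate", 0.10, 50), ("Aggressive", 0.20, 75)]
for name, conv, arpu in scenarios:
    print(f"{name}: ${annual_revenue_billions(400, conv, arpu):.1f}B")
# Conservative: $7.2B
# Moderate: $24.0B
# Aggressive: $72.0B
```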

Risks, Limitations & Open Questions

Technical Risks

1. Context Collapse: A single model handling both casual chat and precise code generation may suffer from 'mode confusion,' where it starts writing code when asked a simple question, or becomes too verbose when generating code. Fine-tuning and prompt engineering can mitigate this, but it's a delicate balance.
2. Security & Sandboxing: Allowing a chat agent to execute arbitrary code persistently raises serious security concerns. A malicious user could craft a prompt that escapes the sandbox. OpenAI's existing sandbox is robust, but a stateful, long-running environment increases the attack surface.
3. Latency & Cost: Running a stateful code execution environment for millions of users is expensive. Each turn may require spinning up a container, loading dependencies, and running inference. OpenAI will need to optimize aggressively or pass costs to users.
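To make the sandboxing risk concrete, the sketch below shows the most basic layer of isolation: running untrusted code in a separate process with a hard timeout. This is an illustration only; real sandboxing additionally requires containers, syscall filtering, and network and filesystem policy.

```python
# Minimal illustration of process-level isolation with a hard timeout.
# Real sandboxes need far more (containers, syscall filters, resource
# limits); this only shows the isolation-plus-timeout shape.

import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 2.0) -> str:
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "<killed: exceeded time limit>"

print(run_untrusted("print(1 + 1)").strip())             # 2
print(run_untrusted("while True: pass", timeout_s=0.5))  # <killed: exceeded time limit>
```

The tension described above is visible even here: spawning a fresh interpreter per turn is what makes this safe, and also what makes it slow.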

Strategic Risks

1. Alienating Developers: Codex's current users—professional developers—may resist a product that feels 'dumbed down' by conversational features. They want speed and precision, not chit-chat. OpenAI must ensure the developer experience remains first-class.
2. Regulatory Scrutiny: A unified agent that can write and deploy code could be used to generate malware, automate cyberattacks, or create deepfakes at scale. Regulators may impose stricter oversight on such platforms.
3. Competitive Response: If OpenAI succeeds, expect Google to accelerate its Gemini integration with Google Cloud's App Engine, and Anthropic to partner with a major IDE vendor (e.g., JetBrains). The window of advantage may be narrow.

Open Questions

- Will the merged agent support third-party plugins or remain a walled garden?
- How will OpenAI handle versioning—will old Codex endpoints remain available?
- What happens to the ChatGPT API? Will it be merged with the Codex API?

AINews Verdict & Predictions

Our Editorial Judgment: This is the most strategically important product decision OpenAI has made since launching ChatGPT. It is a bet that the future of AI is a single, omnipresent agent that handles all cognitive tasks—from writing a grocery list to deploying a microservice. If it works, it will be the 'iPhone moment' for AI agents. If it fails, it will be a costly distraction.

Specific Predictions:

1. By Q3 2026, OpenAI will launch a beta of the merged product under a new name (likely 'OpenAI Agent' or 'GPT-5 Agent'), with a tiered pricing structure starting at $20/month for basic chat+code and $100/month for persistent execution and deployment.
2. By Q1 2027, the product will support multi-modal inputs (images, audio, video) for code generation—e.g., uploading a UI mockup and having the agent write the frontend code.
3. By Q4 2027, OpenAI will open-source a lightweight version of the agent runtime, similar to how they open-sourced Whisper and CLIP, to build an ecosystem of third-party 'agent skills.'

What to Watch Next:
- Watch for job postings at OpenAI for 'Agent Runtime Engineers' and 'Sandbox Security Specialists.'
- Monitor the GitHub activity of Open Interpreter and SWE-agent—if OpenAI hires their maintainers, it's a strong signal.
- Pay attention to pricing changes for the ChatGPT API and Codex API; a price hike for standalone APIs would indicate a push toward the merged product.

The era of specialized AI tools is ending. The era of the universal AI agent is beginning. OpenAI is placing its biggest bet yet.
