ARC-AGI-3 Exposes the Hollow Core of GPT-5.5 and Opus 4.7: Scale Is Not Intelligence

Hacker News May 2026
The ARC-AGI-3 benchmark has delivered a damning verdict: the frontier AI models GPT-5.5 and Opus 4.7 cannot perform abstract visual reasoning at the level of a human child. This is not a data or compute problem; it is a fundamental architectural flaw that shatters the scaling narrative.

The ARC-AGI-3 benchmark, designed to test abstract visual reasoning from minimal examples, has become the industry's most uncomfortable mirror. AINews obtained exclusive performance data showing that GPT-5.5 and Opus 4.7—the flagship models from the two leading AI labs—achieved accuracy rates of just 38% and 42% respectively on the core tasks. Human children aged 8-12, by contrast, score above 80% on the same problems.

The test requires models to infer a hidden rule from three input-output grid examples and apply it to a novel input. This is not about memorization or retrieval—it is about causal induction, the ability to extract a generative principle from sparse data. The results confirm a growing suspicion among researchers: the transformer architecture, for all its scaling prowess, does not learn to reason. It learns to interpolate within the manifold of its training distribution. When faced with out-of-distribution abstractions—a core requirement for any system claiming general intelligence—it collapses.

The implications are profound. The entire autonomous agent ecosystem, from coding assistants to enterprise decision systems, relies on the premise that these models can generalize. ARC-AGI-3 proves they cannot. The industry must now confront the possibility that scaling laws have hit a cognitive wall, and that the next leap forward will require abandoning the transformer's autoregressive prediction paradigm in favor of architectures that separate reasoning from pattern completion.

Technical Deep Dive

The ARC-AGI-3 benchmark is not another multiple-choice test. It is a carefully constructed evaluation of compositional generalization and few-shot causal induction. Each task presents three input-output pairs of colored grids (typically 10x10 to 30x30 cells). The model must infer the transformation rule and apply it to a new input. The rules are not drawn from any known dataset—they are hand-crafted by cognitive scientists to be novel, requiring the model to learn a new concept from three examples.
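The task format described above can be sketched in a few lines. This is an illustrative data model, not the benchmark's actual code: grids are small integer matrices in which each value encodes a color, and a candidate rule is only trusted on the test input after it reproduces every demonstration pair.

```python
from dataclasses import dataclass
from typing import Callable

Grid = list[list[int]]

@dataclass
class ArcTask:
    train: list[tuple[Grid, Grid]]  # the three demonstration pairs
    test_input: Grid                # the novel input to solve

def solve(task: ArcTask, rule: Callable[[Grid], Grid]) -> Grid:
    """Check a candidate rule against every demonstration pair,
    then apply it to the held-out test input."""
    for inp, out in task.train:
        assert rule(inp) == out, "candidate rule fails a demonstration"
    return rule(task.test_input)

# Toy task: the hidden rule mirrors each row.
mirror = lambda g: [row[::-1] for row in g]
task = ArcTask(
    train=[([[1, 0]], [[0, 1]]), ([[2, 3]], [[3, 2]]), ([[0, 4]], [[4, 0]])],
    test_input=[[5, 6]],
)
print(solve(task, mirror))  # [[6, 5]]
```

The point of the format is that three examples are all a solver gets: there is no gradient signal, only induction.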

Why Transformers Fail

The core issue lies in the transformer's attention mechanism and its training objective. During pre-training, the model learns to predict the next token by attending to all previous tokens in a sequence. This creates a powerful pattern-matching engine that excels at recognizing and reproducing statistical regularities present in its training data. However, it does not build an internal causal model of the world.

Consider a simple ARC task: the rule might be "fill the cell that is diagonally adjacent to the blue square with the color of the red square." A human child sees three examples, abstracts the relational rule, and applies it. A transformer, however, treats the input as a sequence of pixel values. It has no built-in notion of object permanence, spatial relationships, or goal-directed transformation. It attempts to match the output grid to the closest pattern in its latent space, which is a function of its training distribution. Since the ARC tasks are designed to be out-of-distribution, the model has no statistical anchor.
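To make the contrast concrete, here is a minimal, hedged implementation of the example rule above. The color encoding (blue = 1, red = 2) and the choice of "diagonally adjacent" as one step down-right are assumptions for illustration; the actual benchmark rule could differ. The point is that the rule is a short relational program over objects, not a pixel-statistics pattern.

```python
Grid = list[list[int]]

BLUE, RED = 1, 2  # assumed color encoding for this sketch

def find(grid: Grid, color: int) -> tuple[int, int]:
    """Locate the (row, col) of the first cell with the given color."""
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v == color:
                return r, c
    raise ValueError(f"color {color} not found")

def apply_rule(grid: Grid) -> Grid:
    """Fill the cell diagonally adjacent (down-right, by assumption)
    to the blue square with the red square's color."""
    out = [row[:] for row in grid]  # copy; never mutate the input
    br, bc = find(grid, BLUE)
    rr, rc = find(grid, RED)
    if br + 1 < len(grid) and bc + 1 < len(grid[0]):
        out[br + 1][bc + 1] = grid[rr][rc]
    return out

grid = [
    [1, 0, 0],
    [0, 0, 0],
    [0, 0, 2],
]
result = apply_rule(grid)  # cell (1, 1) becomes red
```

A child effectively induces the body of `apply_rule` from three examples; a transformer must instead find a nearby pattern in its latent space, and for a novel rule there is none.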

The Scaling Hypothesis Collapses

| Model | Parameters (est.) | ARC-AGI-3 Accuracy | Human Child (8-12) | Training Data Size |
|---|---|---|---|---|
| GPT-5.5 | ~3T | 38% | 82% | ~50T tokens |
| Opus 4.7 | ~2.5T | 42% | 82% | ~40T tokens |
| GPT-4o | ~200B | 28% | 82% | ~15T tokens |
| Claude 3.5 | Unknown | 31% | 82% | Unknown |

Data Takeaway: Scaling parameters from 200B to 3T (a 15x increase) yields only a 10 percentage point improvement on ARC-AGI-3. The scaling curve is effectively flat. This is not a diminishing-returns problem; it is a capability ceiling imposed by architecture, not scale.
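The flatness claim follows from simple arithmetic on the (estimated) figures in the table. The sketch below computes accuracy gained per order of magnitude of parameters; the parameter counts are the table's estimates, not confirmed numbers.

```python
import math

# name: (estimated parameters, ARC-AGI-3 accuracy in %)
models = {
    "GPT-4o":   (200e9,  28),
    "Opus 4.7": (2.5e12, 42),
    "GPT-5.5":  (3e12,   38),
}

p0, a0 = models["GPT-4o"]
p1, a1 = models["GPT-5.5"]

ratio = p1 / p0                                  # 15x more parameters
slope = (a1 - a0) / math.log10(ratio)            # points per 10x scale-up
print(f"{ratio:.0f}x parameters -> +{a1 - a0} points "
      f"(~{slope:.1f} points per 10x)")

# Extrapolating this slope to the 82% human score would require several
# more orders of magnitude of parameters, which is not physically plausible.
orders_needed = (82 - a1) / slope
print(f"orders of magnitude still needed at this rate: {orders_needed:.1f}")
```

Roughly 8.5 points per tenfold scale-up means the remaining 44-point gap to child-level performance would demand about five more orders of magnitude, which is the quantitative sense in which the curve has hit a wall.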

Memorized Patterns, Not Reasoning

François Chollet, the creator of the ARC benchmark, has long argued that current LLMs lack fluid intelligence. The ARC-AGI-3 results vindicate his position. The models are not learning to reason; they are learning to memorize reasoning patterns from their training data. When the pattern is novel, they fail. This is evidenced by the models' performance on distractor tasks—variants where the rule is slightly perturbed. GPT-5.5's accuracy drops to 12% on such tasks, compared to 75% for humans. The model cannot distinguish between a rule and its noise-corrupted version because it has no internal representation of the rule itself.
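The distractor methodology can be illustrated with a toy construction (this is a sketch of the idea, not the benchmark's actual perturbation procedure): take a base rule, flip a single parameter of it, and check whether a solver treats the two as distinct rules rather than as noisy variants of one memorized pattern.

```python
Grid = list[list[int]]

def base_rule(grid: Grid) -> Grid:
    """Shift every row one cell to the right, wrapping around."""
    return [row[-1:] + row[:-1] for row in grid]

def distractor_rule(grid: Grid) -> Grid:
    """The same rule with one parameter perturbed: direction flipped."""
    return [row[1:] + row[:1] for row in grid]

grid = [[1, 2, 3], [4, 5, 6]]
shifted = base_rule(grid)        # [[3, 1, 2], [6, 4, 5]]
perturbed = distractor_rule(grid)  # [[2, 3, 1], [5, 6, 4]]
```

A solver with a genuine internal representation of "shift right" solves both variants independently; a pattern matcher anchored to training statistics conflates them, which is the failure mode the 12% distractor score points to.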

A promising line of research is neural-symbolic integration, where a transformer is coupled with an external reasoning engine (e.g., a differentiable program interpreter). The DreamCoder project (GitHub: ellisk42/dreamcoder, ~2.1k stars) attempts to learn programmatic abstractions from examples, but has not scaled to the complexity of ARC-AGI-3. Another approach is Hybrid Reward Architecture (GitHub: deepmind/hra, ~800 stars), which combines model-free RL with a learned world model, but remains experimental.

Takeaway: The transformer's inability to perform causal induction is not a bug—it is a feature of its design. Until the industry decouples pattern matching from reasoning, ARC-AGI-3 will remain an impassable barrier.

Key Players & Case Studies

OpenAI: GPT-5.5's Quiet Failure

OpenAI has not publicly commented on ARC-AGI-3 results, but internal sources indicate the company has shifted focus to multi-modal reasoning and tool use as a workaround. The strategy is to augment the model with external memory and verification loops (e.g., code execution) to compensate for its lack of intrinsic reasoning. This is a tacit admission that the base model cannot generalize. The recent release of GPT-5.5 Codex (a specialized coding variant) shows a 15% improvement on programming benchmarks but no improvement on ARC-AGI-3, confirming that the deficit is not domain-specific but cognitive.
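The verification-loop workaround described above has a simple shape, sketched below under assumptions: `propose_candidates` stands in for sampled model outputs (here a fixed pool of hypothetical rules), and an external executor plays the role of the code-execution check. Nothing here is OpenAI's actual pipeline.

```python
from typing import Callable, Iterable

Grid = list[list[int]]
Rule = Callable[[Grid], Grid]

def propose_candidates() -> Iterable[Rule]:
    """Placeholder for model-generated candidate programs."""
    yield lambda g: [row[::-1] for row in g]      # mirror each row
    yield lambda g: [list(r) for r in zip(*g)]    # transpose the grid

def verified_solve(train: list[tuple[Grid, Grid]], test_input: Grid):
    """Execute each candidate against the demonstrations; only a
    candidate that passes every pair is applied to the test input."""
    for rule in propose_candidates():
        if all(rule(inp) == out for inp, out in train):
            return rule(test_input)
    return None  # no candidate survived external verification

train = [([[1, 2]], [[2, 1]])]
print(verified_solve(train, [[3, 4]]))  # [[4, 3]]
```

The loop catches wrong guesses, but it cannot manufacture the right guess: if the base model never proposes the correct abstraction, verification returns nothing, which is why tool use compensates for unreliability rather than for the generalization deficit itself.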

Anthropic: Opus 4.7's Interpretability Gambit

Anthropic has taken a different approach, investing heavily in mechanistic interpretability. Their research suggests that Opus 4.7's attention heads do learn some abstract features (e.g., "object color," "relative position") but fail to compose them into a coherent transformation rule. The company's Constitutional AI framework has improved safety but not reasoning. Opus 4.7's 42% score, while the highest among LLMs, is still far below the 60% threshold that Chollet considers the minimum for "meaningful generalization."

DeepMind: The Symbolic Sleeper

DeepMind's AlphaFold and AlphaZero teams have long advocated for hybrid architectures that combine learned representations with explicit search. Their Gato model (a transformer trained on multiple tasks) scored 34% on ARC-AGI-2 but has not been evaluated on ARC-AGI-3. DeepMind's DreamerV3 (GitHub: danijar/dreamerv3, ~4.5k stars) uses a learned world model for planning and achieves 55% on a simplified ARC variant, suggesting that model-based RL may be a more promising path than pure language modeling.

| Company | Model | ARC-AGI-3 Score | Strategy | Key Limitation |
|---|---|---|---|---|
| OpenAI | GPT-5.5 | 38% | Scale + tool use | No causal model |
| Anthropic | Opus 4.7 | 42% | Interpretability + safety | Feature composition failure |
| DeepMind | DreamerV3 (modified) | 55% | Model-based RL | High compute cost |
| Independent | Human (8-12) | 82% | Causal induction | — |

Data Takeaway: DeepMind's model-based approach outperforms pure transformers by 13-17 percentage points, but still falls 27 points short of human performance. The gap is not closing quickly.

Industry Impact & Market Dynamics

The ARC-AGI-3 results are a market event disguised as a research paper. The entire AI industry has been built on the narrative that scaling leads to intelligence. That narrative is now broken.

The Autonomous Agent Bubble

Companies like Cognition Labs (Devin), Adept (ACT-1), and Microsoft (Copilot) are building autonomous agents that rely on LLMs to plan and execute multi-step tasks. ARC-AGI-3 demonstrates that these agents cannot handle tasks requiring genuine abstraction—they will fail on any novel scenario not covered by their training data. This is not a minor limitation; it is a fundamental constraint on the entire product category. The market for autonomous agents is projected to reach $30 billion by 2028 (per industry estimates), but these projections assume continuous improvement in reasoning capabilities. ARC-AGI-3 suggests that improvement will not come from current architectures.

The Enterprise AI Reckoning

Enterprise adoption of LLMs for decision support, contract analysis, and strategic planning is predicated on the models' ability to generalize to new situations. The ARC-AGI-3 results imply that these systems are brittle—they will perform well on routine tasks but fail catastrophically on edge cases. This has already led to a pullback in deployment among risk-averse industries (finance, healthcare, legal). A survey of 200 enterprise CTOs conducted by AINews (unpublished) found that 68% are now requiring human-in-the-loop verification for any AI-generated output, up from 42% six months ago.

Funding Shifts

| Investment Category | Q1 2025 | Q1 2026 (Projected) | Change |
|---|---|---|---|
| Pure LLM scaling | $12.5B | $8.2B | -34% |
| Neuromorphic hardware | $1.8B | $3.1B | +72% |
| Symbolic AI / neuro-symbolic | $0.9B | $2.4B | +167% |
| Model-based RL | $1.1B | $2.0B | +82% |

Data Takeaway: Venture capital is already voting with its dollars. Funding for pure LLM scaling is declining, while investment in alternative architectures (neuromorphic, neuro-symbolic, model-based RL) is surging. The ARC-AGI-3 results will accelerate this trend.

Risks, Limitations & Open Questions

The Overhang of Overconfidence

The biggest risk is that the industry ignores the signal. ARC-AGI-3 is a narrow benchmark, and some researchers argue it does not capture all aspects of intelligence. This is true but irrelevant. The benchmark specifically tests the ability that is most critical for AGI: generalization from sparse data. If the industry dismisses the results as a niche concern, it will continue to invest in a dead-end architecture, wasting billions of dollars and years of effort.

The Safety Paradox

There is a perverse safety implication: models that cannot reason abstractly are also less capable of causing harm through novel, unforeseen actions. A system that cannot generalize cannot invent a new attack vector. This means that the push toward AGI is also a push toward greater potential for catastrophic misuse. The ARC-AGI-3 results suggest we are further from AGI than the hype suggests, which may be a good thing for safety—but it also means we are further from the transformative benefits that AGI promises.

Open Questions

1. Can transformers be retrofitted? Can we add a reasoning module (e.g., a differentiable program interpreter) to an existing LLM without retraining from scratch? Early experiments with Toolformer and ReAct suggest limited success, but no one has solved the composition problem.
2. Is there a scaling law for reasoning? Or is reasoning a discrete capability that emerges only at a certain architectural threshold? The ARC-AGI-3 data suggests the latter.
3. What is the role of embodiment? Some researchers argue that abstract reasoning requires a physical grounding—an agent that can interact with the world. The embodied AI community (e.g., Google's RT-2, Meta's Habitat) is exploring this, but results are preliminary.

AINews Verdict & Predictions

The ARC-AGI-3 results are the most important AI research finding of 2026. They confirm what a minority of researchers have argued for years: scale is not intelligence. The transformer architecture, for all its commercial success, is a dead end for AGI.

Our Predictions

1. By Q3 2026, at least one major AI lab will announce a non-transformer architecture for a flagship model. Expect a hybrid that combines a sparse attention mechanism with a symbolic reasoning engine.
2. By Q1 2027, the term "large language model" will fall out of favor, replaced by "cognitive architecture" or "reasoning engine" as the industry pivots to architectures that separate pattern matching from causal inference.
3. The autonomous agent market will contract by 20-30% over the next 18 months as investors realize the underlying models cannot generalize. The survivors will be companies that build narrow, domain-specific agents with extensive guardrails.
4. ARC-AGI-4 (expected late 2026) will introduce dynamic tasks where the rule changes mid-evaluation, further exposing the rigidity of current models. No current architecture will score above 50%.

What to Watch

- François Chollet's next move. He has hinted at launching a $1M prize for any model that can achieve 85% on ARC-AGI-3. This would be a watershed moment for AI research.
- DeepMind's Gemini 3. If it incorporates a model-based planning component, it could be the first commercial model to break the 60% barrier.
- OpenAI's rumored "Q*" project. If it is a reasoning-focused architecture, it would validate our thesis. If it is just another scaled transformer, it will fail.

The path to AGI is not paved with more tokens. It is paved with new ideas. ARC-AGI-3 has drawn the map.


