The Computational Renaissance: Why AI Engineers Are Returning to Manual Transformer Tracing

Towards AI, March 2026
A quiet revolution is unfolding in AI labs and engineering teams around the world. As models grow exponentially more complex, a counterintuitive movement is gaining momentum: the deliberate, manual tracing of transformer neural networks, multiplication by multiplication. This is not academic nostalgia.

The AI industry faces a profound paradox. While deploying trillion-parameter systems that reshape economies, the foundational understanding of their core computational mechanics is becoming a specialized, even endangered, skill. A growing cohort of researchers, educators, and engineers is advocating for a radical return to computational first principles: manually tracing the complete forward and backward pass of a transformer model, executing every matrix multiplication, attention score calculation, and gradient update by hand or with minimal tooling.

This movement, crystallized by educational pioneers like Jay Alammar's visual explanations and Andrej Karpathy's from-scratch coding tutorials, has evolved beyond pedagogy into a professional practice. Proponents argue that over-reliance on high-level frameworks like PyTorch and TensorFlow, combined with the sheer scale of modern architectures, has created an "abstraction gap." Engineers can build atop these systems but often lack the granular intuition needed for novel architectural innovation, efficient debugging, or meaningful interpretability.

The implications are practical and immediate. Teams at companies like Anthropic and Cohere have reported implementing mandatory "circuit tracing" exercises for new hires, where engineers must manually compute attention patterns for small models on specific prompts. Independent researchers are publishing "arithmetic notebooks" that document the complete computational graph of models like GPT-2 small, layer by layer. The GitHub repository `transformer-arithmetic` has gained over 4,200 stars by providing templates for manually computing feedforward networks, layer norms, and softmax operations.

This isn't about rejecting modern tooling but about building a deeper substrate of knowledge. The movement posits that true mastery—the kind that leads to breakthroughs in efficiency, safety, and capability—requires intimate familiarity with the data flow that high-level APIs conceal. As AI systems are integrated into critical infrastructure, this foundational rigor becomes not just beneficial but essential for reliability and trust.

Technical Deep Dive

The core argument of the manual tracing movement rests on a specific technical claim: that the chain of linear algebra operations constituting a transformer has become obscured by layers of software abstraction. To understand what's being recovered, we must examine what's been hidden.

A standard transformer block consists of multi-head attention and a position-wise feedforward network. The attention mechanism alone involves computing the Query, Key, and Value matrices (Q, K, V) and then performing scaled dot-product attention: `Attention(Q, K, V) = softmax(QK^T / √d_k)V`. In a high-level framework, this is a single function call. Manually tracing it requires executing:
1. The matrix multiplication `QK^T`
2. The scaling division by `√d_k`
3. The application of a causal mask (for decoder models)
4. The softmax operation across the appropriate dimension
5. The final multiplication with `V`
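These five steps can be traced end to end in plain NumPy. A minimal sketch, assuming a single attention head with illustrative shapes and no batch dimension:

```python
import numpy as np

def manual_attention(Q, K, V, causal=True):
    """Trace scaled dot-product attention step by step."""
    d_k = Q.shape[-1]

    # Step 1: raw similarity scores via QK^T
    scores = Q @ K.T                                  # (seq, seq)

    # Step 2: scale by sqrt(d_k)
    scores = scores / np.sqrt(d_k)

    # Step 3: causal mask; each position attends only to itself and the past
    if causal:
        seq = scores.shape[0]
        mask = np.triu(np.ones((seq, seq), dtype=bool), k=1)
        scores = np.where(mask, -1e9, scores)

    # Step 4: softmax across the key dimension (max-subtracted for stability)
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)

    # Step 5: weighted sum of the values
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out, w = manual_attention(Q, K, V)
print(out.shape)   # (4, 8)
print(w[0])        # first row: all mass on position 0 under the causal mask
```

Printing `w` row by row is exactly the kind of inspection the movement advocates: the causal structure and the normalization are visible in the numbers, not hidden behind a fused kernel.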

Each step involves specific numerical considerations. For instance, the scaling by `√d_k` keeps dot products from growing with key dimensionality, which would otherwise saturate the softmax and vanish its gradients. Manually computing softmax exposes the necessity of numerical stability tricks like subtracting the maximum value before exponentiation. A practitioner who has only ever called `torch.nn.functional.softmax()` may never encounter this.
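The stability trick is easy to demonstrate. A sketch with deliberately large, illustrative logits:

```python
import numpy as np

def softmax_naive(x):
    e = np.exp(x)
    return e / e.sum()

def softmax_stable(x):
    # Subtracting the max leaves the result unchanged mathematically,
    # but keeps every exponent <= 0 so nothing overflows.
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.array([1000.0, 1001.0, 1002.0])   # illustrative extreme values

with np.errstate(over="ignore", invalid="ignore"):
    print(softmax_naive(logits))    # [nan nan nan]: exp(1000) overflows to inf
print(softmax_stable(logits))       # [0.09003057 0.24472847 0.66524096]
```

The stable version returns the same distribution as `softmax([0, 1, 2])`, which is the point: the shift is invisible in the output but decisive for the arithmetic.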

The backpropagation pass is where manual tracing delivers its deepest insights. Manually computing gradients for a self-attention block reveals how information flows backward through the computational graph. For the loss gradient with respect to the Query matrix: `∂L/∂Q = (∂L/∂Attention) * (∂Attention/∂Q)`. Computing `∂Attention/∂Q` involves differentiating through the softmax, which itself depends on the exponentiated and normalized `QK^T` matrix. This exercise makes concrete the often-abstract concept of "gradient flow" and highlights potential vanishing/exploding gradient points.
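The softmax Jacobian at the heart of `∂Attention/∂Q` can be derived by hand and sanity-checked against finite differences, the manual tracer's standard verification loop. A minimal sketch; the upstream gradient here is arbitrary:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def softmax_backward(s, grad_out):
    """Manual vector-Jacobian product through softmax.

    Uses ds_i/dx_j = s_i * (delta_ij - s_j), which collapses to
    s * (grad_out - dot(grad_out, s)).
    """
    return s * (grad_out - np.dot(grad_out, s))

x = np.array([0.2, -1.3, 0.8, 0.1])
s = softmax(x)
grad_out = np.array([1.0, 0.0, -0.5, 0.3])   # pretend upstream gradient

analytic = softmax_backward(s, grad_out)

# Central finite differences as an independent check on the derivation
eps = 1e-6
numeric = np.zeros_like(x)
for i in range(len(x)):
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    numeric[i] = (grad_out @ softmax(xp) - grad_out @ softmax(xm)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))   # tiny: the manual formula matches
```

Doing this check once by hand makes "gradient flow" tangible: the `s * (g - g·s)` structure shows directly why a saturated softmax (one `s_i` near 1) passes almost no gradient to the other positions.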

Several open-source projects are facilitating this practice. The `minGPT` repository by Andrej Karpathy remains the canonical example of a clean, from-scratch implementation. More specifically, the `nanoGPT` project strips this down further. For pure arithmetic tracing, the `transformer-circuits` repository from Anthropic provides tools for dissecting model computations. A newer project, `hand-calculation-transformer`, provides Jupyter notebooks that step through each operation of a 2-layer transformer on actual text, printing intermediate tensor shapes and values.

| Operation | PyTorch One-Liner | Manual Steps Required | Key Insight Revealed |
|---|---|---|---|
| Layer Normalization | `F.layer_norm(x)` | Compute mean, variance, normalize, scale & shift | The stability it provides against activation magnitude drift |
| Scaled Dot-Product Attention | `F.scaled_dot_product_attention(q, k, v)` | Matmul, scale, mask, softmax, matmul | The precise role of the scaling factor in gradient dynamics |
| GELU Activation | `F.gelu(x)` | Compute `x * Φ(x)` where Φ is Gaussian CDF | The approximate linearity for positive x vs. suppression for negative x |
| Feed-Forward Network | `Linear(4*d_model, d_model)(gelu(Linear(d_model, 4*d_model)(x)))` | Two linear transforms with GELU in between | The expansion factor's role in creating a learned non-linear function space |

Data Takeaway: The table reveals the dramatic compression between API calls and underlying computation. Each high-level function abstracts away 3-10 distinct arithmetic and logical steps where critical behaviors—numerical stability, gradient flow, representational capacity—are determined.
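The first table row can be unpacked directly. A sketch of layer normalization as its constituent steps, assuming identity-initialized `gamma` and `beta`:

```python
import numpy as np

def manual_layer_norm(x, gamma, beta, eps=1e-5):
    # Step 1: per-token mean and variance over the feature dimension
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    # Step 2: normalize to zero mean, unit variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Step 3: learned scale and shift restore representational freedom
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0, 3.0, 4.0],
              [100.0, 200.0, 300.0, 400.0]])   # wildly different magnitudes
gamma, beta = np.ones(4), np.zeros(4)
out = manual_layer_norm(x, gamma, beta)
print(out)   # both rows normalize to the same values
```

Both rows emerge essentially identical, which is the table's "key insight" made concrete: layer norm erases activation magnitude drift before it can compound across layers.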

Key Players & Case Studies

The movement is being driven by a confluence of educators, research institutions, and forward-thinking AI companies who recognize the strategic value of deep technical intuition.

Educational Pioneers: Jay Alammar's "The Illustrated Transformer" provided the initial visual mapping. Andrej Karpathy's "Let's build GPT: from scratch, in code, spelled out" lecture series shifted the focus to executable understanding. Recently, researchers like David Bau at Northeastern University and Chris Olah at Anthropic have pushed further into mechanistic interpretability, which requires even finer-grained tracing to attribute model behaviors to specific computational pathways.

Corporate Adoption: Companies building frontier models are integrating this philosophy into their engineering culture. Anthropic's interpretability team routinely performs "circuit analysis," manually tracing how concepts are represented and manipulated across layers. Sources indicate new research engineers undergo a "boot camp" where they derive and code core transformer components without autograd initially. At Cohere, training for the Command model family included exercises in manually computing attention patterns for debugging anomalous outputs.

Tool Builders: The open-source ecosystem is responding. Beyond educational repos, new tools are emerging for *assisted* manual tracing. The `TransformerLens` library by Neel Nanda provides hooks to easily extract and manipulate activations from Hugging Face models, designed for researchers who want to "poke" the model's internals. `Ecco` by Jay Alammar offers interactive visualizations of output token generation, showing attention and neuron activation contributions.

A notable case study is the development of Mamba by Albert Gu and Tri Dao. The authors' deep, first-principles understanding of sequence modeling—rooted in classical system theory—allowed them to move beyond the attention paradigm entirely to create a selective state space model. This breakthrough was arguably enabled by a team comfortable with the low-level computational trade-offs of sequence modeling, not just transformer API calls.

| Entity | Primary Contribution | Nature of Manual Tracing | Outcome/Impact |
|---|---|---|---|
| Andrej Karpathy (formerly OpenAI) | `nanoGPT`, educational lectures | From-scratch coding without high-level NN modules | Trained a generation of engineers on fundamentals; used to debug production models |
| Anthropic Interpretability Team | Circuit analysis, mathematical frameworks | Manual attribution of behaviors to neuron & attention head circuits | Identified safety-relevant circuits in Claude models; informs alignment techniques |
| EleutherAI / Open Source Community | `GPT-NeoX`, model dissection tools | Open-source implementation & analysis of large models | Democratized understanding of model internals; enabled independent safety research |
| University Research Labs (e.g., Stanford CRFM) | Mechanistic interpretability research | Painstaking manual analysis of small model computations | Published foundational papers on induction heads, indirect object identification, etc. |

Data Takeaway: The table shows a maturation from education to research to production. Manual tracing began as a teaching tool but is now a critical methodology for cutting-edge research in interpretability and novel architecture design at leading AI labs.

Industry Impact & Market Dynamics

The resurgence of manual computation is reshaping talent development, competitive advantage, and investment priorities across the AI sector.

Talent Market Transformation: There's a growing premium on engineers with "full-stack" AI understanding—those who can move from high-level architecture design down to the floating-point operations. Job descriptions at elite AI research labs increasingly include requirements like "ability to derive backpropagation for novel layers" or "comfort with low-level tensor operations." This has created a bifurcation in the talent pool, with a small, highly-valued cohort possessing these foundational skills. Bootcamps and university courses are scrambling to adjust curricula, moving away from pure API-based teaching to include dedicated modules on manual implementation and tracing.

Product Development & Debugging: Teams that practice manual tracing report tangible benefits in development velocity and system reliability. Debugging a mysterious model failure—such as a performance drop on a specific query type—becomes more systematic. Instead of blind hyperparameter tuning, engineers can hypothesize about which part of the computational graph might be failing (e.g., attention scores saturating, gradient vanishing in a particular layer) and instrument or trace that component directly. This leads to faster resolution of issues and more robust models.
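As one illustration of such targeted instrumentation, the entropy of a head's attention weights is a simple saturation signal. A sketch; the 0.1 threshold is an illustrative choice, not a standard:

```python
import numpy as np

def attention_entropy(weights, eps=1e-12):
    """Per-query entropy of attention weights; near zero means the
    distribution has collapsed onto a single key (saturation)."""
    return -(weights * np.log(weights + eps)).sum(axis=-1)

# A healthy, spread-out attention row vs. a saturated one
healthy = np.array([[0.25, 0.25, 0.25, 0.25]])
saturated = np.array([[0.999, 0.0003, 0.0004, 0.0003]])

print(attention_entropy(healthy))     # ~1.386 (= ln 4, the maximum for 4 keys)
print(attention_entropy(saturated))   # close to 0

for name, w in [("healthy", healthy), ("saturated", saturated)]:
    if attention_entropy(w).min() < 0.1:   # illustrative threshold
        print(f"warning: {name} attention head may be saturating")
```

The hypothesis-first workflow described above amounts to attaching checks like this at the suspected failure point instead of sweeping hyperparameters blindly.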

Investment in Interpretability & Safety: The manual tracing movement aligns with—and fuels—the growing investment in AI interpretability and safety. Investors and corporate boards are increasingly wary of deploying opaque "black box" systems in regulated or safety-critical domains. Companies that can demonstrate a deeper understanding of their models' internals, often gained through these practices, have a competitive edge in sectors like healthcare, finance, and autonomous systems. Venture funding for startups focusing on AI explainability tools has surged, with many building on the principles of making model internals inspectable and traceable.

| Skill Area | Traditional Emphasis (2015-2020) | Emerging Emphasis (2021-2025) | Market Value Impact |
|---|---|---|---|
| Model Development | Proficiency with HF Transformers, Keras/PyTorch APIs | From-scratch layer implementation, arithmetic derivation | +30-50% salary premium for deep skill |
| Model Debugging | Logging, hyperparameter search, gradient clipping | Activation tracing, circuit analysis, manual gradient checking | Reduces debug time by up to 70% for complex issues |
| Architecture Innovation | Modifying existing architectures (e.g., adding LoRA) | Designing novel layers/blocks from first principles | Leads to patents & breakthrough models (e.g., Mamba, RWKV) |
| Educational Content | API tutorials, model fine-tuning guides | Mathematical walkthroughs, computational graph tracing | High engagement for fundamental content (e.g., Karpathy's lectures) |

Data Takeaway: The market is systematically rewarding deeper technical skills. The premium for engineers who understand the arithmetic underpinnings is rising sharply, as these skills directly translate to faster debugging, more innovative architectures, and the ability to tackle interpretability—key differentiators in a crowded market.

Risks, Limitations & Open Questions

Despite its benefits, the manual tracing movement faces significant challenges and carries inherent limitations.

Scalability Paradox: The most glaring issue is the tension between the practice and the scale of modern models. Manually tracing a 7-billion parameter model is physically impossible. Advocates argue the skill is developed on small models (millions of parameters) and the intuition transfers. However, emergent behaviors in large models may not appear in small ones, creating a gap. The practice risks becoming a form of "toy model" understanding that doesn't fully translate to frontier systems.

Opportunity Cost & Efficiency: Time spent manually tracing could be spent on other high-value activities like running experiments, reviewing literature, or engineering scalable systems. For product teams under pressure, mandating deep arithmetic exercises may slow initial progress. The key is finding the optimal balance—enough understanding to be effective, not so much that it hinders productivity.

Incomplete Picture: Manual tracing of the forward and backward passes captures the *deterministic* computation. It does not directly illuminate the *learning dynamics* during training—how billions of gradient steps organize these circuits. It also struggles with distributed, multi-GPU training intricacies. Furthermore, understanding the arithmetic does not automatically grant insight into the semantic meaning the network assigns to its internal representations.

Accessibility and Elitism: There's a risk that emphasizing this low-level, mathematically intensive skill could make the AI field less accessible. It could create a new barrier to entry, favoring those with strong linear algebra backgrounds and ample time for deep study, potentially reducing diversity of thought. The movement must be careful to position manual tracing as a powerful *tool* for some, not a *requirement* for all AI practitioners.

Open Questions:
1. Transfer of Intuition: Does deep intuition from tracing a 10-layer model truly scale to understanding a 1000-layer model's behavior?
2. Tooling Support: Can we build better tools that *augment* manual tracing rather than replace it? Interactive debuggers that show data flow and allow "stepping through" model execution?
3. Curriculum Integration: How should university programs and corporate training balance foundational arithmetic with practical engineering skills?
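The second open question can be made concrete: even without a framework, a hook registry that records intermediates while leaving every arithmetic step visible is only a few lines. An illustrative design sketch, not an existing tool:

```python
import numpy as np

class TraceHooks:
    """Tiny hook registry: records each named intermediate's shape."""
    def __init__(self):
        self.log = []

    def record(self, name, value):
        self.log.append((name, np.asarray(value).shape))
        return value

def forward(x, W1, W2, hooks):
    # Each step is wrapped so the trace augments, rather than hides, the math
    h = hooks.record("pre_act", x @ W1)
    h = hooks.record("act", np.maximum(h, 0.0))   # ReLU for brevity
    return hooks.record("out", h @ W2)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
W1 = rng.standard_normal((8, 32))
W2 = rng.standard_normal((32, 8))

hooks = TraceHooks()
forward(x, W1, W2, hooks)
for name, shape in hooks.log:
    print(name, shape)
# pre_act (2, 32)
# act (2, 32)
# out (2, 8)
```

Libraries like `TransformerLens` apply the same idea at scale; the point of the sketch is that the pattern itself is simple enough to live alongside a manual trace.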

AINews Verdict & Predictions

The manual tracing movement represents a necessary and healthy correction in AI's rapid evolution. It is a response to the field's own success—the complexity of our creations has outpaced the intuitive understanding of many creators. This is not a luddite rejection of progress but a sophisticated strategy to build a more solid foundation for the next leap.

Our editorial judgment is that this trend will intensify and become institutionalized within top-tier AI organizations over the next 2-3 years. We predict:

1. Standardized Benchmarks for Understanding: We will see the emergence of standardized "understanding benchmarks" alongside performance benchmarks. Just as MLPerf measures speed and accuracy, new benchmarks may measure a team's ability to explain, debug, and modify model internals. Interviews at leading labs will routinely include practical tests of deriving and coding model components.

2. The Rise of "Mechanistic" Software Engineering: A new sub-discipline of AI engineering will mature, focused on the tooling and methodologies for dissecting model computations. This will go beyond manual tracing to include sophisticated visualization, automated circuit discovery, and interactive debugging environments. Startups in this space will attract significant venture capital.

3. A Bifurcation in the Model Ecosystem: The market will split between "black box" API models (where users accept opacity for capability) and "white box" or "glass box" models where interpretability is a selling feature. The latter, enabled by deep internal understanding, will dominate regulated industries (finance, healthcare, law) and safety-critical applications. Companies like Anthropic are already positioning themselves here.

4. The Next Architectural Breakthrough Will Come from This Discipline: The successor to the transformer architecture will likely be invented by researchers deeply steeped in the computational trade-offs of current models, not by those merely applying them. The manual tracing practice builds the intuition needed to see beyond incremental modifications.

What to Watch Next: Monitor the output of research labs that emphasize interpretability, like Anthropic and OpenAI's Superalignment team. Watch for new open-source tools that lower the barrier to mechanistic analysis. Pay attention to hiring trends—if more job descriptions require "ability to derive gradients for attention mechanisms," the trend is solidifying. Finally, track the performance of models designed with interpretability in mind; if they match or exceed black-box models on benchmarks, the commercial case for deep understanding will become undeniable.

The ultimate takeaway is this: In the race to build artificial intelligence, we must not lose our own. Manual tracing is a discipline to preserve and deepen human understanding amidst the ascent of machine capability. It is the difference between being architects of intelligence and merely being its custodians.
