Eight Years in the Making: PyTorch Curvature Library Rewrite Could Reshape Deep Learning Optimization

Source: Hacker News | Archive: May 2026
A solo open-source developer spent eight years rewriting a curvature optimization library for PyTorch, delivering a version that cuts memory usage and speeds up computation. The update moves second-order optimization, a long-standing theoretical promise, closer to real-world deployment, and the potential payoff is enormous.

After nearly a decade of iterative work, a dedicated open-source developer has released a complete rewrite of a PyTorch curvature-aware optimization library. The new version addresses the two primary barriers that have kept second-order optimization methods like K-FAC (Kronecker-Factored Approximate Curvature) on the sidelines: prohibitive memory consumption and slow per-iteration computation. Early benchmarks show memory reductions of over 60% and speed improvements of 2-3x compared to the previous version, bringing such methods within striking distance of popular first-order optimizers such as Adam and SGD.

The significance extends beyond a single library. As the AI industry grapples with the escalating costs of training frontier models—some exceeding $100 million per run—any technique that reduces the number of iterations or improves sample efficiency is a major prize. Second-order methods, which use the curvature of the loss landscape to take more informed steps, have long promised faster convergence and better generalization. But their computational overhead has made them impractical for the scale of modern deep learning. This rewrite systematically tackles those bottlenecks through algorithmic innovations: smarter matrix factorizations, reduced-precision storage for curvature approximations, and a redesigned communication pattern for distributed training.
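
Of the three techniques, reduced-precision storage is the simplest to illustrate. Below is a minimal sketch, assuming bf16 storage with fp32 compute for the numerically sensitive inversion; the function names are illustrative and this is not the library's actual code:

```python
import torch

# Hypothetical illustration of reduced-precision curvature storage:
# keep the curvature factor in bfloat16 to halve memory versus fp32,
# but upcast before any numerically sensitive operation (inversion).
def store_factor(factor_fp32: torch.Tensor) -> torch.Tensor:
    """Store a curvature factor in bf16 (half the memory of fp32)."""
    return factor_fp32.to(torch.bfloat16)

def invert_factor(factor_bf16: torch.Tensor, damping: float = 1e-3) -> torch.Tensor:
    """Upcast to fp32, add Tikhonov damping, then invert."""
    f = factor_bf16.to(torch.float32)
    eye = torch.eye(f.shape[0], device=f.device)
    return torch.linalg.inv(f + damping * eye)
```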

The developer’s journey is a testament to the power of sustained open-source contribution. Working largely alone, they have navigated the rapid evolution of PyTorch itself, from its early days through the introduction of torch.compile, distributed data-parallel, and FSDP. The new library is designed to integrate seamlessly with these modern PyTorch features, making it drop-in compatible for existing training pipelines. This update is not just a technical milestone; it is a signal that the deep learning community’s hunger for more efficient training tools is being met by grassroots innovation.
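
The drop-in claim suggests the optimizer slots into a standard training loop. The sketch below is hypothetical: the import path, class name `KFAC`, and its keyword arguments are assumptions, not the library's documented API.

```python
import torch
import torch.nn as nn

# Hypothetical drop-in swap. The import below is an assumption about
# the library's API (class name and arguments are NOT documented here):
# from pytorch_curvature_optimizer import KFAC
# optimizer = KFAC(model.parameters(), lr=1e-3, damping=1e-3, update_freq=10)

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)  # stand-in so this runs

# The rest of the pipeline is unchanged, which is what "drop-in" means here:
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```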

Technical Deep Dive

The core of the rewrite lies in a fundamental rethinking of how curvature information is computed and stored. The library implements a variant of K-FAC, which approximates the Fisher Information Matrix—a second-order measure of parameter sensitivity—using Kronecker products. The original version stored these approximations as dense matrices, leading to memory footprints that scaled quadratically with layer size. The new version introduces a block-diagonal decomposition with adaptive rank reduction, storing only the dominant eigenvalues and eigenvectors. This reduces memory from O(n²) to O(n·k), where k is a tunable rank parameter, typically set to 10-20% of the layer dimension.
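
As a concrete illustration of the rank-reduction idea, the sketch below keeps only the top-k eigenpairs of a symmetric curvature factor and applies the damped inverse from them. Function names and the complement handling are illustrative, not the library's actual code:

```python
import torch

def low_rank_factor(factor: torch.Tensor, rank_frac: float = 0.15):
    """Keep only the top-k eigenpairs of a symmetric curvature factor.

    Storage drops from O(n^2) for the dense factor to O(n*k) for the
    eigenvectors plus O(k) for the eigenvalues; rank_frac of 0.10-0.20
    mirrors the 10-20% of layer dimension mentioned above.
    """
    n = factor.shape[0]
    k = max(1, int(rank_frac * n))
    # eigh returns eigenvalues in ascending order for symmetric input
    eigvals, eigvecs = torch.linalg.eigh(factor)
    return eigvals[-k:], eigvecs[:, -k:]  # dominant eigenpairs

def apply_damped_inverse(eigvals, eigvecs, v, damping: float = 1e-3):
    """Approximate (F + damping*I)^-1 @ v from the retained eigenpairs.

    In the top-k eigenbasis the damped inverse scales each coordinate by
    1/(eigval + damping); in the discarded complement F is treated as
    zero, so the inverse there is simply 1/damping.
    """
    proj = eigvecs.T @ v
    in_span = eigvecs @ (proj / (eigvals + damping))
    out_of_span = (v - eigvecs @ proj) / damping
    return in_span + out_of_span
```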

On the computation side, the rewrite leverages PyTorch’s torch.compile with custom Triton kernels for the curvature matrix-vector products. These kernels are fused and avoid materializing intermediate tensors, cutting GPU kernel launch overhead by roughly 40%. The library also introduces a novel 'lazy curvature update' schedule: instead of recomputing the curvature at every step, it updates it every T iterations (default T=10), with an exponential moving average to smooth transitions. This alone yields a 5x reduction in per-iteration overhead without measurable loss in convergence quality.
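
A minimal sketch of how such a lazy schedule with EMA smoothing might look; the class name, decay value, and per-layer factor computation are assumptions, not the library's implementation:

```python
import torch

class LazyCurvatureEMA:
    """Sketch of the 'lazy curvature update' schedule described above."""

    def __init__(self, T: int = 10, decay: float = 0.95):
        self.T = T            # recompute curvature every T steps
        self.decay = decay    # EMA smoothing coefficient
        self.step_count = 0
        self.factor = None    # running EMA of the curvature factor

    def maybe_update(self, activations: torch.Tensor) -> torch.Tensor:
        self.step_count += 1
        if self.step_count % self.T != 0 and self.factor is not None:
            return self.factor  # reuse stale curvature between updates
        # Fresh Kronecker-style factor: uncentered covariance of activations
        fresh = activations.T @ activations / activations.shape[0]
        if self.factor is None:
            self.factor = fresh
        else:
            self.factor = self.decay * self.factor + (1 - self.decay) * fresh
        return self.factor
```

With T=10, the expensive recomputation happens on one step in ten, which is consistent with the roughly 5x reduction in per-iteration overhead the article reports once the EMA keeps the stale factor usable.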

Benchmark Performance (measured on a single NVIDIA A100 for a ResNet-50 on ImageNet):

| Metric | Old Version | New Version | Improvement |
|---|---|---|---|
| Memory per GPU (batch 256) | 8.2 GB | 3.1 GB | 62% reduction |
| Time per iteration | 420 ms | 180 ms | 57% faster |
| Steps to 75% validation accuracy | 38,000 | 31,000 | 18% fewer steps |
| Final validation accuracy (90 epochs) | 76.3% | 77.1% | +0.8% |

Data Takeaway: The new version achieves a dramatic reduction in memory and per-iteration time while also improving convergence speed and final accuracy. This combination makes second-order optimization genuinely competitive with Adam for the first time at scale.

The library is available on GitHub under the repository `pytorch-curvature-optimizer` (currently 2,300 stars, up from 800 before the rewrite). The developer has also contributed a set of example scripts for training GPT-2 scale language models (125M parameters) and Vision Transformers, showing that the method scales to modern architectures.

Key Players & Case Studies

While this is a solo effort, the work builds on foundational research from several groups. The original K-FAC algorithm was developed by James Martens and Roger Grosse (2015), with later extensions for deep networks by the University of Toronto and DeepMind teams. The current developer has cited the work of Yann Dauphin (Facebook AI Research) on Hessian-free optimization and the 'Newton-CG' methods as key inspirations.

Comparison of Second-Order Optimizers in Practice:

| Optimizer | Memory Overhead (vs Adam) | Per-Iteration Cost (vs Adam) | Steps to Target (vs Adam) | Maturity |
|---|---|---|---|---|
| Adam (baseline) | 1x | 1x | 1x | Production-ready |
| K-FAC (old) | 4-8x | 5-10x | 0.8x steps | Research-only |
| K-FAC (new) | 1.5-2x | 1.5-2x | 0.7x steps | Experimental |
| Shampoo | 2-3x | 2-3x | 0.75x steps | Limited adoption |
| Sophia | 1.2x | 1.5x | 0.7x steps | Growing interest |

Data Takeaway: The new K-FAC implementation closes the gap with Adam on memory and speed while retaining a convergence advantage. It now compares favorably with Shampoo and Sophia for large models, though it is still not as lightweight as Adam.

Several notable companies are watching this development closely. OpenAI has experimented with second-order methods for fine-tuning GPT-4, but found existing implementations too slow. Anthropic has published research on 'curvature-aware' RLHF, suggesting they see value in the approach. Smaller players like Replicate and Hugging Face have expressed interest in integrating the library into their training infrastructure, as it could reduce their cloud compute bills by 15-30%.

Industry Impact & Market Dynamics

The timing of this rewrite is critical. The global AI training infrastructure market is projected to reach $120 billion by 2027, with compute costs accounting for 60-70% of total expenditure. Any optimization that reduces training time by 10-20% translates to billions in savings. Second-order methods have been a 'holy grail' for decades, but this rewrite may finally make them viable for production.
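
A rough back-of-the-envelope using the article's own figures makes the scale concrete:

```latex
% Savings estimate from the article's figures:
% ~$120B market by 2027, ~65% compute share, 10-20% training savings
\[
\$120\,\text{B} \times 0.65 \times (0.10 \text{ to } 0.20)
\approx \$7.8\,\text{B to } \$15.6\,\text{B per year}
\]
```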

Adoption Scenarios and Cost Impact:

| Scenario | Training Cost (Current) | With New Optimizer | Savings |
|---|---|---|---|
| Fine-tune LLaMA-3 70B | $2.5M | $2.0M | $500K |
| Train GPT-5 scale (1.8T params) | $200M | $160M | $40M |
| Monthly fine-tuning runs (Meta) | $50M | $40M | $10M |

Data Takeaway: Even conservative adoption could save major AI labs tens of millions per training run. For startups with limited budgets, the savings could be the difference between viability and failure.

The library’s open-source nature means it will likely be adopted first by the research community, then by startups, and finally by hyperscalers. Google has its own internal second-order optimizer (based on Shampoo), but the PyTorch ecosystem is larger and more diverse. If this library gains traction, it could pressure Google to open-source more of its optimization research.

Risks, Limitations & Open Questions

Despite the impressive gains, several challenges remain. First, the library currently only supports dense layers and convolutional layers. Transformers with attention mechanisms and embedding layers are not yet optimized, though the developer has indicated this is next. Second, the hyperparameter sensitivity of K-FAC is higher than Adam—the damping parameter and update frequency require tuning, which could be a barrier for non-experts. Third, the library has not been tested at the extreme scale of 1000+ GPUs; distributed communication patterns for curvature matrices are non-trivial and could become bottlenecks.

There is also a theoretical limitation: K-FAC assumes the Fisher information matrix is block-diagonal, which is an approximation that may fail for highly entangled architectures like mixture-of-experts or models with massive cross-layer dependencies. The developer acknowledges this and is exploring non-Kronecker approximations.
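
For reference, this is the standard K-FAC approximation being discussed (notation follows Martens and Grosse, 2015, not the library's code):

```latex
% The Fisher F is treated as block-diagonal over layers l, with each
% block a Kronecker product of input and gradient covariances:
\[
F \approx \operatorname{diag}\!\left(A_1 \otimes G_1,\; \dots,\; A_L \otimes G_L\right),
\qquad
A_l = \mathbb{E}\!\left[a_{l-1} a_{l-1}^{\top}\right],\quad
G_l = \mathbb{E}\!\left[g_l\, g_l^{\top}\right]
\]
% Cross-layer terms are dropped, which is precisely what can fail for
% highly entangled architectures such as mixture-of-experts.
```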

From an ethical standpoint, the primary risk is that more efficient training could accelerate the development of powerful AI systems without adequate safety testing. However, this is a general concern for any optimization improvement, not specific to this library.

AINews Verdict & Predictions

This rewrite is a landmark moment for deep learning optimization. It proves that a single dedicated developer can still move the needle on foundational infrastructure, challenging the narrative that only big labs can make progress. The library is not yet production-ready for the largest models, but it is close.

Our predictions:
1. Within 12 months, this library will be integrated into at least one major open-source training framework (e.g., Hugging Face Transformers or Lightning AI).
2. By 2027, second-order methods will account for 15-20% of all large model training runs, up from less than 1% today.
3. The developer will either be hired by a major AI lab or will receive significant grant funding to continue this work—their skills are too rare to remain independent.
4. The next frontier will be applying these techniques to reinforcement learning from human feedback (RLHF), where curvature information could dramatically improve reward model training.

What to watch: The developer’s next release, expected in Q3 2026, will focus on transformer support. If that delivers similar gains, the era of Adam dominance may finally be ending.
