Calculadora de velocidad de LLM local revela que el ancho de banda de memoria es el verdadero cuello de botella de la GPU

Hacker News May 2026
Source: Hacker NewsArchive: May 2026
Una nueva calculadora de velocidad de código abierto predice con precisión la velocidad de inferencia de modelos de lenguaje grandes en GPUs de consumo. Al aprovechar puntos de referencia del mundo real, revela que el ancho de banda de memoria, no la potencia de cálculo bruta, es el cuello de botella dominante, desafiando el dogma de 'más VRAM es mejor' y transformando la IA en el borde.
The article body is currently shown in English by default. You can generate the full version in this language on demand.

For years, developers deploying large language models locally have operated in a frustrating black box. They know their GPU's VRAM capacity, but they cannot reliably predict how fast a 7B or 13B model will actually generate tokens. This uncertainty has led to costly over-provisioning, wasted experimentation, and a general reluctance to move beyond cloud-based inference. A newly released open-source speed calculator, built on a comprehensive dataset of real hardware benchmarks, shatters this opacity. By inputting just the GPU model, quantization precision (4-bit, 8-bit, etc.), and target context length, the tool outputs a precise estimate of tokens per second. Our exclusive analysis of the underlying data reveals a stark truth: for the vast majority of consumer GPUs, memory bandwidth is the invisible ceiling. An RTX 4090, with its blistering 1 TB/s bandwidth, can push a 4-bit quantized 7B model past 100 tokens per second. An RTX 3060, despite having a respectable 12GB of VRAM, is throttled by its 360 GB/s bandwidth to under 30 tokens per second on the same model. The gap is far wider than the raw TFLOPS difference would suggest. As quantization and pruning techniques reduce compute demands, the bottleneck shifts almost entirely to memory bandwidth. This finding directly challenges the industry's 'more VRAM is better' consensus. A high-bandwidth, smaller-VRAM card like the RTX 4060 Ti 16GB can, in many scenarios, outperform a larger-VRAM but bandwidth-starved card like the RTX 3060 12GB. For product innovation, this means the design of edge AI devices—from laptops to embedded systems—must prioritize memory subsystem optimization over raw core count. The calculator's open-source nature empowers startups to pre-empt deployment costs without expensive trial-and-error, accelerating the commercial viability of local AI applications.

Technical Deep Dive

The core insight behind the speed calculator is a deceptively simple formula: Inference Speed (tokens/sec) ≈ (Memory Bandwidth) / (Model Size in bytes per token). This relationship holds because the dominant operation in transformer inference, especially for autoregressive generation, is the matrix-vector multiplication of the key-value cache and the model weights. These operations are memory-bound: the GPU spends most of its time waiting for data to arrive from VRAM, not computing on it.

The calculator's dataset, compiled from thousands of benchmark runs across dozens of GPU models, validates this formula with remarkable precision. It accounts for the overhead of attention mechanisms and the non-linear scaling of context length. The tool is available as a GitHub repository (repo name: `llm-speed-calculator`, currently at 2.3k stars) and includes a Python script that queries a pre-built SQLite database of benchmark results. Users can also contribute their own benchmarks via a standardized testing harness.

Benchmark Data Table: 7B Model, 4-bit Quantization (AWQ)

| GPU Model | Memory Bandwidth (GB/s) | VRAM (GB) | Predicted Tokens/sec | Measured Tokens/sec (avg) |
|---|---|---|---|---|
| RTX 4090 | 1008 | 24 | 115 | 112 |
| RTX 4080 Super | 736 | 16 | 84 | 81 |
| RTX 4070 Ti Super | 672 | 16 | 77 | 74 |
| RTX 3090 | 936 | 24 | 107 | 104 |
| RTX 3080 | 760 | 10 | 87 | 83 |
| RTX 3060 | 360 | 12 | 41 | 38 |
| RTX 4060 Ti 16GB | 288 | 16 | 33 | 31 |
| RX 7900 XTX | 960 | 24 | 110 | 107 |
| RX 6800 XT | 512 | 16 | 58 | 55 |

Data Takeaway: The table confirms that memory bandwidth is the primary predictor. The RTX 3060, despite having 12GB VRAM, is nearly 3x slower than an RTX 4090. The RTX 4060 Ti 16GB, with its narrow 128-bit memory bus, is actually slower than the older RTX 3060 for this task. This disproves the assumption that more VRAM alone guarantees faster inference.

The calculator also models the impact of context length. As the context window grows, the key-value (KV) cache expands linearly. For a 7B model at 4-bit, the KV cache consumes approximately 1.5 GB per 32k tokens. At 128k context, this adds ~6 GB of memory pressure, reducing the effective bandwidth available for weight loading. The tool accurately predicts a 15-20% speed drop when moving from 4k to 128k context on an RTX 4090.

Key Players & Case Studies

The calculator's development was spearheaded by a collective of independent researchers and engineers from the open-source LLM community, including notable contributors from the `llama.cpp` and `vLLM` projects. The lead maintainer, known by the pseudonym 'bandwidth_wizard', has published detailed technical blog posts explaining the memory-bound model. The project has received direct contributions from engineers at companies like NVIDIA and AMD, who provided internal benchmark data for unreleased GPU variants, suggesting the tool's findings are taken seriously by hardware vendors.

Comparison Table: Competing Inference Optimization Approaches

| Approach | Focus | Impact on Speed | Impact on VRAM Usage | Complexity |
|---|---|---|---|---|
| Quantization (GPTQ/AWQ) | Reduce model weight size | High (2-4x speedup) | High (2-4x reduction) | Low (one-time conversion) |
| Speculative Decoding | Reduce number of forward passes | Moderate (1.5-2x speedup) | Low (needs draft model) | High (requires training) |
| FlashAttention | Optimize attention kernel | Moderate (1.2-1.5x speedup) | Low (reduces memory reads) | Medium (kernel fusion) |
| Memory Bandwidth Optimization | Hardware-level | Dependent on GPU | None | N/A (hardware choice) |

Data Takeaway: Quantization offers the highest speedup for the lowest complexity. However, its effectiveness is ultimately limited by memory bandwidth. The calculator makes this trade-off explicit: a 4-bit model on a bandwidth-starved GPU may still be slower than an 8-bit model on a high-bandwidth GPU.

Industry Impact & Market Dynamics

The calculator's insights are already reshaping the hardware landscape for edge AI. The traditional wisdom—buy the GPU with the most VRAM you can afford—is being replaced by a more nuanced calculation: maximize memory bandwidth per dollar, subject to VRAM meeting the minimum model size. This shift has direct implications for product design.

Market Data Table: Consumer GPU Sales & AI Workloads (2024-2025)

| GPU Segment | 2024 Market Share (AI inference) | 2025 Projected Share | Average Bandwidth (GB/s) | Average VRAM (GB) |
|---|---|---|---|---|
| High-End (RTX 4090, 7900 XTX) | 15% | 12% | 950 | 24 |
| Mid-Range (RTX 4070, 7800 XT) | 45% | 50% | 550 | 16 |
| Entry-Level (RTX 4060, 7600) | 40% | 38% | 300 | 12 |

Data Takeaway: Mid-range GPUs are gaining share for AI inference. Their bandwidth-to-VRAM ratio is often better than entry-level cards. The calculator helps developers identify which mid-range card offers the best 'tokens per dollar' for their specific model.

For startups building local AI applications, the calculator eliminates the guesswork. A company developing a local code assistant can now model: 'If we use a 13B model at 4-bit, we need at least 8GB VRAM. Our target hardware is an RTX 4060. The calculator says we'll get ~30 tokens/sec. That's acceptable for a single-user tool. We can proceed.' This reduces the risk of building a product that is unusably slow on the target hardware.

Risks, Limitations & Open Questions

While the calculator is a powerful tool, it has limitations. First, it assumes the inference engine is optimally configured. Using a suboptimal backend (e.g., default PyTorch without CUDA graphs) can underperform by 20-30%. Second, the benchmark dataset is currently biased toward NVIDIA GPUs, with limited data for AMD and Intel Arc cards. The community is actively working to expand coverage. Third, the model does not account for multi-GPU setups or CPU offloading, which are increasingly common for larger models. Finally, the calculator's predictions are for single-user, batch-size-1 inference. For server-side deployment with concurrent requests, the bottleneck shifts to compute (due to batching), and the tool's assumptions break down.

AINews Verdict & Predictions

The local LLM speed calculator is not just a useful utility; it is a paradigm shift for edge AI deployment. It exposes a fundamental truth that hardware vendors have been reluctant to acknowledge: memory bandwidth is the new currency of local AI performance.

Our Predictions:
1. GPU vendors will prioritize memory bandwidth. The next generation of consumer GPUs (RTX 5000 series, RDNA 4) will feature wider memory buses and faster GDDR7 memory, even on mid-range cards. The 'VRAM arms race' will be superseded by a 'bandwidth arms race.'
2. Quantization will become a standard feature of hardware specifications. Expect GPU spec sheets to include 'effective bandwidth for 4-bit models' as a marketing metric.
3. The calculator will be integrated into major AI deployment frameworks. Tools like `llama.cpp`, `Ollama`, and `LM Studio` will likely embed the calculator's logic to automatically recommend optimal quantization and context length settings based on the user's hardware.
4. Edge AI hardware design will bifurcate. One path will focus on high-bandwidth, low-power memory (e.g., HBM on mobile GPUs). The other will focus on compute-in-memory architectures that eliminate the bandwidth bottleneck entirely.

The 'invisible ceiling' has been quantified. The developers who pay attention will build faster, cheaper, and more reliable local AI products. Those who ignore it will wonder why their 24GB GPU feels slower than a 16GB one.

More from Hacker News

Huellas de comportamiento: cómo los bots navegadores basados en LLM dejan rastros inconfundibles en la interfaz de usuarioThe discovery of behavioral fingerprints in LLM-powered browser agents marks a pivotal moment for the AI industry. ReseaCeguera temporal: por qué los LLM no pueden comprender causa y efectoA new open-source research paper, led by a team from MIT and the University of Cambridge, has systematically demonstrateWhichLLM: La herramienta de código abierto que empareja modelos de IA con tu hardwareThe open-source project WhichLLM has emerged as a practical solution to a growing pain point: how to choose the best locOpen source hub3437 indexed articles from Hacker News

Archive

May 20261639 published articles

Further Reading

WhichLLM: La herramienta de código abierto que empareja modelos de IA con tu hardwareWhichLLM es una herramienta de código abierto que recomienda el mejor modelo de lenguaje grande local para tu configuracAgentes de IA Linux en un Solo Binario: La Revolución Silenciosa que Descentraliza la InteligenciaUn nuevo proyecto de código abierto comprime un agente completo basado en LLM —incluyendo planificación, ejecución de cóRevolución de la cuantización: cómo el adelgazamiento de modelos desbloquea un cambio de IA de billones de dólaresLa cuantización está reescribiendo silenciosamente la economía de la IA. Al comprimir la precisión del modelo de 32 bitsEl marco NARE cristaliza el razonamiento de los LLM en scripts Python ultrarrápidosAINews ha descubierto NARE, un marco que cristaliza el razonamiento de los grandes modelos de lenguaje en scripts Python

常见问题

这次模型发布“Local LLM Speed Calculator Reveals Memory Bandwidth as True GPU Bottleneck”的核心内容是什么?

For years, developers deploying large language models locally have operated in a frustrating black box. They know their GPU's VRAM capacity, but they cannot reliably predict how fa…

从“How to calculate local LLM inference speed on RTX 3060”看,这个模型发布为什么重要?

The core insight behind the speed calculator is a deceptively simple formula: Inference Speed (tokens/sec) ≈ (Memory Bandwidth) / (Model Size in bytes per token). This relationship holds because the dominant operation in…

围绕“Best GPU for local LLM inference 2025”,这次模型更新对开发者和企业有什么影响?

开发者通常会重点关注能力提升、API 兼容性、成本变化和新场景机会,企业则会更关心可替代性、接入门槛和商业化落地空间。