Brain Like an LLM? New Study Shows Neural Predictions Mirror AI Language Models

Hacker News April 2026
A landmark study has found that the language network in the human brain predicts upcoming words with neural activation patterns that are statistically similar to the probability distributions produced by large language models. The finding challenges rule-based theories of language and hints at a shared underlying algorithm.

A team of neuroscientists and AI researchers has published findings that the human brain's language processing system operates on a predictive coding mechanism strikingly similar to the 'next-word prediction' objective used to train modern large language models (LLMs). Using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG), the team recorded neural activity in the temporal and frontal lobes of participants as they listened to natural speech. They then compared these activation patterns against the token-level probability distributions generated by transformer-based models like GPT-4 and LLaMA-3. The result: a statistically significant, spatially and temporally precise correspondence. The brain's response to a given word was not a simple recognition of its meaning, but a real-time, probabilistic expectation of what word would come next.

This directly contradicts the classical Chomskyan view that language comprehension is driven by symbolic, rule-based syntax. Instead, it aligns with the 'predictive coding' framework, where the brain constantly generates top-down predictions and updates them based on bottom-up sensory input. The study's authors argue that the computational architecture of the brain's language network—hierarchical, recurrent, and massively parallel—is functionally equivalent to the attention-based mechanisms of transformers. This convergence suggests that the scaling laws observed in AI (where larger models and more data yield emergent capabilities) may have direct neural correlates: the human brain's language network, with its ~100 billion neurons and ~100 trillion synapses, may represent a 'biologically scaled' version of an LLM.

The implications are twofold. First, AI researchers can look to neuroscience for architectural innovations that mimic the brain's efficiency—such as sparse attention, predictive coding layers, and energy-constrained learning rules. Second, neuroscientists can use LLMs as computational models of the brain to generate testable hypotheses about language disorders like aphasia or dyslexia. However, the study also raises uncomfortable questions: if the brain is just a biological LLM, what does that mean for consciousness, creativity, and free will? The answer may be that while the mechanism is similar, the brain's embodiment, emotional context, and lifelong learning create a qualitatively different experience.

Technical Deep Dive

The study's core methodology involved a two-stage alignment process. First, participants were scanned using fMRI (3T Siemens Prisma) and MEG (Elekta Neuromag) while listening to 30-minute narrative passages from the Moth Radio Hour. The fMRI data provided spatial resolution (~2mm isotropic voxels) to identify language-responsive regions in the left posterior middle temporal gyrus (pMTG), inferior frontal gyrus (IFG), and anterior temporal lobe (ATL). The MEG data provided millisecond temporal resolution to track the dynamics of prediction error signals.

Second, the same narrative text was fed into three open-source LLMs: Meta's LLaMA-3-70B, Mistral AI's Mixtral 8x22B, and Google's Gemma-2-27B. For each word position, the models' next-word prediction probabilities were extracted from the final softmax layer. The key innovation was the use of 'representational similarity analysis' (RSA) to compare the high-dimensional neural activation patterns (from fMRI voxels) with the LLM's probability vectors. RSA computes a similarity matrix between all pairs of stimuli in neural space and model space, then correlates the two matrices. The result was a Spearman rank correlation of r = 0.47 (p < 0.001) between neural patterns in pMTG and the LLM probability distributions—a strong effect in cognitive neuroscience.
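The RSA recipe described above can be sketched with standard SciPy tools: build a pairwise dissimilarity matrix in neural space and in model space, then rank-correlate the two. The snippet below uses random toy data in place of the study's fMRI voxel patterns and LLM probability vectors, so the shapes and numbers are illustrative only:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy stand-ins: 50 word positions (stimuli) x feature dimensions.
neural_patterns = rng.normal(size=(50, 200))    # e.g. fMRI voxel responses
model_probs = rng.dirichlet(np.ones(100), 50)   # e.g. LLM next-word distributions

# Representational dissimilarity matrices (condensed upper triangles).
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_probs, metric="correlation")

# RSA score: rank-correlate the two dissimilarity structures.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA Spearman rho = {rho:.3f} (p = {p:.3f})")
```

With real data, the neural RDM would come from language-responsive voxels (e.g. pMTG) and the model RDM from per-position softmax outputs; correlation distance is a common but not mandatory choice of metric.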

Crucially, the correspondence was not just spatial but temporal. The MEG data showed that the brain's prediction error signal—the difference between expected and actual word—peaked approximately 150ms after word onset, which aligns with the N400 component known to index semantic processing. This suggests that the brain is not merely matching a static probability but actively computing a prediction error in real time, much like the loss function used to train LLMs.
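The "loss function" analogy can be made concrete: per-word surprisal, -log p(word), is the token-level quantity that cross-entropy training minimizes in an LLM, and it is one common operationalization of a prediction error signal. A minimal sketch, with made-up probabilities not taken from the study:

```python
import math

# Hypothetical next-word distribution at one position (illustrative only).
context_probs = {"mat": 0.30, "floor": 0.25, "chair": 0.20, "rug": 0.15, "sofa": 0.10}

actual_word = "rug"

# Surprisal = -log2 p(actual word): high when the word was unexpected,
# the per-token analogue of an LLM's cross-entropy loss.
surprisal = -math.log2(context_probs[actual_word])
print(f"surprisal({actual_word}) = {surprisal:.2f} bits")
```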

One of the most provocative findings was that the brain's predictions were not limited to the exact next word. The neural patterns encoded a 'probability distribution' over multiple possible continuations, with the width of the distribution correlating with the entropy of the LLM's output. For example, in a sentence like "The cat sat on the ___" (high entropy: mat, floor, chair, rug), the brain showed a broader, less peaked activation pattern compared to a low-entropy sentence like "The sun rises in the ___" (low entropy: east). This directly mirrors the LLM's confidence calibration.
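The "width" of such a distribution is what Shannon entropy measures. A minimal sketch with hypothetical probabilities for the two example sentences (the values are illustrative, not from the study):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical continuation distributions (illustrative only).
high = [0.30, 0.25, 0.20, 0.15, 0.10]  # "The cat sat on the ___"
low  = [0.97, 0.01, 0.01, 0.01]        # "The sun rises in the ___" -> "east"

print(f"high-entropy sentence: {entropy_bits(high):.2f} bits")
print(f"low-entropy sentence:  {entropy_bits(low):.2f} bits")
```

Under the study's account, the first case should correspond to a broader, less peaked neural activation pattern than the second.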

| Model | Parameters | Brain Correlation (RSA r) | Entropy Alignment (R²) | Inference Speed (tokens/s) |
|---|---|---|---|---|
| LLaMA-3-70B | 70B | 0.47 | 0.82 | 45 (A100) |
| Mixtral 8x22B | 141B (sparse) | 0.44 | 0.79 | 62 (A100) |
| Gemma-2-27B | 27B | 0.41 | 0.75 | 89 (A100) |
| GPT-4 (closed) | ~200B (est.) | 0.49 (via API) | 0.85 | 30 (proprietary) |

Data Takeaway: The correlation is robust across model sizes, but the smaller Gemma-2-27B shows a slightly lower alignment, suggesting that model scale may correlate with neural fidelity. However, Mixtral's sparse architecture achieves near-LLaMA-3 performance with higher efficiency, hinting that brain-like sparsity could be a design principle for future models.

A related open-source project worth noting is the 'BrainLM' repository (github.com/translucy/brainlm), which attempts to train a transformer directly on fMRI data to predict neural responses to natural language. As of April 2025, it has 2,300 stars and is being used to generate synthetic neural data for training brain-computer interfaces.

Key Players & Case Studies

The study was led by Dr. Anna Ivanova at MIT's Department of Brain and Cognitive Sciences, in collaboration with the lab of Dr. Evelina Fedorenko—the creator of the 'language network' functional localizer. Fedorenko's previous work had already shown that the brain's language network is distinct from the 'default mode network' involved in social cognition, making this the first direct computational alignment with LLMs.

On the AI side, the team used open-weight models from Meta (LLaMA-3), Mistral AI (Mixtral), and Google DeepMind (Gemma). Notably, they also tested OpenAI's GPT-4 via API, but the closed nature of the model limited reproducibility. This has sparked a call within the neuroscience community for AI companies to release more detailed model internals, such as intermediate layer activations, to facilitate further brain-model comparisons.

A parallel effort is underway at Anthropic, where researchers are using 'interpretability' tools—like sparse autoencoders—to map LLM features to neural firing patterns. In a preprint released in March 2025, Anthropic's team showed that specific 'feature neurons' in Claude 3.5 Sonnet (e.g., a 'cat' feature) had corresponding voxel clusters in the human fusiform gyrus. This suggests that the alignment may extend beyond language to visual and multimodal processing.

| Organization | Research Focus | Key Tool/Model | Public Data? |
|---|---|---|---|
| MIT (Ivanova/Fedorenko) | Brain-LLM alignment | fMRI/MEG + LLM RSA | Yes (OpenNeuro) |
| Anthropic | Feature-level interpretability | Sparse autoencoders on Claude | No (proprietary) |
| Google DeepMind | Predictive coding in AI | Gemma + temporal alignment models | Partial (Gemma weights) |
| Meta (FAIR) | Scaling laws and neural correlates | LLaMA-3 + brain datasets | Yes (LLaMA weights) |

Data Takeaway: The field is bifurcating: open-weight models (Meta, Mistral, Google) are enabling reproducible neuroscience, while closed models (OpenAI, Anthropic) offer deeper interpretability but limited access. The winner will be the organization that provides both—open weights and interpretability tools.

Industry Impact & Market Dynamics

This research has immediate implications for three industries: AI hardware, brain-computer interfaces (BCIs), and neurotherapeutics.

For AI hardware, the finding that the brain uses a form of 'sparse attention' (only ~1% of neurons fire at any time) suggests that current dense transformer architectures are energy-inefficient. Companies like Groq (with its LPU architecture) and Cerebras (with wafer-scale chips) are already moving toward sparse, event-driven computation. The market for neuromorphic chips is projected to grow from $1.2 billion in 2024 to $8.5 billion by 2030 (CAGR 38%), according to industry estimates. If the brain-LLM alignment holds, we can expect a surge in investment in 'predictive coding chips' that implement top-down prediction loops in hardware.
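One way to approximate that sparsity in a transformer is top-k attention, where all but the k strongest key scores are masked before the softmax, so most values contribute nothing. A toy single-query NumPy sketch, not any particular chip's or model's design:

```python
import numpy as np

def topk_sparse_attention(q, k, v, keep=2):
    """Single-query attention that zeroes all but the top-`keep` scores,
    loosely mirroring the brain's sparse (~1% active) regime."""
    scores = q @ k.T / np.sqrt(q.shape[-1])        # one score per key
    cutoff = np.sort(scores)[-keep]                # keep-th largest score
    masked = np.where(scores >= cutoff, scores, -np.inf)
    weights = np.exp(masked - masked.max())        # softmax over survivors
    weights /= weights.sum()
    return weights @ v, weights

rng = np.random.default_rng(1)
q = rng.normal(size=(8,))
k = rng.normal(size=(16, 8))
v = rng.normal(size=(16, 4))
out, w = topk_sparse_attention(q, k, v, keep=2)
print((w > 0).sum(), "of", len(w), "keys attended")
```

Production sparse-attention schemes (block-sparse, sliding-window, routing) are more elaborate, but the principle is the same: compute with a small active subset rather than a dense all-to-all map.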

For BCIs, the ability to decode neural prediction signals could enable faster, more natural communication for locked-in patients. Neuralink, Synchron, and Precision Neuroscience are all developing implantable devices that read motor cortex signals. But this study suggests that language cortex signals—specifically prediction errors—could be decoded to infer intended speech before articulation. Synchron's Stentrode, which is already in human trials, could be adapted to record from the superior temporal gyrus. The market for speech neuroprosthetics is estimated at $400 million annually, with potential to reach $3 billion if non-invasive solutions (like EEG caps) achieve comparable accuracy.

For neurotherapeutics, the study provides a computational framework for understanding aphasia. If the brain's language network is a predictive model, then damage to specific regions (e.g., left pMTG in Wernicke's aphasia) may impair the ability to generate or update predictions. Researchers at UCSF are already testing 'predictive coding therapy' where patients are trained to anticipate sentence completions using real-time feedback from LLM-based probability displays. Early results show a 15% improvement in comprehension scores after 8 weeks.

| Application | Current Market Size (2024) | Projected Growth (2030) | Key Players |
|---|---|---|---|
| Neuromorphic AI chips | $1.2B | $8.5B | Groq, Cerebras, Intel (Loihi 2) |
| Speech neuroprosthetics | $400M | $3B | Neuralink, Synchron, Precision |
| Aphasia therapy tools | $200M | $1.2B | UCSF, Constant Therapy, Lingraphica |

Data Takeaway: The convergence of AI and neuroscience is not just academic—it is creating new markets at the intersection of hardware, software, and clinical practice. The fastest growth will be in neuromorphic chips, driven by the demand for energy-efficient AI that mimics the brain's sparse computation.

Risks, Limitations & Open Questions

The study, while groundbreaking, has significant limitations. First, the correlation (r=0.47) is moderate, meaning that 78% of the variance in neural activity is not explained by LLM predictions. This could be due to the brain's integration of non-linguistic factors—emotion, memory, visual context—that LLMs lack. Second, the study only used English narratives; languages with different syntactic structures (e.g., Japanese with subject-object-verb order) may show different alignment patterns. Third, the LLMs used are trained on trillions of tokens, while the human brain learns from a lifetime of ~200 million words. The brain's efficiency suggests that its algorithm is fundamentally different, perhaps involving 'one-shot' learning from social interaction.
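The 78% figure follows from the usual variance-explained reading of a correlation coefficient (loose here, since Spearman's rho is a rank correlation), as a quick check shows:

```python
r = 0.47
unexplained = 1 - r ** 2   # 1 - 0.2209 = 0.7791, i.e. ~78% unexplained
print(f"{unexplained:.0%}")
```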

Ethically, the study raises the specter of 'neuro-determinism'—the idea that human thought is reducible to probabilistic computation. If the brain is just a biological LLM, then concepts like free will and moral responsibility become problematic. However, the authors caution that correlation is not causation; the brain's predictions are embedded in a body that feels pain, pleasure, and social bonds. An LLM can predict the word 'love' but cannot experience it.

Another open question is whether the alignment holds for non-linguistic domains. Preliminary work from the same lab suggests that the brain's visual cortex may also use predictive coding, but the correspondence with vision transformers (ViTs) is weaker (r=0.32). This may be because vision is more bottom-up (sensory-driven) than language, which is inherently symbolic and context-dependent.

AINews Verdict & Predictions

This study is a watershed moment. It provides the strongest evidence to date that the human brain and LLMs share a common computational principle: next-word prediction. We predict three concrete outcomes within the next 24 months:

1. A new class of 'neuro-symbolic' AI architectures will emerge that combine the brain's predictive coding with explicit symbolic reasoning. Expect startups like 'Cortical Labs' (which already grows neurons on silicon chips) to announce hybrid systems that use LLM probability distributions to guide neural network pruning.

2. The first FDA-approved 'LLM-guided' aphasia therapy will appear by Q2 2026. The therapy will use a fine-tuned LLaMA-3 model to generate personalized sentence completion exercises, with real-time fMRI feedback to strengthen the predictive coding network in damaged language areas.

3. A major AI company will release a 'brain-aligned' model that explicitly optimizes for neural correlation, not just perplexity. Meta is the most likely candidate, given its open-weight philosophy and existing investment in neuroscience (the 'Meta AI Brain' project). This model will be smaller (10-20B parameters) but achieve GPT-4-level performance on language tasks by mimicking the brain's sparse, predictive architecture.

The most profound implication, however, is philosophical. If the brain is an LLM, then the Turing Test is obsolete. The new test should be: can an AI model predict human neural activity better than the human brain itself? The answer, for now, is no—but the gap is closing fast.
