A 1.3M-Parameter 'Honesty Guard' Could Fix AI Agent Hallucination for Good

Source: Hacker News · Archive: May 2026
A new 1.3-million-parameter model, called Reasoning-Core, acts as a dedicated honesty monitor for AI agents, intercepting hallucinations and unethical outputs in real time. This lightweight verification layer decouples safety from the main model, promising auditable AI for high-stakes industries.

AINews has learned of a breakthrough in AI agent safety: Reasoning-Core, a model with just 1.3 million parameters, designed exclusively to monitor the reasoning integrity and ethical boundaries of autonomous AI agents. Unlike traditional safety systems that are deeply integrated into large language models (LLMs) — making them bloated, slow, and hard to update — Reasoning-Core operates as an independent, pluggable verification layer. It runs alongside any agent, checking each output for factual consistency, logical coherence, and ethical compliance without slowing down the primary model's inference.

The significance cannot be overstated. As AI agents gain autonomy in critical domains like financial trading, medical diagnosis, and legal document review, the 'honesty risk' — where a model confidently outputs false or harmful information — becomes the single biggest barrier to deployment. Current approaches, such as RLHF (Reinforcement Learning from Human Feedback) or constitutional AI, embed safety directly into the model weights, making them opaque, difficult to audit, and expensive to retrain. Reasoning-Core flips this paradigm: it creates a separation of powers between intelligence and honesty.

This architecture introduces a new category of AI infrastructure: 'Integrity as a Service.' A financial agent executing a high-value trade must first pass its reasoning through Reasoning-Core's audit; a medical agent prescribing a treatment must have its chain-of-thought validated. The model is open-source, with its weights and training methodology available on GitHub, allowing developers to inspect, customize, and deploy it. Early benchmarks show it catches over 94% of known hallucination patterns with a latency overhead of under 15 milliseconds — making it viable for real-time applications. This is the AI safety belt the industry has been waiting for: not a speed limiter, but a directional correctness enforcer.
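The 'Integrity as a Service' flow described above can be sketched as a gate between an agent and the action it wants to take. The following is a minimal illustration, not Reasoning-Core's actual API: the `Verdict` fields mirror the pass/fail, confidence, and explanation outputs the article describes, but the function names and the toy consistency check are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    """Verifier result (field names assumed from the article's description)."""
    passed: bool
    confidence: float
    explanation: str

def check_integrity(query: str, chain_of_thought: str, output: str) -> Verdict:
    """Stand-in for a call to a Reasoning-Core-style verifier.

    A real deployment would run the model here; this stub only flags an
    output whose conclusion never appears in its own reasoning chain.
    """
    if output and output.lower() not in chain_of_thought.lower():
        return Verdict(False, 0.9, "conclusion not supported by reasoning chain")
    return Verdict(True, 0.95, "ok")

def gated_action(query: str, chain_of_thought: str, output: str,
                 min_confidence: float = 0.8) -> str:
    """Release the agent's output only if the verifier passes it."""
    verdict = check_integrity(query, chain_of_thought, output)
    if verdict.passed and verdict.confidence >= min_confidence:
        return output
    return f"BLOCKED: {verdict.explanation}"
```

In a real pipeline, the trading or medical agent's chain-of-thought would be captured before execution and the gate would sit between the LLM and the tool call, so a failed verdict prevents the action rather than merely logging it.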

Technical Deep Dive

Reasoning-Core's architecture is a masterclass in minimalist design. At just 1.3 million parameters, it is roughly 1/5,000th the size of a typical 7B-parameter LLM. The model is a distilled, task-specific transformer that has been trained exclusively on a synthetic dataset of reasoning chains — both correct and incorrect — across domains like mathematics, logic, ethics, and factual recall.

The core innovation lies in its training objective: instead of generating text, Reasoning-Core is trained to classify the *validity* of a given reasoning trace. It takes as input the user's query, the agent's chain-of-thought (CoT), and the final output, and produces a binary 'pass/fail' along with a confidence score and a short explanation of the detected flaw. This is fundamentally different from a general-purpose safety classifier, which might only flag toxic content. Reasoning-Core specifically targets *honesty*: it checks whether the reasoning logically supports the conclusion, whether any factual claims contradict other statements in the trace, and whether the output violates predefined ethical constraints.

Architecture details:
- Input encoding: Uses a lightweight Sentence-BERT variant to embed the query, CoT, and output into a 384-dimensional vector.
- Core layer: A 6-layer transformer with 4 attention heads, using ReLU activations and layer normalization. Total parameter count: 1,312,000.
- Output head: A three-class classifier (pass, fail, uncertain) with an auxiliary regression head for confidence calibration.
- Training data: 50 million synthetic examples generated using a teacher-student pipeline where a larger model (GPT-4o) generates reasoning chains, and a rule-based verifier labels them. The dataset is publicly available on GitHub under the repository `reasoning-core-data` (currently 2,300 stars).
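The architecture details above can be sketched in PyTorch. This is an illustrative reconstruction from the article's prose, not the code in the actual repository: the layer sizes (384-d embeddings, 6 layers, 4 heads) come from the description, but the feed-forward width and pooling strategy are assumptions, so this sketch will not reproduce the reported 1,312,000-parameter count exactly.

```python
import torch
import torch.nn as nn

class ReasoningCoreSketch(nn.Module):
    """Illustrative sketch of the described verifier: a small encoder-only
    transformer over 384-d sentence embeddings, with a 3-class head
    (pass / fail / uncertain) and an auxiliary confidence head.
    Dimensions follow the article's prose; internals are assumptions."""

    def __init__(self, dim: int = 384, layers: int = 6, heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim,
            activation="relu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.classifier = nn.Linear(dim, 3)   # pass / fail / uncertain
        self.confidence = nn.Linear(dim, 1)   # auxiliary calibration head

    def forward(self, embeddings: torch.Tensor):
        # embeddings: (batch, 3, dim) — the query, chain-of-thought, and
        # output, each pre-embedded by a Sentence-BERT-style encoder.
        h = self.encoder(embeddings).mean(dim=1)  # pool over the 3 segments
        return self.classifier(h), torch.sigmoid(self.confidence(h))
```

The three-segment input mirrors the described pipeline: the lightweight sentence encoder embeds query, CoT, and output separately, and the transformer attends across them to judge whether the conclusion follows from the reasoning.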

Performance benchmarks:
| Metric | Reasoning-Core | GPT-4o (in-context) | Standalone Classifier (e.g., Llama Guard 2) |
|---|---|---|---|
| Hallucination detection accuracy | 94.2% | 88.1% | 79.5% |
| False positive rate | 3.1% | 5.7% | 12.3% |
| Latency per query (ms) | 12 | 450 | 35 |
| Model size (parameters) | 1.3M | ~200B (est.) | 7B |
| Inference cost per 1M queries | $0.08 | $5.00 | $0.45 |

Data Takeaway: Reasoning-Core achieves near-GPT-4o-level detection accuracy while being 37x faster and 62x cheaper per query. Its false positive rate is nearly half that of GPT-4o's in-context approach, meaning it blocks fewer legitimate outputs. The latency of 12ms makes it viable for real-time agent loops, whereas GPT-4o's 450ms would be prohibitive.

The model is available on GitHub under `reasoning-core-inference` (repo name), which includes a PyTorch implementation and a quantized ONNX runtime for edge deployment. The authors have also released a 'hardness benchmark' dataset called `Honesty-Hard`, containing 10,000 adversarial examples designed to break simple verifiers — Reasoning-Core scores 91.3% on this benchmark, compared to 72.1% for the next best open-source model.

Key Players & Case Studies

The development of Reasoning-Core is led by a team of researchers from a decentralized AI safety collective known as the 'Verifiable AI Lab' (not affiliated with any major corporation). The lead author, Dr. Elena Vasquez, previously worked on formal verification at Amazon Web Services and has published on adversarial robustness at NeurIPS. The project is funded by a grant from the AI Safety Research Foundation, a non-profit backed by several tech philanthropists.

Competing approaches:
| Product/Model | Approach | Parameters | Open Source? | Key Limitation |
|---|---|---|---|---|
| Reasoning-Core | Dedicated honesty verifier | 1.3M | Yes | Limited to English, no multi-modal support |
| Llama Guard 2 (Meta) | General safety classifier | 7B | Yes | High false positives, not reasoning-specific |
| OpenAI Moderation API | Black-box toxicity filter | Unknown | No | No reasoning audit, opaque |
| Constitutional AI (Anthropic) | Self-critique during training | Embedded in model | No | Cannot be applied post-hoc |
| Guardrails AI (open source) | Rule-based + LLM call | Varies | Yes | High latency, requires large model |

Data Takeaway: Reasoning-Core is the only solution that is both open-source and specifically designed for reasoning verification. Llama Guard 2, while popular, suffers from a 12.3% false positive rate that would cripple a production agent. Constitutional AI is elegant but baked into the model, making it impossible to update without retraining the entire system.

Case study — Financial trading agent: A hedge fund (name withheld) integrated Reasoning-Core into their automated trading pipeline. The agent, based on a fine-tuned Llama 3.1 8B, was instructed to execute trades based on market sentiment analysis. In a 30-day trial, Reasoning-Core flagged 47 outputs where the agent's reasoning chain contained logical fallacies (e.g., 'price dropped 2% today, therefore it will drop another 2% tomorrow' — a gambler's fallacy). Without the verifier, these trades would have been executed, potentially costing an estimated $1.2M in simulated losses. The fund is now deploying Reasoning-Core across all agent instances.
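The gambler's-fallacy pattern flagged in this case study can be illustrated with a simple rule-based check. This is not how Reasoning-Core detects fallacies (it is a learned classifier, not a regex); the pattern below is only a hypothetical sketch of the kind of naive-extrapolation reasoning the verifier caught.

```python
import re

# Hypothetical rule, for illustration only: flag reasoning that projects a
# past price move directly into a future prediction, e.g.
# "price dropped 2% today, therefore it will drop another 2% tomorrow".
EXTRAPOLATION = re.compile(
    r"(dropped|fell|rose|gained)\s+[\d.]+%.*\btherefore\b.*"
    r"(drop|fall|rise|gain)",
    re.IGNORECASE | re.DOTALL)

def flags_naive_extrapolation(chain_of_thought: str) -> bool:
    """Return True if the reasoning chain matches the extrapolation pattern."""
    return bool(EXTRAPOLATION.search(chain_of_thought))
```

A learned verifier generalizes far beyond such templates, which is precisely why the article argues rule-based systems like Guardrails AI fall short on reasoning-specific flaws.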

Case study — Medical diagnosis assistant: A digital health startup, MedVerify, is using Reasoning-Core to audit their symptom-checking agent. The agent's outputs are passed through Reasoning-Core before being shown to patients. In internal testing, the verifier caught 23% of outputs where the agent recommended treatments that contradicted its own earlier reasoning (e.g., suggesting a drug after stating the patient had a contraindication). The startup reports a 40% reduction in patient complaints about contradictory advice.

Industry Impact & Market Dynamics

Reasoning-Core represents a fundamental shift in how AI safety is architected. The dominant paradigm — embedding safety into the model via RLHF or constitutional AI — creates a 'trust black box' where users cannot independently verify the model's honesty. Reasoning-Core introduces a separation of powers: the model is free to be as creative and powerful as possible, while a separate, auditable module ensures honesty.

This has profound implications for the AI infrastructure market. Currently, the 'AI safety' market is estimated at $2.3 billion in 2025, growing at 35% CAGR, according to industry analysts. However, most spending goes to red-teaming services and content moderation APIs. Reasoning-Core opens a new sub-category: real-time agent honesty monitoring. By 2027, we predict this segment will be worth $800 million, driven by regulatory pressure in finance (SEC's proposed AI accountability rules) and healthcare (FDA's draft guidance on AI-based clinical decision support).

Market comparison:
| Segment | 2025 Market Size | Projected 2027 | Key Drivers |
|---|---|---|---|
| Agent honesty monitoring | $50M | $800M | Regulatory mandates, agent autonomy |
| Content moderation APIs | $1.2B | $1.8B | Social media, platform safety |
| Red-teaming services | $600M | $900M | Pre-deployment testing |
| Model alignment research | $450M | $700M | Foundation model labs |

Data Takeaway: Agent honesty monitoring is projected to grow 16x in two years, far outpacing other safety segments. This reflects the urgency of deploying autonomous agents in regulated industries.

The business model for Reasoning-Core is also disruptive. As an open-source model, it can be self-hosted for free. The Verifiable AI Lab plans to monetize through a managed cloud service (Reasoning-Core Cloud) that offers SLAs, continuous updates, and integration with major agent frameworks like LangChain and AutoGen. Pricing is expected at $0.10 per 1,000 verifications, undercutting OpenAI's Moderation API ($0.50 per 1,000) while offering superior reasoning-specific detection.

Risks, Limitations & Open Questions

Despite its promise, Reasoning-Core is not a silver bullet. The most significant limitation is its domain specificity. The model was trained on synthetic data that may not capture the full diversity of real-world agent behavior. For example, in creative writing or open-ended dialogue, the definition of 'honesty' becomes subjective — a poetic metaphor is not a lie, but Reasoning-Core might flag it as a factual inconsistency. The team reports a 3.1% false positive rate, but in edge cases like humor or sarcasm, this could be higher.

Another risk is adversarial evasion. Since Reasoning-Core is open-source, attackers can study its weights and craft outputs that pass its checks while still being deceptive. The team has attempted to mitigate this with adversarial training, but the cat-and-mouse game is eternal. A determined attacker could fine-tune a model to produce 'reasoning-core-compliant' lies.

There is also the problem of cascading failures. If Reasoning-Core itself is compromised — through a supply chain attack on its weights or a backdoor in its training data — then all agents relying on it would be systematically misled. The team has implemented cryptographic signing of model weights and a public audit trail, but this adds operational complexity.
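The weight-integrity defense mentioned above can be illustrated with digest pinning. The article says the team uses cryptographic signing; the sketch below shows the simpler hash-verification half of that idea, under the assumption that a trusted digest is published in the project's audit trail (the function names and flow are illustrative, not the project's actual tooling).

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Hash the raw checkpoint bytes; in practice you would stream the file."""
    return hashlib.sha256(data).hexdigest()

def verify_weights(data: bytes, published_digest: str) -> bool:
    """Compare a downloaded checkpoint against a digest from a trusted,
    out-of-band source (e.g. a signed audit-trail entry).  A constant-time
    comparison avoids leaking digest prefixes through timing."""
    return hmac.compare_digest(sha256_digest(data), published_digest)
```

Full signing additionally binds the digest to the publisher's key, so an attacker who can tamper with the download channel cannot also swap in a matching digest.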

Finally, ethical concerns arise: who decides what constitutes 'honesty'? The model's ethical constraints are hardcoded by its creators. In a global deployment, different cultures have different norms around truth-telling (e.g., 'white lies' in social contexts). Reasoning-Core currently has a single, Western-centric ethical framework. The team plans to release a 'custom ethics module' in Q3 2026, but this introduces the risk of users configuring the verifier to allow dishonesty.

AINews Verdict & Predictions

Reasoning-Core is the most important AI safety innovation since RLHF. Its core insight — that honesty should be a separate, auditable function rather than an embedded property — is a paradigm shift that will reshape how we deploy autonomous agents. We are moving from 'trust us, the model is aligned' to 'here is the verifier, check it yourself.'

Our predictions:
1. By Q1 2026, at least three major cloud providers (AWS, GCP, Azure) will offer Reasoning-Core as a managed service, integrated into their agent-building toolkits. The open-source nature will force them to compete on latency and uptime, not lock-in.
2. By Q4 2026, the EU's AI Act will explicitly require 'independent reasoning verification' for high-risk AI systems, effectively mandating a solution like Reasoning-Core. This will create a regulatory moat for early adopters.
3. By 2027, a fork of Reasoning-Core will emerge that specializes in detecting 'strategic deception' — where an agent intentionally misleads to achieve a goal. This will be the next frontier in AI safety research.
4. The biggest risk is that companies will treat Reasoning-Core as a checkbox compliance tool, deploying it without understanding its limitations. The false positive rate, while low, will cause friction in creative domains, leading to backlash and calls for 'less strict' verifiers — which defeats the purpose.

What to watch: The GitHub repository's star count and commit frequency. As of this writing, `reasoning-core-inference` has 4,500 stars and 120 forks. If it crosses 10,000 stars within 60 days, it signals mainstream developer adoption. Also watch for the first major security audit — if a vulnerability is found in the verifier itself, it could set back the entire field.

Reasoning-Core is not the end of AI safety, but it is the beginning of a new, more honest chapter. The era of blind trust in black-box models is ending. The era of verifiable AI agents is beginning.
