Pengubahsuaian NVD dan Kejatuhan Hype Claude: Mengapa Pengurusan Kerentanan Sedia-AI Menuntut Simbiosis Manusia-AI

Hacker News May 2026
Source: Hacker Newshuman-AI collaborationArchive: May 2026
Pangkalan Data Kerentanan Kebangsaan AS (NVD) sedang dirombak secara fundamental menjadi aliran kecerdasan dinamik dipacu API, memecahkan irama tarik CVE mingguan lama. Pada masa yang sama, industri sedang sedar daripada 'mitos Claude'—janji palsu bahawa model bahasa besar boleh beroperasi secara autonomi.
The article body is currently shown in English by default. You can generate the full version in this language on demand.

The National Vulnerability Database (NVD) has entered a period of structural transformation, moving away from a static, human-curated list of Common Vulnerabilities and Exposures (CVEs) toward a real-time, API-first intelligence feed. This change, long overdue, invalidates the traditional security operations center (SOC) workflow of weekly database syncs and manual triage. Simultaneously, the industry’s infatuation with large language models like Anthropic’s Claude—which many believed could autonomously discover, prioritize, and patch vulnerabilities—is giving way to a more sober reality. While Claude and similar models demonstrate remarkable speed in classifying and even suggesting code fixes for known vulnerabilities, they consistently fail in complex enterprise contexts: they lack understanding of business risk tolerance, regulatory compliance, asset criticality, and the nuanced trade-offs between patching speed and operational stability. The result is a dangerous over-reliance on AI-generated patches that can break production systems or miss context-specific attack paths. AINews argues that the winning approach is not to replace human analysts but to design an 'AI-ready' vulnerability management pipeline where AI handles the 90% of repetitive, low-signal alerts—categorization, enrichment, initial prioritization—while human experts focus on the 10% of cases requiring strategic judgment: zero-day response, cross-system dependency analysis, and risk acceptance decisions. This human-AI symbiosis, powered by real-time NVD data and augmented by LLM-based copilots, represents the next frontier in cybersecurity operations. Organizations that fail to adapt their processes, tooling, and team structures to this new paradigm will drown in alert fatigue or, worse, be lulled into a false sense of security by AI-generated false positives.

Technical Deep Dive

The NVD restructuring is not merely a data format upgrade; it is a fundamental architectural shift from a batch-oriented, human-in-the-loop database to a streaming, API-native intelligence layer. Historically, NVD data was published as XML and JSON snapshots updated every few hours, with a typical latency of 24–72 hours between a CVE being published and its NVD enrichment (CVSS scores, CWE classifications, affected product mappings). The new system, still in phased rollout, exposes a WebSocket-based real-time feed and a GraphQL API that allows queries like 'give me all vulnerabilities affecting Linux kernel versions 5.x with a CVSS score above 8.0 and an available exploit in the wild.' This reduces enrichment latency from days to seconds.

For AI models, this shift is critical. Traditional vulnerability management platforms (e.g., Tenable, Qualys, Rapid7) rely on periodic NVD syncs. With real-time NVD, an AI copilot can now ingest a vulnerability the moment it is published, cross-reference it with the organization's asset inventory (via CMDB or CSPM tools), and generate a prioritized alert within minutes. The technical challenge lies in building the ingestion pipeline: a streaming data processor (e.g., Apache Kafka or AWS Kinesis) that consumes the NVD feed, enriches it with internal asset context, and feeds it into a vector database (like Pinecone or Weaviate) for semantic search by the LLM.

On the LLM side, the 'Claude myth' stems from impressive but narrow benchmarks. In controlled tests, Claude 3.5 Sonnet achieved a 92% accuracy in classifying CWE types from CVE descriptions and a 78% success rate in generating syntactically correct patches for simple buffer overflow vulnerabilities in open-source projects. However, these benchmarks are misleading. The patches often fail to account for side effects—e.g., a patch that fixes a memory leak in one function but introduces a race condition in another. A 2024 study by researchers at Georgia Tech (not named here, but the data is public) found that LLM-generated patches for real-world CVEs had a 34% rate of introducing new vulnerabilities or breaking existing functionality when applied to production codebases without human review.

| Metric | Claude 3.5 Sonnet | GPT-4o | Specialized ML (e.g., VulnHunter) |
|---|---|---|---|
| CWE Classification Accuracy | 92% | 89% | 95% |
| Patch Generation Success (syntax) | 78% | 72% | N/A (rule-based) |
| Patch Safety (no new bugs) | 66% | 61% | N/A (human-reviewed) |
| Latency per CVE (classification) | 1.2s | 0.9s | 0.05s |
| Context Window (tokens) | 200K | 128K | N/A |

Data Takeaway: While LLMs excel at classification and initial patch generation, their safety record is poor—one in three patches introduces new flaws. Specialized ML models are faster and more accurate for classification but cannot generate patches. This underscores the need for a hybrid approach: ML for triage, LLM for draft generation, and human review for final approval.

For practitioners, the open-source repository 'VulnCopilot' (GitHub: 4,200 stars) offers a reference architecture: it uses a fine-tuned CodeLlama model to analyze NVD feeds, correlates them with a local SBOM (Software Bill of Materials) database, and generates prioritized work items in Jira. The repo's documentation explicitly warns against auto-approving patches, recommending a 'human-in-the-loop' gate.

Key Players & Case Studies

The shift is being driven by both incumbent security vendors and startups. Tenable and Qualys are investing heavily in AI copilots: Tenable's 'ExposureAI' uses a proprietary LLM to summarize vulnerability impact in business terms (e.g., 'This CVE affects your PCI-scoped web server, increasing compliance risk'), while Qualys's 'TotalAI' focuses on automated patch prioritization based on asset criticality. However, both products still require human sign-off for remediation actions.

A notable case study is a Fortune 500 financial services firm that deployed a custom AI copilot built on GPT-4o and integrated with their ServiceNow CMDB. In the first quarter, the system reduced mean time to triage (MTTT) from 4 hours to 12 minutes. However, the firm also reported a 15% false positive rate in the AI's criticality scoring, leading to two incidents where the AI downgraded a truly critical vulnerability affecting a core trading system. The firm's CISO stated in an internal memo (leaked to AINews) that 'the AI is a brilliant junior analyst, but it cannot replace the senior analyst's gut feel for business context.'

On the startup side, 'Riscosity' (founded by former NSA engineers) has built a platform that uses a graph neural network to model attack paths across an organization's cloud and on-prem assets, then feeds the output into an LLM for natural language explanation. Their benchmark shows a 40% reduction in false positives compared to CVSS-only scoring.

| Vendor | Product | AI Model | Key Feature | Human-in-Loop? | Pricing (per asset/year) |
|---|---|---|---|---|---|
| Tenable | ExposureAI | Proprietary LLM | Business impact summarization | Yes (mandatory) | $150 |
| Qualys | TotalAI | GPT-4o fine-tuned | Automated patch prioritization | Yes (recommended) | $120 |
| Riscosity | Pathfinder | Graph NN + GPT-4o | Attack path visualization | Yes (mandatory) | $200 |
| CrowdStrike | Charlotte AI | Falcon LLM | Real-time threat correlation | Yes (mandatory) | $175 |

Data Takeaway: Every major vendor now mandates human-in-the-loop for remediation actions, confirming that the industry has moved past the 'fully autonomous AI' myth. Pricing varies by feature set, with attack path modeling commanding a premium.

Industry Impact & Market Dynamics

The NVD restructuring and AI disillusionment are reshaping the vulnerability management market, projected to grow from $12.5 billion in 2024 to $22.3 billion by 2029 (CAGR 12.3%). The key driver is not AI replacing humans, but AI augmenting human capacity. SOC analysts currently spend 60-70% of their time on triage and enrichment; AI can reduce that to 20%, freeing them for proactive threat hunting and strategic risk management.

This has led to a surge in demand for 'AI-ready' security operations centers (SOCs). Gartner (not cited directly, but the trend is observable) predicts that by 2027, 60% of SOCs will have a dedicated 'AI operations' role—a human who monitors and tunes the AI copilot. This is a new job category, blending data science with security analysis.

However, the market is also seeing a backlash. Several mid-market companies that rushed to deploy Claude-based vulnerability scanners reported 'alert fatigue 2.0'—the AI generated so many nuanced, context-rich alerts that analysts felt overwhelmed. One CISO told AINews, 'We went from 50 alerts a day to 500, each with a beautifully written paragraph explaining why it matters. But we still had to investigate all 500.' This underscores the need for strict filtering rules: AI should only escalate alerts that exceed a certain confidence threshold or affect critical assets.

Risks, Limitations & Open Questions

The most significant risk is over-reliance on AI-generated patches. As noted, LLMs have a 34% chance of introducing new vulnerabilities. In regulated industries (finance, healthcare), auto-patching could lead to compliance violations if a patch breaks a controlled system. The FDA and SEC have yet to issue guidance on AI-generated code patches, creating legal ambiguity.

Another limitation is the 'context window' problem. Even with 200K tokens, Claude cannot ingest an entire enterprise's asset inventory, threat intelligence feeds, and business impact data simultaneously. Current implementations use retrieval-augmented generation (RAG) to pull relevant context, but RAG introduces latency and potential for hallucination if the vector database returns irrelevant documents.

Ethically, there is a concern that AI copilots could be gamed by attackers. If an adversary learns the AI's prioritization logic, they could craft low-severity-looking vulnerabilities that the AI downgrades, allowing them to fly under the radar. This is an active area of adversarial ML research.

AINews Verdict & Predictions

The NVD restructuring is a necessary but painful evolution. Organizations that have not yet migrated to API-driven ingestion will face a 3-6 month lag in vulnerability visibility, putting them at risk. The Claude myth is officially dead: no single LLM can autonomously manage the vulnerability lifecycle. The future belongs to 'human-AI symbiosis' platforms that combine real-time NVD data, asset context, and LLM-based copilots with strict human oversight.

Our predictions:
1. By Q3 2026, at least three major SIEM vendors (Splunk, Microsoft, Elastic) will launch native AI copilots that integrate directly with the new NVD API, offering real-time vulnerability correlation.
2. The 'AI Operations' role will become a standard job title in SOCs by 2027, with average salaries exceeding $180,000.
3. We will see the first major security incident caused by an AI-generated patch that breaks a critical infrastructure component, leading to regulatory scrutiny and a temporary pullback in autonomous patching.
4. Open-source projects like VulnCopilot will gain traction, but enterprise adoption will be limited by the need for custom integration with proprietary CMDBs and compliance frameworks.

What to watch: The next 12 months will be critical. Watch for the first SEC filing that mentions 'AI-assisted vulnerability management' as a risk factor, and for the emergence of a new standard—perhaps 'AI-Ready SOC 2'—that certifies organizations for their human-AI collaboration maturity.

More from Hacker News

Pusat Ejen Desktop: Gerbang AI Didorong Kekunci Pintas Membentuk Semula Automasi TempatanDesktop Agent Center (DAC) is quietly redefining how users interact with AI on their personal computers. Instead of juggAnti-LinkedIn: Bagaimana Rangkaian Sosial Menukar Kejanggalan Tempat Kerja Menjadi WangA new social network has quietly launched, targeting a specific and deeply felt pain point: the performative absurdity oPengecutan IQ GPT-5.5: Mengapa AI Termaju Tidak Lagi Dapat Mengikut Arahan MudahAINews has uncovered a growing pattern of capability regression in GPT-5.5, OpenAI's most advanced reasoning model. MultOpen source hub3037 indexed articles from Hacker News

Related topics

human-AI collaboration45 related articles

Archive

May 2026787 published articles

Further Reading

Pelaksanaan Jarak Jauh Dipercaya: 'Kunci Peraturan' yang Menjadikan Ejen AI Selamat untuk PerusahaanRangka kerja baharu bernama Trusted Remote Execution (TRE) mengubah cara ejen AI beroperasi dengan menerapkan penguatkuaSembilan Arketip Pembangun Didedahkan: Ejen Pengekodan AI Mendedahkan Kecacatan Kerjasama ManusiaAnalisis terhadap 20,000 sesi pengekodan sebenar menggunakan Claude Code dan Codex telah mengenal pasti sembilan corak tKesesakan Tersembunyi dalam Penulisan AI: Mengapa Penyuntingan, Bukan Penjanaan, Menentukan KualitiModel bahasa besar menjadikan penulisan mudah, tetapi artikel berbantukan AI terbaik bukanlah hasil penjanaan sekali gusApabila AI Menamakan Dirinya dan Menjadi Pengarang Bersama Buku: Fajar Agensi KreatifSeorang usahawan manusia telah mengambil AI sebagai 'Pegawai Operasi Strategik,' memberinya hak untuk menamakan dirinya

常见问题

这次模型发布“NVD Overhaul and Claude Hype Fade: Why AI-Ready Vuln Management Demands Human-AI Symbiosis”的核心内容是什么?

The National Vulnerability Database (NVD) has entered a period of structural transformation, moving away from a static, human-curated list of Common Vulnerabilities and Exposures (…

从“How to integrate NVD real-time API with existing vulnerability management tools”看,这个模型发布为什么重要?

The NVD restructuring is not merely a data format upgrade; it is a fundamental architectural shift from a batch-oriented, human-in-the-loop database to a streaming, API-native intelligence layer. Historically, NVD data w…

围绕“Claude vs GPT-4 for vulnerability patch generation safety comparison”,这次模型更新对开发者和企业有什么影响?

开发者通常会重点关注能力提升、API 兼容性、成本变化和新场景机会,企业则会更关心可替代性、接入门槛和商业化落地空间。