Authored by: PlexTrac Team
Posted on: April 23, 2026

NVD's New Prioritization Model Means Security Teams Need a Better Way to Prioritize Risk

For years, many vulnerability management programs have treated NVD enrichment as a foundational layer of triage. CVSS scores. Product mappings. Weakness classifications. Reference links. Standardized context. That enrichment has helped security teams take a raw CVE and turn it into something they can route, prioritize, and explain.

But that model just changed in a meaningful way.

On April 15, 2026, NIST announced a new risk-based operating model for the National Vulnerability Database. Going forward, NVD will prioritize enrichment for a narrower set of CVEs, including those in CISA's Known Exploited Vulnerabilities catalog, software used within the federal government, and critical software as defined by Executive Order 14028. CVEs outside those categories will still be listed, but may be categorized as lower priority and not scheduled for immediate enrichment. NIST also said it will no longer routinely provide a separate NVD severity score for every submitted CVE.

This is a practical response to a scale problem. NIST said CVE submissions increased 263% between 2020 and 2025, and that submissions in the first three months of 2026 were already nearly one-third higher than the same period last year. NIST also said it would move backlogged CVEs published before March 1, 2026 into a "Not Scheduled" category under the new workflow.

The bigger implication for security teams is straightforward: if your triage model depends on NVD enrichment arriving quickly and consistently for the long tail of CVEs, that dependency just became much riskier.

The problem is bigger than NVD capacity

It would be easy to frame this as a staffing or process issue. It is more than that. The vulnerability ecosystem has outgrown the assumptions many programs still operate on. There are simply too many disclosures, too much software churn, and too much attacker innovation for a centralized enrichment model to remain the default answer for every record. NIST's own explanation makes that clear: the agency is changing the operating model because submission volume has surged past what the old model could sustainably support.

That means the issue is not just that some CVEs will be slower to enrich. It is that many organizations have quietly built triage logic, workflows, and reporting assumptions on top of enrichment data they no longer control. When severity bands, lifecycle states, SLA rules, or routing logic depend too heavily on NVD-generated context, the program becomes brittle. Missing scores create blanks. Delayed enrichment creates lag. And severity-first prioritization becomes even less reliable when the source data is partial or absent.

Severity-first triage was already showing its limits

Even before this change, mature programs were moving away from severity-only decision-making. That is because severity alone has never captured the full operational picture. Two vulnerabilities with the same CVSS score can represent very different business risk depending on exploitability, exposure, reachability, compensating controls, asset criticality, identity context, internet accessibility, and evidence of active use by attackers.

NIST's shift just makes that limitation harder to ignore. If the market gets less standardized enrichment for a larger share of CVEs, then teams that continue to rely on static, score-driven queues will lose even more precision.
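To make that brittleness concrete, here is a minimal Python sketch of the kind of severity-first SLA routing many programs run today. The finding records and SLA bands are entirely hypothetical, but the failure mode is the one described above: when an NVD score simply has not arrived, the record falls out of the workflow instead of getting a decision.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical finding records. "cvss" is None when NVD enrichment for the
# CVE has not been published (or has been deprioritized under the new model).
findings = [
    {"cve": "CVE-2026-0001", "cvss": 9.8,  "published": date(2026, 4, 1)},
    {"cve": "CVE-2026-0002", "cvss": None, "published": date(2026, 3, 15)},
    {"cve": "CVE-2026-0003", "cvss": None, "published": date(2026, 4, 20)},
]

def sla_days(cvss: Optional[float]) -> Optional[int]:
    """Severity-first SLA bands; there is nothing to band on without a score."""
    if cvss is None:
        return None            # the "blank" described above
    if cvss >= 9.0:
        return 7
    if cvss >= 7.0:
        return 30
    return 90

for f in findings:
    days = sla_days(f["cvss"])
    if days is None:
        # Without enrichment, the finding silently drops out of the SLA workflow.
        print(f"{f['cve']}: no CVSS yet -> no severity band, no due date")
    else:
        print(f"{f['cve']}: due {f['published'] + timedelta(days=days)}")
```

In this toy example, two of the three findings never get a due date, not because they are low risk, but because the score they were waiting on may now never come.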
The answer is not to abandon external vulnerability intelligence. It is to stop treating it as the sole or primary operating layer for prioritization.

What this means for security leaders

This is where the conversation moves from data quality to operating model. Security teams do not need another reminder that vulnerability volume is rising. They need a way to decide what matters when standardized enrichment is incomplete, delayed, or missing. That requires a model built around context, correlation, and action.

Instead of asking, "What is the CVSS score?" the better questions become:

- Is this asset internet-facing?
- Does it support a critical business process?
- Is it tied to an attack path?
- Is there evidence of exploitation in the wild?
- Does it recur across the environment?
- Is it reachable in this environment?
- Has remediation stalled?
- Does this issue connect to broader exposure themes across teams?

Those are not just vulnerability management questions anymore. They are exposure management questions. (A short sketch below shows one way signals like these can drive ordering even when a CVSS score never arrives.)

PlexTrac Helps Teams Prioritize Beyond Enrichment

When teams can no longer rely on one upstream source to provide consistent enrichment for every CVE, prioritization has to become more contextual and more operational. That means security teams need a way to bring together vulnerability findings, pentest results, validation data, asset context, business criticality, remediation ownership, and workflow status so they can make decisions based on what is actually true in their environment, not just what is available in a database.

PlexTrac helps organizations move beyond static, severity-first triage by connecting findings to the broader context around them. Instead of treating prioritization as a one-time intake exercise, teams can continuously evaluate risk based on what has changed, what is reachable, what has been retested, what remains exposed, and which issues matter most right now.

In practice, that means teams can use PlexTrac to unify findings across scanners, assessments, and offensive security workflows; add the operational and business context needed to prioritize more effectively; and track the path from discovery to remediation and validation in one place.

PlexTrac is not a replacement for the CVE ecosystem. It is the operational layer that helps teams act when external enrichment is incomplete, delayed, or inconsistent.

The next step is not more enrichment. It is better context.

NIST has made clear that the NVD is not going away. The new model is intended to stabilize operations while NIST builds more scalable automation and workflow improvements, and users can still request enrichment for lower-priority CVEs as resources allow.

But security teams can no longer assume that every CVE will arrive with fast, comprehensive, centrally managed enrichment. That means organizations need to rethink how prioritization works from the logic layer up: not just where severity comes from, but how vulnerability intelligence is combined with asset context, exposure evidence, business impact, and remediation reality.

In other words, this is a move away from vulnerability list management and toward operational exposure management. That shift matters because the problem is no longer just vulnerability volume. It is whether teams have the context and workflow needed to make good decisions when the old enrichment assumptions no longer hold.
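As one illustration of the contextual questions raised earlier, here is a minimal sketch with assumed field names (it is not a description of how PlexTrac works internally). It answers the environment questions first (known exploitation, exposure, reachability, asset criticality) and treats CVSS as a tiebreaker that may simply be absent.

```python
# Hypothetical, simplified finding records with environment context attached.
findings = [
    {"cve": "CVE-2026-0101", "cvss": 6.5,  "in_kev": True,  "internet_facing": True,
     "asset_criticality": "high", "reachable": True},
    {"cve": "CVE-2026-0102", "cvss": 9.8,  "in_kev": False, "internet_facing": False,
     "asset_criticality": "low",  "reachable": False},
    {"cve": "CVE-2026-0103", "cvss": None, "in_kev": False, "internet_facing": True,
     "asset_criticality": "high", "reachable": True},
]

CRITICALITY_WEIGHT = {"high": 2, "medium": 1, "low": 0}

def context_key(f: dict) -> tuple:
    """Order by environment context first; CVSS (if present) only breaks ties."""
    return (
        f["in_kev"],                              # active exploitation outranks everything
        f["internet_facing"] and f["reachable"],  # exposed AND reachable in this environment
        CRITICALITY_WEIGHT[f["asset_criticality"]],
        f["cvss"] or 0.0,                         # a missing score does not block triage
    )

for f in sorted(findings, key=context_key, reverse=True):
    print(f"{f['cve']}: cvss={f['cvss']}, kev={f['in_kev']}, "
          f"exposed={f['internet_facing']}, criticality={f['asset_criticality']}")
```

In this toy example, the unscored but internet-facing finding on a critical asset outranks the CVSS 9.8 issue sitting on a low-criticality, unreachable host, which is exactly the kind of decision a severity-only queue cannot make.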
Questions security teams should be asking now

If you are evaluating the impact of this change, start with a few practical questions:

- Which parts of our triage logic depend directly on NVD enrichment?
- What happens when CVSS, CPE, or CWE fields are missing or delayed?
- How are we prioritizing CVEs that may never receive immediate enrichment?
- Are we correlating vulnerability data with asset criticality and real environment context?
- Can we track remediation and validation without relying on static severity alone?
- Do we have a unified way to connect findings across scanners, pentests, and remediation workflows?

As vulnerability volume keeps rising, the real question is whether your program can still prioritize effectively without the enrichment model it was built around.

PlexTrac Team
Editorial Group

At PlexTrac, we bring together insights from a diverse range of voices. Our blog features contributions from industry experts, ethical hackers, CTOs, influencers, and PlexTrac team members, all sharing valuable perspectives on cybersecurity, pentesting, and risk management.