Negative Seven Days: When CVEs Become a Trailing Indicator

David Westcott
May 11, 2026

The case for treating CVEs as a signal, not THE signal.

In Mandiant's 2026 M-Trends report, the average time between a vulnerability being publicly disclosed and being exploited in the wild is now an estimated negative seven days. Negative. Exploits are landing in production environments before the CVE is published.

That number was 63 days in 2018. It crossed zero last year. The CVE pipeline, which most security tools still treat as ground truth, is now a trailing indicator of risk.

This post is about what changes when you stop assuming a CVE has to exist before you act on a vulnerable piece of software.

How we got here

The CVE program was introduced by MITRE in 1999 with 321 records. For its first 16 years it produced under 10,000 entries annually, all routed through MITRE and a small set of vendor authorities. Two changes broke that cadence.

The first was structural. In 2016, MITRE expanded the CNA (CVE Numbering Authority) program beyond vendors to include CERTs, projects, and bug bounty platforms. CVE volume jumped 109% the following year, from 8,673 to 18,141.

The second is happening now. Frontier and security-specific AI systems including Glasswing's Mythos, Google DeepMind's Big Sleep, OSS-Fuzz, OpenAI's Codex Security, and Claude Code Security are discovering vulnerabilities at a rate the CNA pipeline was never sized for. 2025 closed at 45,959 published CVEs, up 61% year over year. 2026 is on pace for roughly 70,000.

Figure 1: Published CVEs per year, 2016–2026. Source: VulnCheck 2026 Exploit Intelligence Report

The community has not been idle. Over the last 20 years a steady cadence of enrichment programs has tried to make this volume actionable: CWE, CVSS, NVD, CPE, EPSS, CISA KEV, NIST LEV. Each one was a real improvement at the time. None of them is now keeping pace with the supply.

Figure 2: Twenty years of CVE enrichments, from CVSS v1 (2005) to NIST LEV (2025).

NIST has already conceded the math. The NVD now enriches only CVEs that meet a narrow set of criteria, and has stopped working its existing backlog. Anything with an NVD publish date earlier than March 1, 2026 moves to a "Not Scheduled" category that will not be revisited. The richest dataset most vulnerability tools depend on is no longer comprehensive.

Add Mandiant's negative-seven-days finding on top of that, and the pattern is hard to miss: the supply of CVEs is exploding, the depth of metadata behind each one is shrinking, and the window between a vulnerability being disclosed and being exploited has closed.

Why most tools can't compensate

Almost every endpoint scanner, RBVM, EDR, and ASM platform follows the same sequence:

  1. Deploy a sensor.
  2. Inventory installed applications and versions on the endpoint.
  3. Build CPE strings from that inventory.
  4. Query CVE, KEV, and threat intel sources for matches.
  5. Alert on hits.

Every step in that chain assumes a CVE exists. When a CVE doesn't exist yet, when it exists but hasn't been enriched, or when the vulnerability is in a misconfiguration, an exposed secret, or a behavior pattern no CNA has filed, the chain produces nothing. Not because the tools are bad. Because the architecture starts and ends with the CVE.
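The five-step chain above is simple enough to sketch in a few lines. This is a minimal illustration, not any vendor's actual implementation: the inventory tuples and the `cve_feed` mapping are hypothetical stand-ins for a real CPE dictionary and vulnerability feed. The structural point is that if the feed has no entry, the loop emits nothing.

```python
def build_cpe(vendor: str, product: str, version: str) -> str:
    """Step 3: derive a CPE 2.3 string from the endpoint inventory."""
    return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

def scan(inventory, cve_feed):
    """Steps 4-5: alert only if the feed already holds a matching CVE.

    inventory: list of (vendor, product, version) tuples from the sensor.
    cve_feed:  dict mapping CPE strings to lists of CVE IDs (illustrative).
    """
    alerts = []
    for vendor, product, version in inventory:
        cpe = build_cpe(vendor, product, version)
        for cve_id in cve_feed.get(cpe, []):
            alerts.append((cpe, cve_id))
    return alerts

# A vulnerable app with no published CVE yet: the whole chain is silent.
inventory = [("mobatek", "mobaxterm", "26.0")]
assert scan(inventory, cve_feed={}) == []
```

The `scan` loop only ever says "yes" after the catalog does; everything upstream of the CVE record is invisible to it by construction.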

What changes when behavior is the primary signal

Spektion uses CPE-to-CVE matching too. We treat it as one input, not the foundation. The foundation is runtime behavior: what the software actually does on the endpoint once it starts executing.

That shift does three things a CVE-first architecture can't.

It catches risky software that hasn't been disclosed yet. If a piece of software exhibits the runtime characteristics that correlate with exploitable conditions (excessive privileges, network listeners on unexpected ports, suspicious file system or process behavior, embedded components with known weaknesses), we flag it. No CVE required.

It catches risky software that won't ever have a CVE. Misconfigurations, exposed credentials, unmanaged browser extensions, AI agents and MCP servers running on user endpoints, unsanctioned coding assistants. Real exposure surface, no CNA on the case.

Figure 3: Eight runtime weaknesses observed for DAEMON Tools Lite, none with a corresponding CVE. The kind of exposure CPE-to-CVE matching cannot see by design.

It removes the dependency on enrichment latency. When a CVE is eventually published for software we already classified as high-likelihood, our customers don't need to wait for NVD enrichment, EPSS scoring, or threat intel correlation. The risk verdict already exists.

A recent example

On February 6, 2026, Spektion's runtime engine flagged an unquoted-path execution pattern in MobaXterm (Professional and Home Editions, versions prior to 26.1) as high-likelihood, high-impact, and customers running it were notified that day. VulnCheck reserved CVE-2026-25866 the same day and confirmed on February 9 that Mobatek had been notified; the patched version 26.1 shipped on March 6. The CVE was published on March 9, and Spektion Research's public write-up followed on March 11, 33 days after the runtime alert.

As the disclosure put it, "an attacker with local filesystem write access can drop a malicious executable at the appropriate path location." The risk verdict our customers were operating against was already in place more than a month before that sentence appeared in print.
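The unquoted-path pattern itself is mechanical enough to sketch. On Windows, CreateProcess given an unquoted command line containing spaces tries each space-truncated prefix (with .exe appended) before resolving the full path, so an attacker who can write to one of those prefix locations wins. A minimal, illustrative detector (the function name and heuristics are ours, not the engine's):

```python
def hijack_candidates(command_line: str) -> list[str]:
    """Return the hijack paths Windows would try before the real binary.

    Mirrors the documented CreateProcess parsing of an unquoted command
    line: each prefix ending at a space is tried as "<prefix>.exe" first.
    """
    if command_line.startswith('"'):
        return []  # properly quoted -> not vulnerable to this pattern
    parts = command_line.split(" ")
    return [" ".join(parts[:i]) + ".exe" for i in range(1, len(parts))]

cmd = "C:\\Program Files (x86)\\Mobatek\\MobaXterm\\MobaXterm.exe"
for candidate in hijack_candidates(cmd):
    # An attacker with write access to either location hijacks execution:
    # C:\Program.exe, then C:\Program Files.exe
    print(candidate)
```

This is exactly the "drop a malicious executable at the appropriate path location" condition from the disclosure, and it is observable from the process launch itself, with no CVE in the loop.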

Figure 4: Spektion console view of the CVE-2026-25866 finding in MobaXterm, surfaced via runtime weakness detection ahead of public disclosure.

The retrospective

Looking back at the most-referenced CVE disclosures of the last several years, a consistent pattern shows up: behavioral signals were detectable on customer endpoints before the CPE-to-CVE pipeline produced a match. The gap was sometimes hours and sometimes weeks. Some of the highest-profile examples of the last decade (EternalBlue / WannaCry / NotPetya, ProxyLogon, PrintNightmare, Log4Shell, Follina, the Outlook NTLM relay, the PAN-OS GlobalProtect command injection, the SharePoint ToolShell deserialization, the recent Windows kernel race-condition LPE, and the Fortinet FortiVoice stack buffer overflow) all exhibited behavior that runtime telemetry can detect, even where standard EDR coverage misses it or arrives late.

Figure 5: Retrospective comparison across notable CVEs from the last decade. Runtime telemetry catches most cases where standard EDR coverage misses or detects after the fact.

CVEs are still useful. They're just not enough.

The CVE program isn't broken. It's the same thing it always was: a public catalog of disclosed vulnerabilities, maintained by humans, now scaled past the level the original architecture was sized for. As an input, it remains valuable.

As the only input, it commits security teams to a model where they're always reading yesterday's news. The time-to-exploit numbers say that model has already failed.

Runtime exposure data answers the question CVEs can't: of the software actually executing in this environment, which pieces are exposing this organization to risk right now, regardless of whether anyone has filed paperwork on it yet.

If the negative-seven-days number keeps getting more negative, and there's no reason to expect otherwise, the security teams that come out of this transition cleanly are the ones that stop waiting on the pipeline and start watching the software.

See what's exploitable in your environment, whether there's a CVE for it or not.

Request a demo.
