
Dr. Philip Empl

How Manufacturers Prove Their Products Have No Relevant Vulnerabilities

Product Security · Vulnerability Management · Cyber Resilience Act · Industrial Security · Observer

1. How to Prove That Industrial Products "Have No Vulnerabilities"

Let's start with the uncomfortable truth: you can practically never prove that a product has zero vulnerabilities. What you can demonstrate (and what auditors, customers, and the CRA de facto expect) is something much more concrete:

At time X, there are no known and relevant vulnerabilities in the components used, or they have been assessed, mitigated, and documented.

That is a robust, verifiable proof. And that is exactly what this is about.


2. Starting Point: CVEs as the Common "Language" for Vulnerabilities

2.1 What Is a CVE?

A CVE (Common Vulnerabilities and Exposures) is a standardized identifier for a known vulnerability, e.g., CVE-2024-12345. This allows everyone (manufacturers, customers, CERTs, authorities) to unambiguously refer to the same vulnerability.

Important: A CVE does not automatically tell you:

  • whether your product is affected,
  • whether the vulnerability is exploitable,
  • how likely an attack is.

A CVE is initially just "an entry." The assessment comes afterward.

2.2 What You Need Before You Can Even Talk About CVEs

To meaningfully check CVEs, you need a clean inventory of your product components. In practice, these are:

  • SBOM (Software Bill of Materials): Libraries, packages, versions, dependencies
  • HBOM (Hardware Bill of Materials): Hardware components, modules, firmware versions

Without this list, you cannot properly demonstrate that you have "checked." At best, you have hope.
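A minimal sketch of what such an inventory can look like in practice. The field names loosely follow CycloneDX conventions, but the product name, versions, and structure here are illustrative, not a full SBOM implementation:

```python
# Illustrative product inventory as a flat component list.
# Field names loosely follow CycloneDX; all concrete values are made up.
from datetime import date

sbom = {
    "product": "edge-gateway",
    "version": "2.4.1",
    "generated": date(2025, 3, 1).isoformat(),  # snapshot timestamp matters for the proof
    "components": [
        {"type": "library",  "name": "openssl", "version": "3.0.13"},
        {"type": "library",  "name": "zlib",    "version": "1.3.1"},
        {"type": "firmware", "name": "bmc-fw",  "version": "1.8.0"},  # HBOM-style entry
    ],
}

def component_keys(bom):
    """Return the (name, version) pairs later used for CVE matching."""
    return [(c["name"], c["version"]) for c in bom["components"]]

print(component_keys(sbom))
```

The `generated` timestamp is part of the evidence: the proof is always "at time X", never "forever."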


3. Step-by-Step Approach for a Traceable Proof

The proof becomes robust when you build it like a pipeline: identify -> verify -> prioritize -> document.

3.1 Step 1: CVE Matching (Am I Affected?)

Goal: For each component from the SBOM/HBOM, check whether there are known CVEs.

The result of this step is not "secure," but rather a list:

  • CVE found, but unclear if affected
  • CVE found and affected
  • No CVE found (for this component)

Important for the proof: You document your data basis (BOM/SBOM), the point in time, and the sources from which you pulled CVEs.
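The classification above can be sketched as a small matching function. The advisory table here is hypothetical; in a real pipeline you would pull advisories from a source such as the NVD or the OSV.dev API and record the query timestamp as part of your evidence:

```python
# Sketch of step 1: classify each (name, version) pair against known CVEs.
# ADVISORIES is a made-up example table: CVE id -> set of affected versions,
# with None meaning the affected range is not machine-readable (unclear).
ADVISORIES = {
    "openssl": [("CVE-2024-0001", {"3.0.12", "3.0.13"})],
    "zlib":    [("CVE-2023-0002", None)],
}

def match_component(name, version):
    """Return (status, cve_ids) for one component."""
    hits = ADVISORIES.get(name, [])
    affected, unclear = [], []
    for cve_id, versions in hits:
        if versions is None:
            unclear.append(cve_id)
        elif version in versions:
            affected.append(cve_id)
    if affected:
        return ("affected", affected)
    if unclear:
        return ("unclear", unclear)
    return ("no_cve_found", [])

print(match_component("openssl", "3.0.13"))  # ('affected', ['CVE-2024-0001'])
```

Note that "no_cve_found" only means nothing matched in this data source at this point in time, which is exactly why the sources and timestamp must be documented.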


3.2 Step 2: "Is This Really Critical?" - Checking CISA KEV and ExploitDB

Once you've compiled a list from CVEs, the decisive filter comes: What is realistically exploitable or is being actively exploited?

a) CISA KEV (Known Exploited Vulnerabilities)

The CISA KEV list is essentially shorthand for: "These vulnerabilities are actually being exploited in the wild." If a CVE appears there, it is a strong signal for high urgency.

b) ExploitDB

ExploitDB is a collection of publicly available exploit examples. If an exploit exists there, the risk increases significantly because an attack is less "theoretical."

Pragmatic rule:

  • CVE in CISA KEV: highest priority
  • ExploitDB exploit available: very high priority
  • Neither present: then you need a probabilistic assessment (next step)

(Observer uses exactly such sources as its foundation, including CVE, ExploitDB, and CISA KEV.)
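The pragmatic rule above can be expressed as a tiny triage function. CISA publishes the KEV catalog as a JSON feed; the URL below reflects the feed location at the time of writing and may change. The demo runs on stub data so no network access is required:

```python
# Sketch of step 2: triage CVEs against KEV and ExploitDB signals.
import json
import urllib.request

# KEV feed location at the time of writing (may change):
KEV_FEED = ("https://www.cisa.gov/sites/default/files/feeds/"
            "known_exploited_vulnerabilities.json")

def fetch_kev_ids(url=KEV_FEED):
    """Download the KEV catalog and return the set of listed CVE ids."""
    with urllib.request.urlopen(url) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog["vulnerabilities"]}

def triage_priority(cve_id, kev_ids, exploitdb_ids):
    """Apply the pragmatic rule from the text."""
    if cve_id in kev_ids:
        return "highest"      # actively exploited in the wild
    if cve_id in exploitdb_ids:
        return "very_high"    # public exploit code exists
    return "needs_epss"       # fall through to probabilistic scoring

# Offline demo with made-up stub sets instead of live feeds:
kev_stub = {"CVE-2024-0001"}
edb_stub = {"CVE-2023-0002"}
print(triage_priority("CVE-2024-0001", kev_stub, edb_stub))  # highest
```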


3.3 Step 3: When There Is No "Hard Evidence" - EPSS (FIRST) as a Risk Estimate

Not every vulnerability is listed in KEV. Not every one has an exploit on ExploitDB. Yet it can still be relevant. This is where EPSS comes in: the Exploit Prediction Scoring System, maintained by FIRST (Forum of Incident Response and Security Teams).

What EPSS Does (Explained Simply)

EPSS provides a probability estimate of how likely it is that a vulnerability will actually be exploited in the near future. It's not an oracle, but a very practical prioritization lever.

Important: EPSS does not replace expert assessment. It helps you turn "a thousand CVEs" into an order that can actually be worked through in day-to-day operations.
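EPSS scores are available via a public API from FIRST; the endpoint and JSON shape below follow the documented interface, but treat the details as an assumption to verify against the current docs. The demo parses a canned response so it runs offline:

```python
# Sketch of step 3: look up an EPSS score via the FIRST API.
# Endpoint and response shape per the FIRST EPSS documentation
# ({"data": [{"cve": ..., "epss": ..., "percentile": ...}]}).
import json
import urllib.request

def fetch_epss(cve_id):
    """Query api.first.org for one CVE and return its EPSS probability."""
    url = f"https://api.first.org/data/v1/epss?cve={cve_id}"
    with urllib.request.urlopen(url) as resp:
        return parse_epss(json.load(resp))

def parse_epss(payload):
    """Extract the EPSS probability from an API response, or None."""
    data = payload.get("data", [])
    return float(data[0]["epss"]) if data else None

# Offline demo on a canned response in the documented shape (made-up values):
sample = {"data": [{"cve": "CVE-2024-0001", "epss": "0.02115", "percentile": "0.889"}]}
print(parse_epss(sample))  # 0.02115
```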


4. The Key Trick: Defining Your Own Threshold for "Relevant"

When you say "proof," you need a clear, traceable rule by which you decide. Otherwise, it's arbitrary.

4.1 Why a Threshold?

Without a threshold, one of two things typically happens:

  • You treat every CVE equally and drown in work.
  • You ignore too much and have no defensible argument.

A threshold is the middle ground: clear rule, reproducible, auditable.

4.2 Example of a Pragmatic Policy

You can define an internal logic such as the following:

A. Always treat as critical (regardless of EPSS):

  1. CVE is listed in CISA KEV
  2. CVE has an exploit in ExploitDB
  3. CVE directly affects exposed components (e.g., remote interfaces, auth, network services)

B. Otherwise EPSS-based:

  • Define an EPSS threshold above which you treat a CVE as "prioritized."
  • Everything below is documented but not necessarily fixed immediately, provided compensating measures are in place.

What matters is not which specific value is "correct." What matters is:

  • The value is documented
  • It is consistently applied
  • It is justified (e.g., resources, exposure, product type, customer requirements)
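Such a policy fits in a single decision function. The threshold value, field names, and exposure flag below are illustrative; the point is precisely that the rule is explicit, documented, and reproducible:

```python
# Sketch of the policy from 4.2 as one explicit decision function.
EPSS_THRESHOLD = 0.1  # example value; must be justified and documented internally

def is_relevant(cve):
    """cve: dict with keys in_kev, in_exploitdb, exposed, epss."""
    # A: always critical, regardless of EPSS
    if cve["in_kev"] or cve["in_exploitdb"] or cve["exposed"]:
        return True
    # B: otherwise EPSS-based
    return cve["epss"] is not None and cve["epss"] >= EPSS_THRESHOLD

# Made-up triage list for illustration:
triage = [
    {"id": "CVE-2024-0001", "in_kev": True,  "in_exploitdb": False, "exposed": False, "epss": 0.02},
    {"id": "CVE-2023-0002", "in_kev": False, "in_exploitdb": False, "exposed": False, "epss": 0.04},
]
prioritized = [c["id"] for c in triage if is_relevant(c)]
print(prioritized)  # ['CVE-2024-0001']
```

The second CVE falls below the threshold and is documented but not prioritized, which is exactly the defensible middle ground the policy is meant to produce.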

4.3 What the Proof Looks Like in Practice

Your proof ultimately consists of:

  • Product inventory (SBOM/HBOM, versions, timestamps)
  • List of discovered CVEs including source checks (KEV/ExploitDB/EPSS)
  • Decision per CVE (affected/not affected, fix/mitigation/accepted)
  • Measures and status (patch, config, workaround, update plan)
  • Approval/review (who decided, when, why)

This is what "no vulnerability problem" means in practice: no unresolved, unmanaged risk above your policy threshold.
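The evidence points above can be captured as one structured decision record per CVE. The schema here is illustrative, not a standard format; what matters is that every field from the list above has a place:

```python
# Sketch: one auditable decision record per CVE. Field names and all
# concrete values are illustrative, not a standardized schema.
import json
from datetime import datetime, timezone

record = {
    "product": "edge-gateway 2.4.1",
    "bom_snapshot": "sbom-2025-03-01.json",          # inventory + timestamp
    "cve": "CVE-2024-0001",
    "signals": {"kev": True, "exploitdb": False, "epss": 0.02},  # source checks
    "decision": "affected",                          # affected / not affected
    "measure": {"type": "patch", "target_version": "3.0.14", "status": "planned"},
    "review": {
        "by": "psirt-lead",
        "at": datetime.now(timezone.utc).isoformat(),
        "rationale": "KEV-listed; component reachable via remote interface",
    },
}
print(json.dumps(record, indent=2))
```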


5. Common Pitfalls (So You Don't Fall Into the Usual Traps)

  1. "No CVE found" does not mean "secure." It only means: nothing is known in your data sources.

  2. Versions are often the problem. SBOMs are only as good as their version accuracy, and industrial products typically involve custom versions, forks, backports, and OEM variants.

  3. Affectedness is not binary. "CVE present" does not equal "exploitable." Exposure (network), configuration, and actual usage decide.

  4. Proof without a process is a one-time snapshot. Auditors and customers want to see that you do this regularly, not just once before the deadline.


6. How the Observer Supports This Process

The process described above sounds logical but is brutally manual: collecting data, matching, consolidating, prioritizing, documenting. This is exactly what the Observer is designed for.

6.1 What the Observer Does at Its Core

The Observer provides a structured, consolidated view of vulnerabilities and risks across your products and turns it into a workflow-ready process.

Specifically, the approach follows a clear sequence:

  1. Import BOMs (software and hardware, including relevant identifiers)
  2. Detect and consolidate vulnerabilities
  3. Prioritize what truly matters
  4. Remediation and evidence management (decisions, status, evidence)

6.2 Which Sources Observer Uses for Classification

Observer builds on established data sources and standards, including:

  • CVE as an identifier system
  • ExploitDB as an exploit signal
  • CISA KEV as an "actively exploited" signal
  • Additional relevant data sources and formats for advisory and vulnerability information

This gives you exactly the signals you need for your proof: not just "there is a CVE," but also "is it realistically exploitable and prioritized?"

6.3 What This Means for the Proof

With Observer, "someone googled it once" becomes a robust, repeatable proof:

  • Inventory and vulnerability status are linked
  • Prioritization can be based on clear signals (KEV/Exploit/EPSS policy)
  • Decisions and measures are documented traceably
  • The whole process is repeatable across the product lifecycle

That is exactly the difference between an assertion and a proof.


Cyber Resilience Act: September 11, 2026. From then on, product security is a legal obligation. No evidence, no CE marking.

