Why Vulnerability Scores Can’t Be Looked at in a Vacuum
Sometimes a number is just a number. Context – the information and environment around the number – is what really matters. This concept holds especially true in vulnerability management and risk scoring.
This is one of the important lessons I drew from a recent conversation I had with a customer. In truth, I’ve had versions of this discussion many times over the years, and it usually starts with a new vulnerability making headlines.
In this case, the vulnerability in question was a Microsoft Defender bug, CVE-2021-1647.
The conversation focused on the difference between our risk score for this CVE and its CVSS score: we scored CVE-2021-1647 at 51 on a 100-point scale, while CVSS scored it at 7.2 on a 10-point scale.
In theory, a score of 51 on a 100-point scale would seem to sit in the middle of the risk distribution, while a 7.2 CVSS score lands somewhere between medium and high risk. But when you look a little closer at how this vulnerability ranks against all other vulnerabilities, you get a different picture. A Kenna Risk Score of 51 means that just 4.6 percent of all vulnerabilities rank higher.
With a 7.2 CVSS score, nearly 30 percent of vulnerabilities are scored higher.
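The point here is that percentile placement, not the raw number, conveys relative risk. A minimal sketch of that idea in Python (the sample score list below is entirely hypothetical, for illustration only):

```python
# Sketch: where a score sits in a distribution matters more than the
# raw number. The sample scores are made up for illustration; only the
# idea of ranking by percentile comes from the article.

def percent_scored_higher(score, all_scores):
    """Return the percentage of vulnerabilities scored strictly higher."""
    higher = sum(1 for s in all_scores if s > score)
    return 100.0 * higher / len(all_scores)

# Hypothetical population of risk scores on a 100-point scale.
kenna_scores = [12, 25, 32, 38, 45, 51, 60, 72, 88, 95]

print(percent_scored_higher(51, kenna_scores))
```

Running the same comparison against two differently shaped distributions shows how an identical raw score can represent very different relative risk.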
In other words, our scoring system recognizes that far fewer vulnerabilities are more dangerous than CVE-2021-1647 than the CVSS scale would suggest.
As you can see from the two charts above, the CVSS distribution skews heavily toward the high end, whereas the median Kenna Risk Score is 32. Our scoring system reflects a relatively simple truth: roughly 80 percent of vulnerabilities have no published exploit. In other words, the risk profile for the majority of vulnerabilities is low.
Simply put, the raw number from a scoring system can't be looked at in a vacuum. What matters is the distribution of the ranking system and where a vulnerability sits within it. Security teams are constantly weighing resources, uptime, and a whole host of other factors when deciding what to patch and when. Because most organizations have finite resources for patching, that relative placement is extremely effective for prioritizing mitigation work.
Let’s take that one step further — we shouldn’t even limit the importance of a risk score to its location on a distribution curve.
Vulnerabilities affect assets, applications, and entire networks. When deciding which vulnerability to fix next, the risk score is important, but you should also ask how an exploit would affect the asset it sits on. Does the asset hold financial information? Is it part of a regulated environment? How many users would be affected?
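One way to picture those questions is to fold asset context into the score itself. The sketch below is purely illustrative: the field names and weighting factors are hypothetical assumptions of mine, not Kenna's actual method.

```python
# Sketch: adjusting a 0-100 risk score by asset context. The multipliers
# and asset fields here are hypothetical, chosen only to illustrate the
# questions in the text (financial data, regulation, affected users).

def contextual_priority(risk_score, asset):
    """Scale a risk score by simple, assumed asset-context multipliers."""
    multiplier = 1.0
    if asset.get("holds_financial_data"):
        multiplier += 0.5          # assumed bump for financial data
    if asset.get("regulated"):
        multiplier += 0.3          # assumed bump for regulated environments
    # Assumed bump per thousand affected users, capped so it can't dominate.
    multiplier += 0.1 * min(asset.get("user_count", 0) // 1000, 5)
    return risk_score * multiplier

server = {"holds_financial_data": True, "regulated": False, "user_count": 2500}
print(contextual_priority(51, server))
```

The exact weights are beside the point; what matters is that two assets carrying the same CVE can land in very different places on a patching queue once context is applied.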
There’s a lot of context to consider when measuring risk. We can’t treat vulnerability scores like a punch list. The more we can put these scores into context, the safer we can make all environments.