Good, Better, Best: What Matters in Vulnerability Remediation
Never cared for what they say
Never cared for games they play
Never cared for what they do…
Forever trust in who you are
And nothing else matters.
— NOTHING ELSE MATTERS, METALLICA
We are well underway in our ongoing research with the Cyentia Institute on the patterns and trends of modern vulnerability management practices. For the sake of getting right to the good stuff (read: the latest research results), I’ll link to my earlier posts with previous results at the end of this blog and anyone who’s new to this discussion can catch up at their leisure. With that, let’s get to it.
In this latest volume of the series we take the next step in our research by looking at the outcome metrics (hard data) from volume 3 and then asking the top performers “why?” (soft data). We examined many of the softer characteristics of these top performers’ vulnerability management programs to try to correlate them with positive or negative outcomes. Additionally, we looked at different classes of performance, knowing that some organizations may opt for speed or coverage while others may be more focused on efficiency or remediation capacity.
In essence, we want to know what it is, exactly, that high-performing organizations do differently so that we can duplicate those characteristics and collectively get better at remediation. Read the report for the full findings, but here are a few results I found most interesting and/or surprising:
- Maturity matters: Interestingly, those organizations that gave their own vulnerability management (VM) program high marks for maturity do in fact perform better. Those companies had strong remediation performance across almost all measures, seeing a significant correlation with better coverage, velocity, and capacity to address the vulnerabilities in their environments. No Dunning-Kruger effect here!
- Structure & budget are key: Enterprises that split VM responsibilities between different internal organizations cut their average time to remediate vulnerabilities by a month and a half and were less likely to fall into vulnerability debt. In addition, those who felt their budget for remediation was adequate did, in fact, remediate faster. Neither of these findings felt all that surprising, as both seem reflective of organizational maturity, with more mature organizations securing better budgets and maintaining distinct centers of security and IT.
- Patch management tools to the rescue: Unsurprisingly, companies that employed centralized patch management tools across a majority of their infrastructure addressed 20% more high-risk vulnerabilities, were 10% more accurate at targeting the riskiest vulnerabilities, and were able to handle 22% more vulnerabilities than those that did not. However, I was surprised to see that reliance on auto-update does not have a statistically significant impact on performance.
- Prioritization in Kenna scores: It was great to see that customers using our Risk Meter to prioritize remediation scored significantly better than others on many performance metrics, including remediation velocity. The chart below shows vulnerability half-life, a measurement of how many days it takes to remediate half of any cohort of vulnerabilities. As you can see, those using Kenna to prioritize took about 68 days to remediate half of their vulnerabilities, while those using other prioritization methods took roughly 114 days to do the same.
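To make the half-life metric concrete, here is a minimal sketch (not from the report) of how it could be computed for a fully closed cohort, where every vulnerability has a known time-to-remediate; in that simple case the half-life is just the median. The function name and sample data are hypothetical, and real analyses of this kind typically use survival curves to account for vulnerabilities that remain open.

```python
import statistics

def vulnerability_half_life(days_to_remediate):
    """Days until half of a vulnerability cohort has been remediated.

    For a cohort in which every vulnerability was eventually closed,
    this is the median time-to-remediate across the cohort.
    """
    return statistics.median(days_to_remediate)

# Hypothetical cohort: time-to-remediate (in days) for five vulnerabilities.
cohort = [10, 50, 68, 90, 114]
print(vulnerability_half_life(cohort))  # → 68
```

Under this definition, saying one group has a 68-day half-life and another a 114-day half-life means half of the first group's vulnerabilities were closed in 68 days, while the second group needed 114 days to reach the same halfway point.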
- Cloud does not impact the view: Interestingly, and surprisingly for me, the percentage of cloud assets under management does not have a strong correlation, either positive or negative, with remediation velocity. There are any number of reasons that could explain this result (and I have several hypotheses that you can hear more about in my webinar with Ben Edwards from Cyentia discussing the research). Further analysis may well be warranted.
I recommend reading the full report, but if you just want the highlights of what you can take from these results to make your own program more efficient and effective, we’ve compiled the following chart. The larger the dot, the more impact an action had on that measure of performance, and red is a negative impact while blue is positive.
I hope that this research and our continued efforts help us all get better at remediation.
Catch up on my previous posts on earlier research volumes:
Volume 1: Vulns Will Survive