As security professionals, we’ve all been handed a long list of vulnerabilities in our systems with limited time and budget to address them. In that situation, the next question is often “How should I go about prioritizing these?” That question can be very hard to answer when we don’t have clear data about how our applications will actually be attacked.
It was with the “how” question in mind that Jeremiah Grossman, CEO of Bit Discovery, and I sat down to discuss how we can better enable that decision in a recent webinar entitled “Why All These Vulnerabilities Rarely Matter”—also available on demand.
It’s clear that the number of CVEs has grown rapidly since 2017. In fact, there are already far more CVEs than any application security team could possibly keep up with, and this count doesn’t include vulnerabilities in custom products built on this software, of which there are undoubtedly many.
This trend reinforces the need to prioritize “what” to fix, and the trade-off isn’t always between two security fixes: should we ship this feature or fix this “potential” vulnerability? This is a familiar and sometimes painful conversation for security teams. So how should we decide what to fix?
In the Prioritization to Prediction series, we’ve reported that only a small percentage of the tens of thousands of vulnerabilities released every year are detected as exploited in the wild. Why? We asked our audience to help us hypothesize, and interestingly, the vast majority theorized that most of these vulnerabilities simply aren’t exploitable, or are too difficult to exploit in practice.
So if we accept the premise that attackers aren’t finding, exploiting, or even caring about most of these vulnerabilities, as we can infer from our poll and the aforementioned data, then the value of discovering them, or even looking at them in the first place, becomes questionable, right? How much coverage do we need to ensure we catch the set of vulnerabilities that will eventually be exploited?
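To make the coverage question concrete, here’s a toy Python sketch, using made-up CVE identifiers rather than real data, of how you might measure after the fact what fraction of eventually-exploited vulnerabilities your tooling actually surfaced:

```python
# Toy illustration of the coverage question: of the vulnerabilities that
# were eventually exploited, how many did our discovery tooling surface?
# All CVE identifiers below are invented for the sake of the example.

discovered = {"CVE-2021-0001", "CVE-2021-0002", "CVE-2021-0003", "CVE-2021-0004"}
exploited_in_wild = {"CVE-2021-0002", "CVE-2021-0005"}

covered = discovered & exploited_in_wild
coverage = len(covered) / len(exploited_in_wild)

print(f"Coverage of exploited vulnerabilities: {coverage:.0%}")  # prints 50%
```

The uncomfortable part, of course, is that the `exploited_in_wild` set is only knowable in hindsight, which is exactly why the prediction problem matters.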
And, more importantly, how can we best identify these yet-to-be-exploited vulnerabilities? In a perfect world we’d use all the technology and techniques available—there’s a wide array of technologies and tooling for application teams in a variety of categories: Traditional Vulnerability Assessment (VA), Static Analysis (SAST), Software Composition Analysis (SCA), Dynamic Analysis (DAST), Bug Bounty and Manual Analysis.
But what if we had a limited budget and were looking at just SAST and DAST, as many security teams are forced to do? We took some liberties in this part of the discussion, as there’s limited data about the overlap of vulnerability discovery between these two technologies, and even Kenna’s data is limited in this respect today; we welcome research in this area. Informal conversations within the community lead us to believe that only a small percentage (5-15%) of unique vulnerabilities are found by both SAST and DAST.
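For a sense of what that overlap estimate means in practice, here’s a minimal sketch. It assumes findings from both tools have already been normalized into comparable identifiers, which is the hard part in real life; the identifiers below are invented for illustration:

```python
# Rough sketch of estimating SAST/DAST overlap on deduplicated findings.
# In practice the hard part is normalizing findings from the two tools
# into comparable identifiers; here we assume that's already been done.

sast_findings = {"sqli:/login", "xss:/search", "hardcoded-secret:config.py"}
dast_findings = {"sqli:/login", "open-redirect:/auth", "xss:/profile"}

found_by_both = sast_findings & dast_findings
all_unique = sast_findings | dast_findings

overlap_pct = len(found_by_both) / len(all_unique)
print(f"Found by both tools: {overlap_pct:.0%} of all unique findings")  # 20%
```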
So, coming back to our original “how” question: now that we have a set of vulnerabilities from these two technologies, which of them will eventually be exploited by attackers? We spent some time discussing where that exploited set should go and eventually placed it largely over the set of vulnerabilities that DAST found, with a smaller overlap with SAST.
While this conversation may seem to pit these technologies against each other, the reality is product teams need both to be successful, and both are crucial to visibility. It’s apples and oranges, or maybe in the case of our images, blueberries and grapes.
SAST is often used by security teams to gauge the general health of the application, giving them an understanding of churn and relative code quality. These are key metrics when deciding where to place forward investments, such as education and training initiatives. DAST, by contrast, is often better at finding and triaging immediately exploitable vulnerabilities that are already in production, and it is often the first investment teams make to ensure there is no “low-hanging fruit” for attackers as the application goes into production.**
So where do we go from here? Prioritization is the key to successful vulnerability management in both infrastructure and product security. If we want to prioritize based on attacker behavior, we need sources of data about that behavior. We can approximate it by choosing technologies appropriate to our goals, but the more data we all have about what attackers are actually doing, the better we can prioritize.
We concluded the webinar by talking about the temporal nature of prioritization: the decisions we make today may no longer be appropriate tomorrow. Determining an acceptable level of risk depends on information that can, and so often does, change. Effective prioritization requires a dynamic system in which you’re constantly watching what attackers are doing and reprioritizing regularly.
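As a rough illustration of what such a dynamic system might look like, here’s a minimal Python sketch with a made-up scoring rule that weights evidence of active exploitation above static severity. The rule, fields, and data are assumptions for illustration, not Kenna’s actual model:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float              # static severity, 0-10; rarely changes
    exploited_in_wild: bool  # threat-intel signal; changes over time

def priority(v: Vuln) -> float:
    # Made-up rule: active exploitation dominates; CVSS breaks ties.
    return (100.0 if v.exploited_in_wild else 0.0) + v.cvss

def reprioritize(vulns: list[Vuln]) -> list[Vuln]:
    # Re-run whenever threat intel updates, not just at scan time.
    return sorted(vulns, key=priority, reverse=True)

backlog = [
    Vuln("CVE-2021-0001", cvss=9.8, exploited_in_wild=False),
    Vuln("CVE-2021-0002", cvss=6.5, exploited_in_wild=True),
]
for v in reprioritize(backlog):
    print(v.cve, priority(v))  # the actively exploited CVE ranks first
```

The point isn’t the specific scoring rule; it’s that the `exploited_in_wild` signal is an input that moves, so the sorted order has to be recomputed as the threat landscape changes.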
**Aside: After the webinar, I thought a bit more about where we placed the “Vulnerabilities Exploited By Adversaries” dot and realized that we needed to adjust it downward significantly. DAST and SAST simply won’t find all the vulnerabilities that will eventually be exploited, and they are often just a starting point for manual testing or a bug bounty program. Technologies like SCA are also crucial for identifying exploitable CVEs in any given product. More data required.