Stop Fixing All The Things – Our BSidesLV Talk

Aug 6, 2013
Michael Roytman
Chief Data Scientist


Last week at BSidesLV, Ed Bellis and I presented our view on how vulnerability statistics should be done. We think it’s a different and useful approach to vulnerability assessments.

Our contention is that the definitions of vulnerabilities in NVD and OSVDB are just that – definitions. As security practitioners, we care about which vulnerabilities matter. Much like looking at a dictionary doesn’t tell you anything about how often people use a common word such as “the,” a vulnerability database tells one nothing about which vulnerability trends matter. We solve this problem by using live, currently open vulnerabilities to do our assessments.

Since the slides are linked below and the recording is available here, in this blog post I want to give the quick-and-dirty summary without the theatrics, as well as point to some other interesting directions this type of work could go.

What we had to work with:

1. 23,000,000 live vulnerabilities across 1,000,000 real assets, which belong to 9,500 clients.

2. 1,500,000 real breaches which occurred between June and July of 2013 and were correlated to 103 CVEs.

What we did:

1. Treated the two samples of breaches and vulnerabilities (incorrectly, but usefully) as coming from the same population.

2. Calculated the conditional probabilities of certain types of vulnerabilities being live-breached vulnerabilities.
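
If you want to play along at home, here's a minimal sketch of that calculation. The file names, column names, and boolean exploit-DB flags are made up purely for illustration; this is not our production pipeline.

```python
import pandas as pd

# Hypothetical inputs (illustrative names only):
#   open_vulns.csv -> one row per live vulnerability instance:
#                     cve, cvss, in_metasploit (bool), in_exploitdb (bool)
#   breaches.csv   -> breach events already correlated to a CVE
vulns = pd.read_csv("open_vulns.csv")
breached_cves = set(pd.read_csv("breaches.csv")["cve"])

# Flag each open vulnerability whose CVE shows up in the breach data.
vulns["breached"] = vulns["cve"].isin(breached_cves)

def p_breached(selected):
    """P(vulnerability maps to a breached CVE | the policy selects it)."""
    return vulns.loc[selected, "breached"].mean()

print(f"random:      {vulns['breached'].mean():.1%}")
print(f"CVSS 10:     {p_breached(vulns['cvss'] == 10):.1%}")
print(f"Exploit DB:  {p_breached(vulns['in_exploitdb']):.1%}")
print(f"Metasploit:  {p_breached(vulns['in_metasploit']):.1%}")
print(f"MSF + EDB:   {p_breached(vulns['in_metasploit'] & vulns['in_exploitdb']):.1%}")
```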

Why it’s important:

While not statistically accurate, this is a great way to compare two policies for remediating vulnerabilities. Should you fix only CVSS 10s? Or 9s? Maybe you should patch everything. We try to answer the age-old question: if you have $100 and really care about information security, how do you spend it?

What we found:

1. The best policy was fixing vulnerabilities with entries in both Metasploit and Exploit DB, yielding about a 30% success rate, or 9x better than anything CVSS gets to, and 15x better than random.

2. Randomly picking vulnerabilities gives you about a 2% chance of remediating a truly critical vulnerability (that is, one with breaches observed in the past two months).

3. Randomly remediating a CVSS 10 vulnerability gives you a 3.5% chance of fixing a critical vulnerability.

4. If your policy is one of fixing vulnerabilities in Exploit DB, you have a 13% chance of remediating vulns with observed breaches.

5. Metasploit only? 25% chance.

Here’s how it looks on paper:

[Chart: probability that a remediated vulnerability had observed breaches, by remediation policy (random ~2%, CVSS 10 ~3.5%, Exploit DB ~13%, Metasploit ~25%, Metasploit + Exploit DB ~30%).]
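
To make the $100 question concrete: if, purely for the sake of argument, every remediation costs the same, the rates above translate directly into how many truly critical vulnerabilities you knock out per 100 fixes, and the ratios recover the 15x-over-random and roughly 9x-over-CVSS figures quoted above. The dollar-per-fix assumption below is illustrative only.

```python
# Chance that a vulnerability selected by each policy is a breached
# ("truly critical") one, taken from the findings above.
policies = {
    "random":                  0.02,
    "CVSS 10":                 0.035,
    "Exploit DB":              0.13,
    "Metasploit":              0.25,
    "Metasploit + Exploit DB": 0.30,
}

fixes = 100  # pretend your $100 buys 100 remediations at a dollar apiece (illustrative)

for name, p in policies.items():
    print(f"{name:<24} ~{fixes * p:.1f} critical fixes per {fixes}, "
          f"{p / policies['random']:.1f}x better than random")

# 0.30 / 0.02  = 15x better than random
# 0.30 / 0.035 ~ 8.6x, i.e. roughly 9x better than the best CVSS-based policy
```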

What else we should all think about given this new information:

1. SushiDude and Attrition.org gave a cool talk at BlackHat about vulnerability statistics bias. A lot of it had to do with how people use definitions. What does everyone think about how bias in vulnerability databases affects our approach of using live vulnerabilities for statistics? My contention is that the bias is mitigated quite well. As a thought experiment, imagine a perfect vulnerability database, with every vulnerability ever discovered and no bias. Would looking at that vulnerability database help you decide what you should remediate next?

2. Frank Artes from NSS Labs gave an excellent talk at BSidesLV about which vulnerabilities blow past IDS and second-gen firewall systems. It was interesting to hear that 26% of Metasploit exploits go undetected by IDS systems. How can live breach statistics account for those undetected Metasploit modules or, worse, blackhat exploit kits? What can live vulnerability statistics do to increase vendor awareness about the missing metasploits (a hashtag if I’ve ever heard one)? We hope to be able to illustrate which of the missing ones are top priority for _vendors_ to include in their detection packages, based on the prevalence of live vulnerabilities susceptible to those exploits in the wild. A rough sketch of that idea follows below.
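
As a rough illustration of that last point (again with hypothetical file and column names, not our actual data model), you could rank the undetected Metasploit modules by how many live vulnerability instances they can actually reach:

```python
import pandas as pd

# Hypothetical inputs:
#   open_vulns.csv         -> live vulnerability instances, with the Metasploit
#                             module that exploits them (asset_id, cve, msf_module)
#   undetected_modules.csv -> modules that coverage testing shows slipping past
#                             IDS/firewall detection
live = pd.read_csv("open_vulns.csv")
undetected = set(pd.read_csv("undetected_modules.csv")["msf_module"])

# Rank the "missing metasploits" by the number of live, exposed instances each can hit.
vendor_priority = (
    live[live["msf_module"].isin(undetected)]
    .groupby("msf_module")
    .size()
    .sort_values(ascending=False)
)
print(vendor_priority.head(10))
```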

As always, your thoughts and feedback are very welcome. Stay tuned for more on breach/vulnerability correlation and the insights we glean from Risk I/O data. And you can follow us on Twitter @RiskIO.
