Adopting A Real-Time,
Data-Driven Security Practice

How Security Practitioners Can Prioritize Vulnerability Scanning Data to Make Intelligent Decisions that Minimize Risk

Introduction

A typical organization today may have millions of vulnerabilities across its networked infrastructure and applications, yet just a few of those vulnerabilities are responsible for most successful Internet breaches. Vulnerability volume, density, or a score assigned last year is not nearly as important as the actual, real-time risk that a given vulnerability poses to an organization. Security teams need more than raw vulnerability scanning results; they need to know which vulnerabilities constitute a real threat. In this white paper, we’ll identify how an organization can use its data to make intelligent decisions that minimize risk to applications and infrastructure.

Security Is Now A Data Problem

The challenge security teams face in managing and remediating security defects has evolved in recent years. Basic vulnerability scanning is no longer the challenge. Organizations face a number of issues as part of their vulnerability management programs, not the least of which is data management. Mature security teams are assessing risk across all of their asset layers, including their applications, databases, hosts, and networks. Any group dealing with a sizable environment isn’t struggling with finding security defects, but rather with managing the mountain of data produced by their vulnerability assessments, penetration testing, and threat modeling in order to fix what’s most important first.

Being able to effectively, quickly, and transparently use limited resources to address an overwhelming number of security flaws is the present challenge. A May 2013 survey of more than 200 organizations worldwide found that 40% of organizations are overwhelmed by the vulnerability scans and other security data they collect [1]. Of the organizations surveyed with more than 1,000 employees, 83% collect more than 50GB of log data daily. While the raw data alone is daunting, even more daunting is the number of potential resolutions to each vulnerability. Indeed, vulnerability management has been the essential unsolved problem in security for years [2].

Current risk assessment methodologies do not fit real “in the wild” attack data. Why fix it if it’s not broken? Because it’s quite broken. Mauricio Velazco, the head of vulnerability management at the Blackstone Group, drives the point home in an article in which he explained, "We have to mitigate risk before the exploit happens. If you try to mitigate after, that is more costly, has more impact, and is more dangerous for your company." [3] Current prioritization strategies based on the Common Vulnerability Scoring System (CVSS), and subsequent adaptations of such scores, have two fatal flaws:

Flaw 1: Current Risk Assessment Lacks Information About What Types of Attacks Are Possible

Allodi and Massacci, researchers at the University of Trento in Italy, found that only 2.4% of vulnerabilities in the National Vulnerability Database have actual attacks logged in the Symantec Threat Exchange [4]. In 2011, security researcher Dan Guido analyzed the vulnerabilities exploited by the top exploit toolkits used by attackers and found that only 27 of the roughly 8,000 vulnerabilities released over two years were actually included in the kits [5].

Flaw 2: Current Risk Assessment Methodologies Lack a Real-Time Component

Attackers do not go after the same vulnerability month after month, week after week, hour after hour. If certain types of attacks are failing, they change strategies [6].

Figure 1: Data Breaches by CVE Publication Year

Breach data gathered from 20,000 organizations worldwide illustrates the constantly shifting nature of attack patterns. Attackers change which types of vulnerabilities they exploit daily, but current risk assessment strategies are based on CVSS scores, which are often assigned years before an organization decides whether to patch or forgo patching a vulnerability. Risks change in real time. Your risk assessment methodology should be real-time as well.

Real-Time, Data-Driven Security - Easier Said Than Done

So how does an organization utilize this plethora of data to make intelligent decisions that minimize risk to applications and infrastructure? Several things must be done in order to make this information valuable and actionable:

Action 1: Correlate and Clean Vulnerability Scanner Data

Weeding out the false positives and identifying false negatives takes quite a bit of work. False positives need to be removed from vulnerability assessment results by testing out potential exploits, while multiple data sources are used to flag potential false negatives. Once the security team has a degree of confidence in its result set, the next step is to begin the correlation process. When best-of-breed solutions are used for each layer of your vulnerability management program, you’ll often run into the same vulnerability identified multiple times by different sources. Additionally, you may have multiple findings, such as a SQL injection vulnerability flagged on several fields of the same form, that require only one fix by a developer. All of this adds up to more time for your security analyst to weed through the data.
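To make the correlation step concrete, here is a minimal Python sketch of the de-duplication logic described above. The finding records and field names are hypothetical; real scanner exports vary widely and need per-scanner normalization before a pass like this can run.

```python
from collections import defaultdict

# Hypothetical normalized findings: scanner, asset, CVE or rule ID, location.
findings = [
    {"scanner": "nessus",  "asset": "web01", "id": "CVE-2013-0422", "location": None},
    {"scanner": "qualys",  "asset": "web01", "id": "CVE-2013-0422", "location": None},
    {"scanner": "appscan", "asset": "app.example.com", "id": "SQLi", "location": "/login:username"},
    {"scanner": "appscan", "asset": "app.example.com", "id": "SQLi", "location": "/login:password"},
]

def dedup_key(finding):
    # Two scanners reporting the same CVE on the same asset is one defect.
    # Several SQLi findings on fields of the same form likely share one fix,
    # so collapse the location down to the page that needs the code change.
    page = finding["location"].split(":")[0] if finding["location"] else None
    return (finding["asset"], finding["id"], page)

merged = defaultdict(list)
for finding in findings:
    merged[dedup_key(finding)].append(finding["scanner"])

for key, scanners in merged.items():
    print(key, "reported by", sorted(set(scanners)))
# Four raw findings collapse to two remediation items.
```

In practice, the normalization step, mapping each scanner’s identifiers and asset names onto a common model, is where most of the analyst time goes.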

Action 2: Correlate Between Disparate Data Sources, Gather Context

Of course, mapping these assets and defects is only the first step in understanding the risk. The team also needs to understand the value of its assets (and asset groups) and apply a risk scoring and ranking system to the identified vulnerabilities. When aggregated and interpreted appropriately, these data points can highlight defects that might otherwise have been overlooked, giving the organization a contextual understanding it previously did not have. While the low-hanging fruit is usually straightforward to address, taking the next step becomes a needle-in-the-haystack problem for security teams. Done manually, it can mean many hours dedicated to mining vulnerability assessment, penetration testing, and threat modeling result sets. Having worked in the manual trenches of security defect data for some time, we’re looking to solve many of these problems through automation and security data intelligence.
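As a toy illustration of that contextual step, the sketch below weights each vulnerability’s technical severity by the value of the asset it sits on. The asset values and the multiplicative weighting are assumptions chosen for illustration, not a description of any particular scoring algorithm.

```python
# Hypothetical asset values on a 1-10 business-criticality scale.
asset_value = {"payments-db": 10, "intranet-wiki": 2}

vulns = [
    {"asset": "intranet-wiki", "cve": "CVE-2013-1493", "severity": 9.3},
    {"asset": "payments-db",   "cve": "CVE-2012-1675", "severity": 6.8},
]

def contextual_risk(vuln):
    # Severity alone would rank the wiki flaw first; weighting by asset
    # value surfaces the lower-severity flaw on the critical database.
    return vuln["severity"] * asset_value[vuln["asset"]]

for vuln in sorted(vulns, key=contextual_risk, reverse=True):
    print(vuln["asset"], vuln["cve"], round(contextual_risk(vuln), 1))
```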

Action 3: Relate Asset Groups/Types of Risk to Each Other

Oftentimes, to understand the risk currently exposed by a given platform, you’ll need to map all of the platform’s assets together along with their related security vulnerabilities. In other words, a web application is made up of an entire stack of assets: a custom-developed application, off-the-shelf software, backend databases, servers, and network devices. Mapping these assets together gives security and the management team a better view into the overall risk of a platform and offers some insight into how adjacent vulnerabilities may be increasing that risk.
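A minimal sketch of that mapping, with hypothetical assets and scores, rolls every component of a platform’s stack into one view so the worst exposure anywhere in the stack is visible at the platform level:

```python
# Hypothetical platform composed of a full stack of assets.
platform = {
    "storefront": ["storefront-app", "mysql-01", "rhel-host-7", "edge-fw-2"],
}

# Per-asset vulnerability scores from this scan cycle (illustrative).
vuln_scores = {
    "storefront-app": [7.5],        # custom code: XSS
    "mysql-01": [6.8],              # off-the-shelf database flaw
    "rhel-host-7": [9.3, 4.0],      # OS-level issues
    "edge-fw-2": [],                # clean this cycle
}

for name, assets in platform.items():
    scores = [s for asset in assets for s in vuln_scores[asset]]
    # A platform is at least as exposed as its worst component, and
    # adjacent flaws (host compromise plus a DB flaw) compound the risk.
    print(name, "worst:", max(scores), "open vulnerabilities:", len(scores))
```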

Identify And Fix Real Threats In Your Environment

Software engineers select technologies based on threat expectations, that is, their anticipation of attacks, but there is rarely statistical data to support those expectations [7]. Kenna's Risk Meter closes that gap. It is an asset-based risk assessment methodology that quantifies the probability that a specific vulnerability on a specific asset will be exploited, and it can be applied to an entire infrastructure or to any subset of assets specified by the user.

The Risk Meter includes near real-time breach and threat data, continuously scans Exploit Database, Shodan, and Metasploit for new exploits, and calculates the popularity of vulnerabilities in the wild. By intelligently correlating all of this data with your vulnerability scanner results and environment, the Risk Meter addresses the challenges facing a security team in one fell swoop, all while providing your organization with actionable, quantitative metrics about your risk posture.

Vulnerability data is correlated and cleaned automatically. Regardless of the scanning technology or the number of scanners, Kenna's threat processing engine de-duplicates vulnerability data and maps it to assets. Additionally, false positives are weeded out with automated Metasploit tests, and naturally occurring false negatives between scanning technologies are automatically identified.

All Of Your Threat Intelligence And More In One Place

The Risk Meter uses a proprietary vulnerability and asset scoring algorithm which relates your vulnerability data to near real-time breaches across 20,000 organizations, trends in vulnerabilities across 2 million assets, and freshly updated Exploit Database, Metasploit, and Shodan data. This ensures that the way you prioritize remediation decisions is timely, effective, and reflective of real threats.

The Risk Meter allows simple comparisons between asset groups. Whether it’s all the assets in San Francisco, or all the Windows XP SP2 machines in your environment, or everything inside the DMZ, each asset grouping poses different risks, and often has different vulnerabilities. Proprietary scoring algorithms allow an easy comparison between asset groups, no matter how large or small. Since Risk Meter scores are augmented with the threat data mentioned above, Kenna predicts which vulnerability on every asset in your environment is the most likely to be exploited. This happens as new data comes in, whether from closing a vulnerability or from a threat feed being updated, which equates to painless real-time risk assessments. Observing risk meters across different asset groups allows you not only to compare the risk these groups pose, but also to track emerging threats in real time and to benchmark progress.

The Risk Meter allows an organization to harness the true promise of financial risk analysis within their security practice. Recent research from Carnegie Mellon University explains why current models for financial risk analysis work poorly in security, and how to fix the problem:

Much of the mismatch between security technology data and financial analysis methods arises from the fact that the security technology data is expressed on ordinal scales ("X is more effective than Y") but the analysis methods are designed for data expressed on a ratio scale… [There are three] general approaches to resolving this discrepancy between the data and the analysis tools:

1. Increase the precision of the data enough to convert the qualitative rankings to quantitative measures
2. Find analysis techniques that require less precision
3. Demonstrate that the analysis technique of interest preserves the relations of the qualitative data [8]

The Risk Meter uses all three of the prescribed methodologies in order to make complex, variable security data easily accessible and quantitative.
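As an illustration of the first approach, consider replacing an ordinal severity bucket with an empirical exploitation probability computed from observed outcomes. The counts below are invented for the example; the real Risk Meter derives its estimates from live breach and exploit feeds.

```python
# (has public exploit, severity bucket) -> (vulns observed, vulns exploited).
# Invented counts, for illustration only.
observed = {
    (True,  "high"):   (1_200, 310),
    (True,  "medium"): (900, 95),
    (False, "high"):   (40_000, 120),
    (False, "medium"): (60_000, 45),
}

def p_exploit(has_exploit, bucket):
    # Relative frequency as a simple probability estimate: this converts
    # the qualitative ranking into a quantitative, ratio-scale measure.
    seen, exploited = observed[(has_exploit, bucket)]
    return exploited / seen

print(f"{p_exploit(True, 'medium'):.3f} vs {p_exploit(False, 'high'):.3f}")
```

The point of the exercise is that ordinal labels can invert once attack data is taken into account: a "medium" with a public exploit can be far more likely to be exploited than a "high" without one.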

Methodology 1: CVSS and Other CVSS-Based Scoring Systems Are Not Granular Enough

While the scale itself offers 101 possible values (0.0-10.0 in increments of 0.1), only about 20 of those values are attainable given the way the scores are constructed. In effect, the scores are only useful for making “low,” “medium,” or “high” decisions, sometimes with a “critical” ranking added [9]. Patrick Toomey of Neohapsis Labs puts it best when he writes, “We don’t measure football fields in inches for a reason.” In football, that reason is that nobody cares about “third down, three yards and two inches to go.” The information is simply irrelevant; we don’t need that granularity to assess the situation. In security, we do need precision, but standard scoring mechanisms don’t offer enough of it.

Now compare the CVSS score distribution to the Risk Meter score distribution for the same vulnerabilities (Figure 2). Using breach, exploit, and scaled CVSS data, we are able to create a Gaussian (normal) distribution of scores, to which standard statistical methods can be applied for risk analysis.

Figure 2: CVSS vs. Risk Meter Score Distributions
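The granularity gap is easy to demonstrate with synthetic data. The sketch below contrasts how few distinct values a CVSS-style score takes against a score blended from several continuous signals; averaging independent factors is also why the blended distribution tends toward the Gaussian shape in Figure 2. The attainable-score list and the choice of six signals are illustrative assumptions.

```python
import random
from collections import Counter

random.seed(0)

# CVSS vectors collapse onto a small set of attainable scores.
attainable_cvss = [2.6, 4.3, 5.0, 6.8, 7.5, 9.3, 10.0]
cvss_scores = [random.choice(attainable_cvss) for _ in range(10_000)]

# Blending several continuous signals (breach frequency, exploit
# availability, scaled CVSS) spreads scores out, and sums of independent
# factors tend toward a normal distribution.
blended = [sum(random.random() for _ in range(6)) / 6 * 100 for _ in range(10_000)]

print("distinct CVSS-style values:", len(Counter(cvss_scores)))
print("distinct blended values:", len(Counter(round(b) for b in blended)))
```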


Methodology 2: Take the Research Out of Vulnerability Assessments

The Risk Meter combines your vulnerability scanners’ output with Exploit Database, Metasploit, Shodan, NVD, and our threat partners’ data feeds to produce a quick, real-time assessment of how likely a vulnerability is to be exploited. This frees analysts to spend their time making well-informed decisions about which remediations need to happen next.
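Conceptually, that enrichment pass looks something like the sketch below: check each scanned CVE against snapshots of public exploit sources. The inline sets stand in for feeds that a production system would refresh continuously.

```python
# Stand-in snapshots of public exploit sources (illustrative entries).
exploit_sources = {
    "exploit-db": {"CVE-2013-0422", "CVE-2012-1823"},
    "metasploit": {"CVE-2013-0422"},
}

scan_cves = ["CVE-2013-0422", "CVE-2010-3333"]
for cve in scan_cves:
    hits = [src for src, cves in exploit_sources.items() if cve in cves]
    # A CVE present in multiple exploit sources is a far stronger
    # remediation signal than a high static severity score alone.
    print(cve, "->", hits if hits else "no public exploit on record")
```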

Methodology 3: The Risk Meter Compares Apples-to-Apples, Risks-to-Risks

Specifically, by computing the riskiest vulnerability on an asset in real-time, as well as averaging across asset groups, we can compare any two groups of assets against one another. Using other methodologies, organizations would often compare qualities of asset groups instead of comparing risks. For example, a group of hostnames might have a higher vulnerability density than a group of URLs, but that’s because they are different classes of assets. One specific vulnerability on one of those URLs might be the golden ticket for an attacker today, and without a proper comparison it would get lost in the slew of data. The Risk Meter allows you to compare the relative risk posed by different asset groups.

Figure 3: Risk Meter Comparison by Common Groups

Keep in mind that this is data from over 700,000 live assets across more than 2,000 organizations. The averages are statistically significant; each category involves thousands or tens of thousands of assets. The result is that comparing the risk posed by an abstract organizational grouping with the risk posed by a specific piece of technology becomes straightforward.
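Structurally, the comparison works as in the minimal sketch below: represent each asset by its riskiest open vulnerability, then average across the group, so that groups of any size or asset class become directly comparable. The assets and scores are hypothetical, and Kenna's actual scoring algorithm is proprietary; this mirrors only the shape of the computation described above.

```python
# Hypothetical asset groups with per-vulnerability scores on each asset.
groups = {
    "sf-office-hosts": {"host-a": [710, 230], "host-b": [420]},
    "dmz-web-urls":    {"url-x": [930], "url-y": [180, 350]},
}

def group_score(assets):
    # Each asset is represented by its single riskiest vulnerability;
    # the group score is the mean of those per-asset maxima.
    riskiest = [max(scores) for scores in assets.values() if scores]
    return sum(riskiest) / len(riskiest)

for name, assets in sorted(groups.items(), key=lambda kv: -group_score(kv[1])):
    print(name, round(group_score(assets), 1))
# dmz-web-urls (640.0) outranks sf-office-hosts (565.0).
```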

Conclusion

Kenna's Risk Meter allows security teams to effectively, quickly, and transparently use limited resources to address an overwhelming number of security flaws. By correlating external Internet exploit and breach data with varied vulnerability data, Kenna shows you not only which vulnerabilities exist in your environment, but also which constitute a real threat. Our Risk Meter vulnerability and asset scoring system enables relevant, real-time patching and remediation decisions while providing both security professionals and upper management with quantitative ways to assess security risk and performance.

Citations

  • [1] EMA, The Rise of Data Driven Security, http://www.enterprisemanagement.com/research/asset.php/2278/The-Rise-of-Data-Driven-Security
  • [2] Anton Chuvakin. On Vulnerability Prioritization and Scoring, http://blogs.gartner.com/anton-chuvakin/2011/10/06/on-vulnerability-prioritization-and-scoring/
  • [3] Robert Lemos. Securing More Vulnerabilities By Patching Less, http://www.darkreading.com/vulnerability/securing-more-vulnerabilities-by-patchin/240162177
  • [4] Luca Allodi and Fabio Massacci. How CVSS is DOSsing your patching policy (and wasting your money). Presentation at BlackHat USA 2013.
  • [5] Luca Allodi. The Dark Side of Vulnerability Exploitation. Proceedings of the 2012 ESSoS Conference Doctoral Symposium.
  • [6] Michael Roytman. Stop Fixing All The Things, http://blog.kennasecurity.com/2013/08/stop-fixing-all-the-things-bsideslv/
  • [7,8] Butler, Jha, and Shaw. When Good Models Meet Bad Data: Applying Quantitative Economic Models to Qualitative Engineering Judgments. Carnegie Mellon University, ftp://ftp.cs.cmu.edu/project/vit/pdf/good-bad-data.pdf
  • [9] Patrick Toomey. CVSS – Vulnerability Scoring Gone Wrong, http://labs.neohapsis.com/2012/04/25/cvss-vulnerability-scoring-gone-wrong/

Michael Roytman

Michael Roytman is Kenna's Data Scientist, responsible for building out Kenna's predictive analytics functionality. He formerly worked in fraud detection in the finance industry, and holds an MS in operations research from Georgia Tech. In his spare time, he tinkers with everything from bikes to speakers to cars, and works on his pet project: outfitting food trucks with GPS.

Kenna

Kenna is a vulnerability threat management platform that processes external Internet breach and exploit data with an organization’s vulnerability scan data to monitor, measure, and prioritize vulnerability remediation across its IT environment. As a result, organizations know their likelihood of experiencing a breach and which vulnerabilities pose the greatest risk. Kenna processes over a billion vulnerabilities a month against Internet breach data for its users. Kenna is used by over 800 companies, including multiple Fortune 500 companies and two from the Fortune 10. Backed by US Venture Partners, Tugboat Ventures, Costanoa Venture Capital, and Hyde Park Angels, Kenna is headquartered in Chicago, IL.

24/7 Threat Processing
Kenna matches real-time Internet breach traffic against vulnerabilities from your environment every 30 minutes, showing you the most critical vulnerabilities to remediate. Knowing which vulnerabilities put you most at risk gives you the insight to prevent breaches before they occur.

Measure Risk Across Your Organization & Know What to Fix First with the Risk Meter
Create your own Risk Meter Dashboard to monitor your critical systems. Risk Meters showing “red” have vulnerabilities with known Internet breaches. Use the Risk Meter Dashboard to help make informed business decisions on where to invest in security.

Know the Vulnerabilities Putting You Most at Risk
Kenna's asset tags allow you to customize reporting, access, remediation, and dashboards with metadata tailored to your business. Asset tags can be used to identify teams, business units, network segments, compliance scope, geography, custom applications, or anything else you dream up. If it’s important to you, we allow you to measure it.

Get Through Scanner Data Faster Using Faceted Search
Our faceted search capabilities enable you to power through stacks of information generated by your scanners and our threat processing engine to find exactly what you are looking for when you need it.

Give us 5 minutes & 1,000 assets
We’ll Give You a Full Picture of Your Risk