Must-Have Metrics for Vulnerability Management: Part 2

Mar 30, 2016
Ed Bellis
Chief Technology Officer, Co-founder


This blog is Part 2 in a 3-part series on Must-Have Metrics for Vuln Management. Read Part 1 here.

Must-Have #2: Know Your Business

To understand the most pertinent threats and measure the likelihood of exploits, you need to put both in the context of your business. A great way to apply this knowledge to security is through threat modeling. There are numerous threat-modeling methodologies out there, ranging from the heavyweight to the “back of the napkin.”

But don’t worry about starting with something highly sophisticated: there are significant advantages to be gained just by going from nothing to something. By knowing even a few basic attributes of your business and applications, you can begin to understand which threats are most likely. Some simple examples include:

  1. What broad metadata do you have about the organization? Your industry vertical, size, and geography all play a role here.
  2. What information is processed by your application or assets, and what are the value and/or regulations around such information?
  3. How many people use an application? This is an often overlooked but critical attribute.
  4. What and where are the key assets to your business? Are there critical controls for these assets to protect confidentiality, integrity, or availability?
  5. Who are your adversaries and what are their capabilities? Occam’s Razor applies here.

Some useful metrics here include (a short scoring sketch follows this list):

  1. System Susceptibility
    1. Value to Attackers
    2. Vulnerabilities
  2. Time to Compromise (Hacker economics): How long would it take to compromise any of the key controls for these assets and applications?
  3. Threat Accessibility
    1. Access Points and Attack Surface
  4. Threat Capability
    1. Tools
    2. Resources
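
To make these factors concrete, here’s a minimal sketch of how you might capture them per asset. The field names, weights, and roll-up formula are illustrative assumptions, not a prescribed model:

```python
from dataclasses import dataclass

@dataclass
class ThreatProfile:
    """Illustrative threat-model attributes for one asset or application."""
    value_to_attackers: float     # 0-1: system susceptibility: value to attackers
    vulnerability_density: float  # 0-1: system susceptibility: known weaknesses
    attack_surface: float         # 0-1: threat accessibility: exposed access points
    attacker_capability: float    # 0-1: threat capability: tools and resources
    hours_to_compromise: float    # hacker economics: estimated time to compromise

def likelihood_estimate(p: ThreatProfile) -> float:
    """Hypothetical roll-up; the weights below are assumptions."""
    base = (0.3 * p.value_to_attackers
            + 0.3 * p.vulnerability_density
            + 0.2 * p.attack_surface
            + 0.2 * p.attacker_capability)
    # The cheaper the compromise, the higher the likelihood.
    return base / (1.0 + p.hours_to_compromise / 24.0)

hr_app = ThreatProfile(0.9, 0.4, 0.2, 0.6, hours_to_compromise=72)
print(f"likelihood ~ {likelihood_estimate(hr_app):.2f}")
```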

Does your threat model include Alexa ratings? As an example, take two applications. One is an internal HR application used by your employees that contains sensitive information such as Social Security numbers and health care records. The other is a public application that contains no sensitive information; all of its data is public.

If you don’t evaluate the size of your user base, the application processing sensitive data obviously carries far more risk than the public one. But if the public application has 100 million users, and a single persistent cross-site scripting vulnerability would let a malicious user attack all 100 million of them, does that change things?
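
One back-of-the-napkin way to see this is to weigh users at risk against data sensitivity and exploit likelihood. In this sketch, every number and field name is an illustrative assumption:

```python
# Expected exposure ~ users at risk x data sensitivity x exploit likelihood.
# All figures below are illustrative assumptions.
apps = {
    "internal_hr": {"users": 5_000,       "sensitivity": 1.0, "exploit_likelihood": 0.05},
    "public_app":  {"users": 100_000_000, "sensitivity": 0.1, "exploit_likelihood": 0.30},
}

for name, a in apps.items():
    exposure = a["users"] * a["sensitivity"] * a["exploit_likelihood"]
    print(f"{name}: expected exposure ~ {exposure:,.0f}")
# The public app's enormous user base can dominate the result,
# even though its data is far less sensitive.
```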

Must-Have #3: Know Your Risk

Counting vulnerabilities and relying on static scores no longer works; these methods don’t account for the fact that threats change constantly, and you need a tried-and-true methodology for measuring real security risk. If you build your process on top of a broken risk model, you just end up in a riskier position faster. Taking a risk-based approach, rather than chasing raw counts or an “effort complete” checkbox, is a must.

In order to understand your risk, at a minimum you’ll need a handle on both likelihood and impact. There are many factors that go into such a methodology. Some questions to ask include:

  1. Asset metadata: Informed by the first two must-haves, do you understand who owns the asset, what its function is, and how it’s used? What’s the impact of losing the confidentiality, integrity, or availability of the asset? Which of the three matters most, given its function?
  2. Vulnerabilities: What are the weaknesses and vulnerabilities tied to this asset or group of assets? How easy or difficult are they to exploit?
  3. Threats: What threats are associated with these security holes and with your business? How skilled is your adversary, and what skills are required to exploit your weaknesses? How widely are these vulnerabilities being exploited in the wild? Are you likely to be hit by a “drive-by”?

Most importantly, there should be a single score that unifies the entire environment, based on real-time exposure to risk. From there, you can drill down into important asset groups and categories.
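
A minimal sketch of that idea, with risk computed as likelihood times impact per asset and rolled up by group and for the whole environment. The scales and the worst-case (max) roll-up are assumptions, not Kenna’s actual model:

```python
from collections import defaultdict

# Illustrative assets; likelihood is 0-1, impact is 0-10.
assets = [
    {"name": "hr-db",     "group": "internal", "likelihood": 0.2, "impact": 9.5},
    {"name": "web-front", "group": "public",   "likelihood": 0.7, "impact": 6.0},
    {"name": "build-srv", "group": "internal", "likelihood": 0.4, "impact": 4.0},
]

group_risk = defaultdict(float)
for a in assets:
    a["risk"] = a["likelihood"] * a["impact"]
    # Roll up each group to its riskiest asset (worst case).
    group_risk[a["group"]] = max(group_risk[a["group"]], a["risk"])

environment_score = max(a["risk"] for a in assets)  # single unifying score
print("by group:", dict(group_risk))
print("environment:", environment_score)
```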

You also need to be able to track your progress over time. Think of exposure to risk like a stock report. What was your exposure last week, and last month? Has the risk line trended down or up? What are your “high” and “low” points over the past 52 weeks?
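
As a sketch, the 52-week view reduces to a few lines over whatever weekly scores you record (the numbers here are made up):

```python
# Weekly environment risk scores, most recent last (illustrative data).
weekly_scores = [72, 70, 68, 71, 65, 60, 58, 55]

current, previous = weekly_scores[-1], weekly_scores[-2]
trend = "down" if current < previous else "up"
print(f"this week: {current} (trending {trend} from {previous})")
print(f"52-week high: {max(weekly_scores)}, low: {min(weekly_scores)}")
```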

Showing your team’s ability to reduce exposure to risk over time is a critical component of making the case for the team’s efficiency and effectiveness, and, at the end of each year, for additional budget and headcount.

Some useful metrics here include (a remediation-tracking sketch follows this list):

  1. Risk by asset group, both current and trending over time
  2. Mean time to risk reduction, where risk reduction is a target or goal
  3. Time to remediate high risks, broken down by asset group
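
For the third metric, a minimal sketch: take each closed high-risk finding, measure days from discovery to remediation, and average per asset group. The records and field names are illustrative assumptions:

```python
from collections import defaultdict
from datetime import date

# Closed high-risk findings (illustrative data).
findings = [
    {"group": "public",   "opened": date(2016, 1, 4),  "closed": date(2016, 1, 11)},
    {"group": "public",   "opened": date(2016, 1, 18), "closed": date(2016, 2, 1)},
    {"group": "internal", "opened": date(2016, 1, 5),  "closed": date(2016, 2, 19)},
]

days_by_group = defaultdict(list)
for f in findings:
    days_by_group[f["group"]].append((f["closed"] - f["opened"]).days)

for group, days in sorted(days_by_group.items()):
    print(f"{group}: mean time to remediate = {sum(days) / len(days):.1f} days")
```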

We still have two more Must-Haves coming up. Stay tuned for Part 3…
