A Tale of Two Uncertainties
There are fields where precision is of the utmost importance. In fields of exploration (physics, chemistry, arguably mathematics), we seek out the truths of the world around us, building better and better models of what’s going on. In fields of manufacturing (chocolate making, farming, engine casting), precision matters because it produces better products.
What do these fields have in common? From where I sit, behind the terminal and on top of a wall of statistics, the least common denominator is that they have well-defined basic data units. Mathematics has axioms and proven theorems, chemistry has the periodic table, physics has quarks, chocolate has cocoa beans, and engine casting has alloys and sand molds.
Information security is nowhere close to those fields. Uncertainty lurks around every corner; it is inherent in every problem, because at its core infosec is a game between attackers and defenders, and information is always intentionally imperfect. In this post I will discuss a far more terrifying type of uncertainty: data definitions.
Data Fundamentalism
Data analysis is a language. At its most useful, it is a way to communicate complex findings to those who don’t have the time, skills, or access to the same information that the analyst does. And much like any language, it requires static, basic definitions. The first problem is well known: our definitions are uncertain and sometimes late to the game. The second is worse: we don’t understand the source(s) of the uncertainty.
For anyone working on vulnerability management or predictions (excuse the lack of ‘Big Data’ usage here), this second problem is huge. As @katecrawford writes in her recent HBR article The Hidden Biases in Big Data, “data fundamentalism” has become a pervasive problem. This is the notion that predictive analytics and well-crafted correlations reflect the objective truth. Here’s the relevant sample:
CVE is NOT the Periodic Table of Elements
I work with vulnerabilities. There are a few places that define vulnerabilities, but CVE is the most universal set of definitions that I have to work with. Yet thinking of CVEs as elements on the periodic table is a grave mistake. Before creating synthetic polymers (read: useful analytics) out of these elements, we need to understand the biases and sources of uncertainty in the definitions themselves.
For example, take a look at this finding from a research team at Concordia University in their 2011 paper Trend Analysis of the CVE for Software Vulnerability Management:
There are many such papers out there; it is frightening to think they might guide organizational decision-making. This type of analysis misses the boat on what is being analyzed; it takes CVE to be analogous to the Constitution for legal scholars. An increase or decrease in CVE frequency, or a shift in the types being published over a given time bucket, can reflect wildly varying biases.
Let’s dive into some of them. The aforementioned HBR article also alludes to this: there’s no such thing as raw data. People working in data today need to take a cue from the less quantitative disciplines and take a look at where the data originates and how it’s gathered.
A Brief History of Time: From @SushiDude to Today
CVE is a dictionary of known infosec vulnerabilities and exposures, and it is intended as a baseline index for assessing the coverage of tools. It is not intended as a baseline index for the state of infosec, as the aforementioned paper treats it.
Let’s start with this: the Wikipedia page is dead wrong. At this year’s RSA, I wanted to delve deeper into the exact process of CVE creation, and sought out @SushiDude, the father of CVE. Here’s the story:
Looking at the volume of CVEs, one might infer that steadily increasing CVE disclosures mean ‘the state of security is getting worse,’ or some such poorly structured inference.
However, in practice this is not a dictionary. This is a company with limited resources, attempting to streamline a process. This is easily seen when looking at the rate of disclosures over time. Note how changes in process cause a reduction in throughput first, then an increase. (Actually, this leads me to believe there was a change in the process in 2011 as well.)
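If you want to reproduce that view yourself, counting disclosures per year is enough. Here is a minimal sketch, assuming you have exported CVE entries and their publication dates to a hypothetical cves.csv (for instance, built from the NVD data feeds); the file name and column names are illustrative assumptions, not a standard format.

```python
# Minimal sketch: count CVE disclosures per year from a CSV export.
# Assumption: a hypothetical "cves.csv" with columns cve_id and published
# (ISO-8601 dates), e.g. built from the NVD data feeds.
import csv
from collections import Counter

def disclosures_per_year(path="cves.csv"):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            year = row["published"][:4]  # "2011-05-03T..." -> "2011"
            counts[year] += 1
    return counts

if __name__ == "__main__":
    for year, total in sorted(disclosures_per_year().items()):
        print(f"{year}: {total}")
```

Even this trivial count will show discontinuities, and the point of this post is that they track process changes at MITRE rather than changes in how insecure software is.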
Their objective, from my conversation with them, is to increase throughput. This makes perfect sense: inform the public about as many standardized vulnerabilities as their resources will allow. However, in this process lie the essential biases inherent in the basic units of vulnerabilities:
1. Descriptions – Some vulnerabilities are inherently more difficult to describe. Some attack vectors entail chaining of two, three, even five different weaknesses in an application. Analysts can publish five other vulnerabilities in the time it takes to write up a complicated chained one.
2. Categorization – A CVE is meant to exist independently of the multiple perspectives on that vulnerability. In the submission process, a whitehat might find a vulnerability, a vendor might submit the same one, or a third party may discover it as well, all assessing it differently. The more such sources there are, the harder it is for analysts to standardize this information. For another great argument about how the process of CVE categorization influences statistics, see this OSVDB post.
3. Prioritization – There’s a vast and unexplored sea of vulnerabilities out there, and limited manpower. So how does MITRE decide which of them to look at? The advisory board helps make decisions about which vendors to prioritize, and some vulnerabilities get left in the backlog. A nice phrase employed to describe this sea of backlog is the “php.golf.” The opportunity cost of working on all of those is missing a Windows vulnerability, and so they stay put. In fact, there are a few CVE Numbering Authorities which (rightfully so) get preferential treatment. Also note how low-severity vulnerabilities rarely enter the picture unless throughput is at a high level (i.e., most of the high- and medium-severity stuff has been taken care of); a quick way to check this is shown in the sketch after this list.
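To see the throughput effect from point 3 in your own data, extend the same tabulation by severity band. Here is a minimal sketch under the same assumptions as above, with an added cvss_score column; the bands follow the CVSS v2 convention (low below 4.0, medium 4.0 to 6.9, high 7.0 and above).

```python
# Minimal sketch: tabulate disclosures by year and CVSS severity band.
# Assumption: the same hypothetical "cves.csv" as above, with an extra
# cvss_score column holding a 0.0-10.0 base score.
import csv
from collections import defaultdict

def severity_band(score):
    s = float(score)
    if s >= 7.0:
        return "high"
    if s >= 4.0:
        return "medium"
    return "low"

def severity_by_year(path="cves.csv"):
    table = defaultdict(lambda: {"low": 0, "medium": 0, "high": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            year = row["published"][:4]
            table[year][severity_band(row["cvss_score"])] += 1
    return table

if __name__ == "__main__":
    for year, bands in sorted(severity_by_year().items()):
        print(year, bands)
```

If low-severity counts only climb in years where overall throughput is high, that is the prioritization bias showing up in the data, not a change in the vulnerabilities themselves.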
Here’s a little light reading to prove to you this isn’t make-believe, from the CVE internal mailing lists:
And if you’re truly interested, dive deep into the link above to see all the inner workings of how a CVE comes to be. Regardless, it’s always good to know exactly what you’re working with.