Crowdsourcing Vulnerability Intelligence

Dec 6, 2012
Kenna Security


This is the first post in our guest blogging series. If you are interested in writing for Risk I/O, visit our Guest Blogging page for more information.

Strictly speaking, crowdsourcing refers to a model for problem solving that turns requests for information, services, or ideas over to an unknown but reachable group of potential participants. In the aggregate, those participants, known collectively as "the crowd," respond to such requests and often provide valuable information about their local conditions and circumstances, in addition to whatever explicit answers they supply.

While the user communities for countless security products may not think of themselves as part of "the crowd," those communities very often stand on the front lines of the ongoing race to keep up with security vulnerabilities that could expose them to outright attack, data leakage, degraded performance, or downtime from successful exploits. Ironically, by reporting what their security tools encounter and observe, even as some members of the crowd become vulnerable or fall prey to penetration or attack, these weaker members of the herd supply the information needed to protect the entire crowd, and to create the patches, fixes, or workarounds that mitigate what ongoing monitoring reveals about behavior and vulnerabilities "out there in the wild."

From a mathematical perspective, the "law of large numbers" (often abbreviated LLN) helps ensure that the more often a phenomenon is encountered and measured, the more precisely its outlines, characteristics, and consequences can be estimated. In general, the LLN promises that accuracy increases as the number of similar events goes up. For statisticians, the LLN is important because it guarantees that the average of a long series of random events converges toward a stable long-term value. For those who observe large populations, the LLN provides confidence that a measured trend reflects a real underlying rate rather than statistical noise. Thus not only the number of observations, but also the frequency at which they occur and the way their geographical distribution shifts over time, is terribly significant when monitoring and responding to security-related events, vulnerabilities, and exploits.
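The LLN effect described above can be illustrated with a small simulation. The scenario below is hypothetical (the exploit prevalence, sensor counts, and function names are invented for illustration): each "sensor" independently reports whether it observed a given exploit, and the crowd's estimate of the exploit's prevalence tightens as the number of sensors grows.

```python
import random

def estimate_prevalence(true_rate, num_sensors, seed=42):
    """Simulate num_sensors independent sensors, each reporting whether it
    observed a given exploit (a Bernoulli trial with probability true_rate),
    and return the crowd's estimated prevalence."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(num_sensors) if rng.random() < true_rate)
    return hits / num_sensors

true_rate = 0.3  # hypothetical prevalence of an exploit "in the wild"

# A small crowd gives a noisy estimate; a large crowd converges on the truth.
small_crowd = estimate_prevalence(true_rate, 50)
large_crowd = estimate_prevalence(true_rate, 500_000)
print(f"50 sensors:      {small_crowd:.4f} (error {abs(small_crowd - true_rate):.4f})")
print(f"500,000 sensors: {large_crowd:.4f} (error {abs(large_crowd - true_rate):.4f})")
```

With 500,000 simulated sensors, the estimate lands within a fraction of a percentage point of the true rate, which is why providers with larger deployed bases can characterize an emerging threat faster and more precisely.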

That’s why you’ll hear so many security service providers stress the size of their user bases and their global reach and coverage. Large populations permit a wide range of events and behavior to manifest over time. This helps organizations that seek to mitigate and manage risks and threats determine which risks are most likely to demand a response, and lets both service providers and their clients prioritize the order in which they respond to security threats. It also explains why round-the-clock security monitoring and response teams are vital to security providers: the flow of activity never stops; rather, the prime-time window for business and work simply moves around the globe as it follows the clock on its daily path from time zone to time zone.

If it weren’t for a large number of “sensors”—that is, the tools and consoles deployed at customer sites around the world—security providers wouldn’t be able to detect, prioritize, and respond to threats and vulnerabilities as quickly and efficiently as they currently do. The situation in the trenches is by no means perfect, as the occasional runaway success of a zero-day exploit shows, but it has proven to be a workable model for keeping risk and exposure at manageable (if not always entirely comfortable) levels. Thanks to all those users (and the LLN), constant observation, ongoing prioritization, and quick response make vulnerability detection and reasonable response possible.

About the Author: Ed Tittel is a full-time freelance writer and researcher who covers information security, markup languages, and Windows operating systems. A regular contributor to numerous TechTarget websites and Tom’s IT Pro, Ed also blogs on Windows Enterprise Desktop and IT Career topics. His latest book is Unified Threat Management For Dummies. Learn more about or contact Ed at his website.
