SIRAcon Attendees, Start Your Engines
“Information is the oil of the 21st century, and analytics is the combustion engine.” – Peter Sondergaard, SVP Gartner
This week I attended SIRAcon in Seattle, a conference hosted by the Society of Information Risk Analysts. I spoke about the methodology behind Risk I/O’s “fix what matters” approach to vulnerability management: how we use live vulnerability and real-time breach data to build the model, and why such a model outperforms existing CVSS-based risk rankings. Beyond my own talk, a few persistent themes ran through the many excellent presentations at the conference. Combining and implementing these practices is no simple matter, but organizations should take note, because as an industry, information security can evolve.
1. This is not our first rodeo.
Risks are everywhere – and other industries not so different from ours have caught on. Ally Miller’s morning keynote discussed the structured, quantified way in which fraud detection teams are built: real-time data collection feeds large, global models, which in turn guide fraud decisions that can be made in real time. This requires clever interfacing with business processes and excellent infrastructure, but it has been done before, and it needs to be done for vulnerability management as well. Alex Hutton used Nate Silver’s latest book on Bayesian modeling to raise some parallel questions about infosec. He drew analogues to seismology and counter-terrorism; the maturity and relative similarity of those fields (large risks that are often hard to quantify or observe) make them worth exploring as well. Lastly, his talk sparked a healthy discussion on the difference between forecasting and prediction. A prediction describes the expectation of a specific event (“it will rain tomorrow”), whereas a forecast is more general and describes the probability of events over time (“there will be 2 inches of rain in December”, “there is a 20% chance of rain over the next day”). Much of the discussion focused on how management perceives the difference between the two. In seismology, prediction fails because the underlying mechanics are hidden, and so we can only forecast. The same seems largely true of infosec.
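The forecast-versus-prediction distinction can be made concrete with a toy sketch. Here a forecast assigns a probability to events over a window (modeled as a Poisson process), while a prediction collapses that probability into a single yes/no call. The daily event rate and threshold below are made-up numbers for illustration, not real exploitation statistics:

```python
import math

# Assumed, illustrative rate: average exploitation events per day.
DAILY_RATE = 0.02

def forecast_prob_at_least_one(rate_per_day: float, days: int) -> float:
    """Forecast: probability of at least one event over `days`,
    assuming events arrive as a Poisson process."""
    return 1 - math.exp(-rate_per_day * days)

def predict(rate_per_day: float, days: int, threshold: float = 0.5) -> bool:
    """Prediction: commit to a single yes/no outcome by thresholding
    the forecast probability."""
    return forecast_prob_at_least_one(rate_per_day, days) >= threshold

p30 = forecast_prob_at_least_one(DAILY_RATE, 30)
print(f"Forecast: {p30:.0%} chance of at least one event in 30 days")
print(f"Prediction (will it happen?): {predict(DAILY_RATE, 30)}")
```

The forecast (about a 45% chance over 30 days) is useful for planning even when the threshold-based prediction says “no”, which is part of why the two are perceived so differently by management.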
2. Good models need good data.
Adam Shostack from Microsoft gave a very convincing closing keynote on the value of data-driven security programs. Running experiments targeted at collecting data yields scientific tools and takes the qualitative (read: fuzzy) decision-making out of risk management. The alternative is the status quo – reliance on policies and measuring organizational performance against standards – which is tantamount to stagnation, and stagnation is something no one can accuse our adversaries of. He stated that although almost all organizations have been breached, it is incredibly difficult to develop models of breaches, largely because global breach datasets are hard to come by. Not so! We’re hard at work incorporating new sources of breach data into Risk I/O – but he is certainly correct that this is a hard project for any single company to undertake. Adam concluded with a call for organizations to encourage better sharing of data (hear, hear), and this mirrored the sentiment of other talks (particularly Jeff Lowder’s discussion of why we need to collect data to establish base-rate probabilities) about the need for a centralized, CDC-like body for infosec data.
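Why base rates matter is easy to show with Bayes’ rule. The sketch below uses assumed, illustrative numbers (not real breach statistics): even a detector with a 99% true-positive rate and a 1% false-positive rate produces mostly false alarms when the underlying base rate of breach is low, which is exactly why shared data to establish those base rates is so valuable.

```python
def posterior(base_rate: float, tpr: float, fpr: float) -> float:
    """Bayes' rule: P(breach | alert), given the base rate of breach
    and the detector's true/false positive rates."""
    true_alerts = tpr * base_rate          # P(alert and breach)
    false_alerts = fpr * (1 - base_rate)   # P(alert and no breach)
    return true_alerts / (true_alerts + false_alerts)

# Assumed numbers: 0.1% base rate, 99% TPR, 1% FPR.
# An alert from this "99% accurate" detector indicates a breach
# only about 9% of the time.
print(posterior(base_rate=0.001, tpr=0.99, fpr=0.01))
```

Without a defensible base rate, the posterior is unknowable, and decisions fall back to exactly the fuzzy judgment calls the keynote argued against.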
So let’s get some data. We’re already off to a pretty good start.