Best Practices = Vanity Metrics

Mar 21, 2013
Ed Bellis
Chief Technology Officer, Co-founder

After recently reading a post from Gary McGraw at Cigital arguing for software security training, I became a bit frustrated with the cited “evidence,” posted as much on Twitter, and received a short follow-up from Lindsey Smith over at Tripwire…

Now let me say upfront, I have a lot of respect for Gary and his work AND I actually agree with him on the subject of software security training. I’ll get into why I agree with him in a bit. That said, here’s where my frustration comes in. Gary references the BSIMM as evidence that software security training works. Evidence? I find the BSIMM interesting, but it leaves the taste of vanity metrics in my mouth. For those of you not familiar with the term, Eric Ries talks about vanity metrics a lot as part of The Lean Startup:

“Actionable metrics can lead to informed business decisions and subsequent action. These are in contrast to “vanity metrics” – measurements that give “the rosiest picture possible” but do not accurately reflect the key drivers of a business. Vanity metrics for one company may be actionable metrics for another. For example, a company specializing in creating web-based dashboards for financial markets might view the number of web page views per person as a vanity metric as their revenue is not based on number of page views. However, an online magazine with advertising would view web page views as a key metric, as page views are directly correlated to revenue.”

I think the BSIMM and best practices within information security often fall under the definition of vanity metrics. There are things I like about the BSIMM, and it’s a great start, but it only focuses on one half of the data. Telling me what many companies are doing for their security controls becomes a lot more interesting when you also tell me how those controls fared over time. I would love to see the BSIMM and other models like it evolve into an evidence-based set of controls. Today, they certainly should not be cited as evidence that any control within them works, as we’re completely missing that side of the picture. This is also not a post to pick on the BSIMM, but rather an attempt to call out our industry for citing best practices without evidence.
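To make that “other half of the data” concrete, here is a minimal sketch of what pairing adoption data with outcome data might look like. The organizations, field names, and numbers below are entirely hypothetical, invented for illustration; they are not from the BSIMM or any real dataset.

```python
from statistics import mean

# Hypothetical records: (organization, adopted the control?, incidents/year).
# The "adopted" column is the BSIMM-style "who is doing what" data;
# the incident counts are the outcome data that is usually missing.
orgs = [
    ("org-a", True, 4),
    ("org-b", True, 6),
    ("org-c", False, 11),
    ("org-d", False, 9),
    ("org-e", True, 3),
]

adopters = [n for _, adopted, n in orgs if adopted]
non_adopters = [n for _, adopted, n in orgs if not adopted]

# Adoption counts alone are a vanity metric; comparing outcomes is
# what starts to turn them into evidence (correlation, not causation).
print(f"adopters:     {mean(adopters):.1f} incidents/year")
print(f"non-adopters: {mean(non_adopters):.1f} incidents/year")
```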

I mentioned earlier in this post that I actually agree with Gary on software security training. The reason I can say this is based on evidence, not best practices. At my former employer, we implemented a number of measurements around application defects, and specifically security defects. We also ran various software security training exercises, both internally and with outside help. As part of this, we measured things like defect rates and defect density within specific groups, both before and after the training. We continued these measurements over time and saw material drops in most categories. Was it completely due to training? No, but each time we saw a measurable impact that correlated with a specific set of training. It’s evidence like this that I’d like to see combined with a set of “best practices.” At best, best practices are a set of things that others *may* be doing; at worst, they are meaningless vanity metrics.
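For the curious, the kind of measurement described above can be sketched in a few lines. Everything here is hypothetical (the teams, dates, KLOC figures, and defect records are made up), but the shape of the calculation, defect density per team before and after a training date, is the point:

```python
from datetime import date

# Hypothetical security-defect records: (team, date the defect was found).
defects = [
    ("payments", date(2012, 3, 14)),
    ("payments", date(2012, 9, 2)),
    ("web", date(2012, 5, 20)),
    ("web", date(2012, 11, 8)),
    ("web", date(2012, 12, 1)),
]

# Hypothetical code size per team, in thousands of lines (KLOC).
kloc = {"payments": 120, "web": 80}

TRAINING_DATE = date(2012, 7, 1)  # when the (hypothetical) training ran

def defect_density(team: str, start: date, end: date) -> float:
    """Security defects per KLOC for one team within [start, end)."""
    found = sum(1 for t, d in defects if t == team and start <= d < end)
    return found / kloc[team]

for team in kloc:
    before = defect_density(team, date(2012, 1, 1), TRAINING_DATE)
    after = defect_density(team, TRAINING_DATE, date(2013, 1, 1))
    change = (after - before) / before if before else float("nan")
    print(f"{team}: {before:.3f} -> {after:.3f} defects/KLOC ({change:+.0%})")
```

Repeated over several training cycles, a drop that consistently lines up with the training dates is exactly the kind of correlation described above.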
