Analyze Customer Data to Improve Your Customer Experience
John Sculley was a young marketing executive at Pepsi-Cola in the 1970s when he was tasked with knocking Pepsi’s perennial competitor, Coca-Cola, off its perch in the soft drink business. To achieve this, Sculley needed to gather customer data, though back then, it was an analog process. He sent researchers into consumers’ homes to track habits and capture preferences, only to discover a desire for larger servings beyond the iconic 6.5-ounce Coke bottle or even Pepsi’s own 12-ounce swirl bottle. Analyzing that data led to the first plastic two-liter soda bottle.
In the 1990s, psychologist Paco Underhill armed graduate students with clipboards and deployed them in retail stores to collect behavioral data on how consumers shopped. The resulting insights have helped shape the modern in-store retail experience, such as specifying aisle widths needed to prevent the sales-killing “butt-brush effect”—Underhill’s term for the all-too-real fear of one’s hiney making contact with someone else’s—and optimizing the racetrack layout that subconsciously leads shoppers to browse the entire store.
This was cutting-edge stuff in the analog age. But companies today, especially cloud software providers, don’t have to stake out stores, post up in customer homes, or even be physically present in data centers to understand how people and enterprises use their products.
Cloud software providers already have that data, or at least they should. After all, half of all corporate data is stored in the cloud. And by the very nature of their business, cloud providers are constantly monitoring usage, and they’re hopefully analyzing it in ways that help them run their business better. Doing so gives them a treasure trove of objective, analytics-ready information that can reveal usage trends, success benchmarks, and more. This data can help software providers optimize products and services to deliver a better customer experience.
A real-world use case
As you’d expect, the particulars of data-driven insights will vary by company, industry, and use case. Within the vulnerability management industry, usage habits largely vary based on the maturity of a customer’s vulnerability management program. Those who are relatively new to a formal VM program likely focus on reducing the total number of known vulnerabilities in their environment. This may be the most basic of VM metrics. A more mature program will focus on overall risk reduction, to the point where remediation teams compete to see who can achieve the lowest risk scores. The most mature environments operationalize their VM programs around risk so everyone is always on the same page. This clarifies priorities, creates a self-service environment for remediation personnel, and reduces the kind of friction that often arises between Security and IT. All good stuff.
At Kenna Security, our customer usage data informs the ways we help our customers move along that maturity curve. We’re able to analyze how nearly 500 enterprises manage more than 12.7 billion vulnerabilities affecting 14 million active assets. Over the years, that analysis has shown us how organizations improve their ability to meet certain VM KPIs, such as coverage and efficiency. It gives our Customer Success team insights into where customers may need extra help to determine the prioritization strategy that works best for them. We use it, in other words, to improve their customer experience.
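Those two KPIs, coverage and efficiency, are essentially recall and precision applied to remediation: coverage asks what share of the high-risk vulns got fixed, while efficiency asks what share of the fixes targeted high-risk vulns. A minimal sketch of how they might be computed, assuming simple per-vuln records (the field names and sample data here are illustrative, not Kenna's actual schema):

```python
# Sketch: coverage and efficiency as remediation KPIs.
# Coverage  = remediated high-risk vulns / all high-risk vulns (recall).
# Efficiency = remediated high-risk vulns / all remediated vulns (precision).

def coverage_and_efficiency(vulns):
    """vulns: list of dicts with 'high_risk' (bool) and 'remediated' (bool)."""
    high_risk = [v for v in vulns if v["high_risk"]]
    remediated = [v for v in vulns if v["remediated"]]
    remediated_high = [v for v in remediated if v["high_risk"]]
    coverage = len(remediated_high) / len(high_risk) if high_risk else 0.0
    efficiency = len(remediated_high) / len(remediated) if remediated else 0.0
    return coverage, efficiency

vulns = [
    {"high_risk": True,  "remediated": True},
    {"high_risk": True,  "remediated": False},
    {"high_risk": False, "remediated": True},
    {"high_risk": False, "remediated": True},
]
cov, eff = coverage_and_efficiency(vulns)
print(f"coverage={cov:.0%}, efficiency={eff:.0%}")  # coverage=50%, efficiency=33%
```

Note how the two pull against each other: fixing everything maximizes coverage but tanks efficiency, which is exactly the trade-off a risk-based prioritization strategy is meant to manage.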
The proof is in the product…
How that data shapes the Kenna customer experience depends on the nature of the findings. For instance, we discovered through data analysis and some in-the-field Paco Underhilling (hey, I can make up terms too) that most organizations set arbitrary remediation SLAs, if they maintain SLAs at all. These SLAs are typically based on little more than an assumption that so-called “critical” vulns should be remediated sooner, say within 30 days, than lower-risk vulns, which might be assigned 60- to 90-day SLAs.
We used those insights to develop an industry-first feature called risk-based SLAs. This feature in Kenna.VM allows organizations to set and maintain remediation SLAs based on their own unique risk tolerance. Simply put, if your organization (a financial institution, say) has a very low tolerance for risk, the SLAs you set for remediating the vulns Kenna.VM determines to be a significant risk to your unique environment are likely to be pretty tight. If your tolerance is higher, the picture will be different, and so will your SLAs. Kenna’s Customer Success team has developed best practices we use to help customers determine their risk tolerance so they can set (and meet) SLAs that make sense to them.
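In essence, a risk-based SLA maps a vuln's risk score and the organization's risk tolerance to a remediation deadline. Here is a rough sketch of that idea; the risk bands, day counts, and tolerance tiers below are assumptions invented for illustration, not Kenna.VM's actual logic:

```python
from datetime import date, timedelta

# Hypothetical SLA table: days allowed per risk band, keyed by the
# organization's risk tolerance. All numbers are illustrative only.
SLA_DAYS = {
    "low":    {"high": 15, "medium": 30, "low": 90},   # e.g. a bank
    "medium": {"high": 30, "medium": 60, "low": 120},
    "high":   {"high": 45, "medium": 90, "low": 180},
}

def risk_band(score):
    """Bucket a 0-100 risk score into a band (cutoffs are assumptions)."""
    if score >= 67:
        return "high"
    if score >= 34:
        return "medium"
    return "low"

def sla_due_date(found_on, risk_score, tolerance):
    """Remediation deadline for a vuln under a risk-based SLA."""
    days = SLA_DAYS[tolerance][risk_band(risk_score)]
    return found_on + timedelta(days=days)

# The same high-risk vuln gets a tighter deadline at a low-tolerance org.
found = date(2021, 6, 1)
print(sla_due_date(found, 88, "low"))   # 2021-06-16
print(sla_due_date(found, 88, "high"))  # 2021-07-16
```

The point of the table-driven design is that the deadlines become a policy decision the organization owns, rather than a one-size-fits-all 30/60/90 default.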
…and in P2Ps
What’s different about Kenna is that we don’t keep these insights to ourselves. Working with the big brains at Cyentia Institute, we take this real-world, anonymized data and analyze it through various lenses. The result is our Prioritization to Prediction (P2P) report series, which we publish with Cyentia twice a year. This ongoing research and data analysis effort has resulted in a series of revelations about the Kenna customer experience and what it means for organizations’ ability to reduce risk:
- Organizations work hard to shrink their attack surface. Overall, they remediate 70% of higher-risk vulnerabilities. But the remediation efficiency rating is just 16%, indicating firms opt to fix many low-risk issues that could have been ignored or delayed. That’s where the value of a risk-based vulnerability management solution becomes apparent: the research has revealed that just 2% to 5% of CVEs are both exploited in the wild and observed in enterprise environments. This is where Security and IT teams should place their focus. (P2P, Vol. 2)
- Enterprises that split VM responsibilities between different internal organizations (for instance, Security to identify, prioritize and manage vulnerabilities, and IT and DevOps to handle remediation) cut their average time to remediate vulnerabilities by a month and a half. They’re also less likely to fall behind. (P2P, Vol. 4)
- Security executives who feel their remediation budget is adequate actually remediate faster than those who say they’re underfunded. More mature organizations, defined in part by their extensive use of a risk-based vulnerability management platform, tend to have both healthier budgets and distinct Security and IT functions. (P2P, Vol. 4)
- Even the best-resourced organizations can only remediate one out of every 10 vulnerabilities found in their environment, so focusing on high-risk vulnerabilities is the only effective way to protect data and assets. Interestingly, smaller firms tend to fix vulnerabilities faster than their medium-sized and large counterparts. (P2P, Vol. 3)
- Top-performing organizations remediate over twice the number of vulnerabilities at a rate three times faster than the norm. Using risk-based vulnerability management strategies, one-third of organizations actually manage to gain ground against the persistent onslaught of new vulnerabilities and exploits—not a small achievement. (P2P, Vol. 3)
- It’s common to use a 30-day window for patching vulnerabilities. Those following that guidance might like to know that initial exploitation in the wild was detected within that 30-day window for about half the vulnerabilities we’ve studied. So a 30-day window for vulnerabilities known to present a significant risk to your organization is not a bad SLA timeframe, though more customized, risk-based approaches to SLAs are available. (P2P, Vol. 6)
- Organizations that use the Kenna Risk Meter to prioritize remediation scored significantly better than others in many performance metrics, including remediation velocity, or the time it takes to close a specific percentage of vulnerabilities. Those using Kenna to prioritize took about 68 days to remediate half of their vulnerabilities, while those using other prioritization methods (CVSS scores, scanner-based fix lists, or just vendor patches, all VM strategies seen as “good enough”) took roughly 114 days. (P2P, Vol. 4)
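Remediation velocity, as used in the last bullet, is essentially the point where the curve of open vulnerabilities crosses a given threshold, typically 50%. A rough sketch of that calculation, assuming each vuln record simply carries the number of days it took to close (or None if it is still open); the data shape is my assumption, not how the P2P analysis is actually implemented:

```python
import math

def days_to_close_fraction(close_days, fraction=0.5):
    """Days until `fraction` of tracked vulns are closed, or None if never reached.

    close_days: one entry per vuln: days it took to close, or None if still open.
    """
    closed = sorted(d for d in close_days if d is not None)
    needed = math.ceil(len(close_days) * fraction)  # vulns that must be closed
    if needed == 0 or len(closed) < needed:
        return None  # the cohort never reached that fraction
    return closed[needed - 1]

# Six vulns, four closed: half the cohort (3 vulns) was closed by day 40.
print(days_to_close_fraction([10, 20, 40, None, 90, None]))  # 40
```

Comparing this number across cohorts (e.g. Kenna-prioritized vs. CVSS-prioritized) is what produces the 68-day vs. 114-day contrast above.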
That final data point reminds me again of John Sculley. His research also revealed Coke drinkers proudly poured their soda in front of guests, while Pepsi drinkers kept the bottles hidden in the kitchen. Pepsi drinkers, whether they realized it or not, appeared to believe their favorite cola was good enough for them, but not for their guests.
When it comes to cybersecurity, security professionals who favor CVSS scores or scanner-based solutions to prioritize their vulns should probably think about moving beyond “good enough” vulnerability management. Eventually, they’ll be asked to serve the results of their efforts to C-level execs and board members. And with cybersecurity stakes as high as they are today, those execs may well conclude “good enough” vulnerability management isn’t anywhere close to good enough.
It’s all in the data—and eventually, it finds its way into the customer experience. At least, that’s how it works here at Kenna. And whether you prefer Coke or Pepsi, the data shows that everyone should just drink more water.
Learn more about what Kenna Security got right when it created the risk-based vulnerability management category.