_Post updated: May 24, 2017_
If we’ve learned one thing in the past year, it’s that the market for ETL (extracting, transforming and loading data) is incredibly large. But the market for outsourcing all the headaches, anxiety and frustrations of data projects is orders of magnitude larger… and very few are taking it on. To do so requires trust, a core value of ours, which, once gained, is an invaluable retention asset.
It’s a Catch-22 of sorts: No one can steal customers if the value proposition we offer is taking stress away from data projects, because being approached by a new vendor induces stress, thereby eliminating that vendor as an alternative. We won't know how to offer value, however, unless we regularly engage with customers to gauge their satisfaction. This is all to say, we wouldn’t be where we are without the NPS (Net Promoter Score, a customer satisfaction metric).
In the beginning...
From the moment you sign your first customer, it's important to look for ways to assess how you're doing. The worst is getting to that point where someone is paying for a service, thinking you're delivering well on it, and then - BOOM! - the subscription is canceled. You could assess post-mortem, but by then, it would be too late.
Customer relationships are difficult because even if expectations are aligned, even if the scope of work is clearly dictated and the SLA (service level agreement) is clearly defined, _even if you are exceeding the agreed-upon scope_, your customer still may not be happy. Why is that?
The scary truth is that your customer may simply not need the work you are outputting. Or they do need the work you're outputting but they don't value it at the rate you're charging. Or they do need the work you're outputting and they do value it, but the value they derive out of it is dependent on something else entirely out of your control. Whatever it is, their dissatisfaction is going to come as a surprise if you're relying on the initial contract scope as the barometer of satisfaction. And the difficulty of customer retention—not to mention customer satisfaction—only increases as you add more customers.
NPS: The Customer Satisfaction Metric
One of the most common measures to gauge customer satisfaction on an ongoing basis is the NPS, or Net Promoter Score. Originally developed by Fred Reichheld while at Bain & Co., and currently used at established brands like AMEX and GE, the NPS represents the proportion of customers who would recommend your product/service less any who would actively discourage others from using it.
It works like this: Customers are asked on a scale of 0-10 whether they would recommend your company and then are split into three categories based on their response. Customers with a response of 9 or 10 are deemed "Promoters," 7-8 scores are deemed "Passives" (i.e. not feeling strongly enough one way or the other to give a meaningful rating), and scores of 0-6 are deemed "Detractors." Subtract the percentage of Detractors from the percentage of Promoters, and you’ve got your NPS. The highest possible score you can get is 100 (everyone is a Promoter) and the lowest is -100 (everyone is a Detractor).
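The arithmetic above is simple enough to sketch in a few lines. This is a minimal illustration (the function name and sample scores are ours, not from the post):

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, Passives 7-8, Detractors 0-6.
    Returns an integer between -100 and 100.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # Passives count toward the total but cancel out of the numerator.
    return round(100 * (promoters - detractors) / len(scores))

# 5 Promoters, 3 Passives, 2 Detractors out of 10 responses:
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 3, 6]))  # → 30
```

Note that Passives still matter: they dilute the score by growing the denominator without adding to either side of the subtraction.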
The central premise of the NPS as a measure of customer satisfaction is that people will give a positive (but not necessarily honest) response to a performance survey simply because they want to be nice, but they will only stake their reputation with their peers and recommend you if they are truly satisfied.
That all being said...
If the perfect version of an NPS is a flashlight showing you exactly how to grow, an actual NPS is more like a diffused streetlight, softly showing which customers are still on the main path and which are about to disappear down the many dark, meandering alleys. It's not a magic wand and it's not one of the many tools that are popping up to try to predict exactly which customers are going to churn. These use lofty, slightly intimidating terms (Machine Learning! Predictive Modeling!) to obfuscate the fact that, more times than not, they're poorly automating a nuanced process and generally selling snake oil.
Because we focus on onboarding each customer deliberately, we develop a communication style that encourages frequent check-ins and temperature checks. Taking NPS readings is not an independent initiative or afterthought; talking to customers is simply how we deliver our product, and there’s no way to divorce frequent interaction from Net Promoter Scores.
Introducing… NPS as a Leading Indicator
That's one way our philosophy of machines + humans complementing one another plays out. When we first started, we thought a completely automated SaaS platform was what we wanted to build. We saw that plug-and-play platforms were getting all those sweet VC dollars, so we drank the Kool-Aid and built just that. Then we tried to add services, since every customer's level of data maturity differed. Of course, that wasn't sustainable.
But through this type of experimenting and validating (NPS!), we were able to identify subclasses of customers within our core base and identify the different packages that satisfied each of them. In data science terms: We implemented a heuristic k-means cluster analysis as a means of naive feature extraction.* In other words, we took what we were doing in our "randomized" experiments (each customer’s unique experience with us) and looked for commonalities between our highest NPS scores. There was no art or magic to this. It just took agreeing to do a lot of different things for a lot of different customers and then seeing what they responded to. Ultimately, it led us to what we are now: a platform for data engineering that collects, processes and unifies enterprise data, so companies can get straight to analytics, data science and—more importantly—insights.
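As the footnote admits, no formal k-means analysis actually happened here, but as a toy illustration of the idea, here is what clustering customers on a couple of (entirely hypothetical) engagement features and then comparing average NPS responses per cluster might look like:

```python
def kmeans(points, centroids, iters=10):
    """Minimal k-means: assign each point to its nearest centroid,
    recompute centroids as cluster means, repeat."""
    k = len(centroids)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c
                     else centroids[i] for i, c in enumerate(clusters)]
    return clusters

# Hypothetical customers: features are (hands-on onboarding hours,
# number of data sources), paired with that customer's 0-10 NPS response.
customers = [((2, 1), 9), ((3, 1), 10), ((2, 2), 9),    # light-touch
             ((20, 8), 4), ((18, 9), 6), ((22, 7), 9)]  # heavy-touch
features = [f for f, _ in customers]
clusters = kmeans(features, centroids=[(2, 1), (20, 8)])
for cluster in clusters:
    scores = [s for f, s in customers if f in cluster]
    print(cluster, "mean NPS response:", round(sum(scores) / len(scores), 1))
```

Passing explicit initial centroids keeps the toy deterministic; a real analysis would use a library implementation (e.g. scikit-learn) with proper initialization and feature scaling.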
Now, we plan to continue experimenting with what we deliver depending on the demands of each customer. And we’ll gauge NPS scores frequently to see not just what works but what makes our customers truly satisfied, so we can use it to drive our product roadmap, just like it drove our product.
*If you know what those phrases mean, you know that this is kind of a stretch. OK, it's a pretty jerry-rigged description that doesn't correspond to any real methodology… but it does, at face value, accurately describe what we did.