How to fight churn with data science
The answer will surprise you. Over the last few years I have analyzed churn at more than 50 companies in my role as Chief Data Scientist at Zuora. What I have found is that the way to stop churn with data science doesn't involve the latest AI technology... What works best is a return to a traditional notion of "science": test some hypotheses about subscribers and churn, then communicate the results to the people who actually prevent churn: product and content creators, marketers, and customer representatives.
To many younger data scientists this may come as a surprise, because they have come of age in a time when the hype cycle is completely infatuated with neural networks and other black box products and technologies. So much so that being a data scientist is practically synonymous with deploying black box predictive systems. But AI is not advanced to the point where it can perform high value, high risk tasks like calling customers and designing email campaigns: by and large, those jobs still belong to account managers, marketers, and customer support and success representatives.
1. Predicting versus preventing churn
In this post I’m going to share some bad news for all of my fellow data scientists and analysts out there:
1.1 Predicting churn is hard.
Usually (hopefully) churn is rare compared to continuation of a subscription, so churn is what you call a rare outcome in data science lingo. As a result, false positives will be common even with the best predictive algorithms.
It's easy to see why predicting churn is hard and prone to false positives: consider your own behavior the last time you unsubscribed from something... You probably were not taking full advantage of the subscription for months, but it took you that long to cancel because you were too busy or uncertain. If a churn warning system had been observing your behavior during that time, it would have flagged you as a risk every month and been wrong every time. Until the right moment to cancel came, that is, but that moment was determined by too many extraneous factors to be predicted with high precision.
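To see why a rare outcome guarantees false positives, here is a minimal back-of-the-envelope sketch with hypothetical numbers (the rates are illustrative, not from any real churn model): even a respectable classifier ends up with low precision when churn itself is uncommon.

```python
# Hypothetical rates to illustrate the rare-outcome problem
base_rate = 0.05        # assume 5% of subscribers churn in a given month
recall = 0.80           # classifier catches 80% of true churners
false_pos_rate = 0.10   # and wrongly flags 10% of non-churners

# Of all flagged accounts, what fraction actually churns?
true_pos = base_rate * recall                  # flagged and churned
false_pos = (1 - base_rate) * false_pos_rate   # flagged but loyal
precision = true_pos / (true_pos + false_pos)

print(f"Precision: {precision:.0%}")  # roughly 30%: most flagged accounts won't churn
```

Even with 80% recall and a 10% false positive rate, about two out of three warnings are wrong, simply because loyal subscribers vastly outnumber churners.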
1.2 Preventing churn is even harder than predicting churn.
Preventing churn is what's really hard. If you think about it, every subscription has a cost that must be outweighed by the benefit delivered. If the cost outweighs the benefit, churn is just a matter of time. The cost may be a concrete transactional amount, or it can simply be the attentional cost of a subscription to a free service such as a game or YouTube channel (is it worth the space it takes in your subscription feed?). This means that to prevent churn in a long-term, reliable way, a company must actually move the needle on the benefit delivered or the cost incurred. That can be harder than getting people to sign up in the first place, because now they know what the product is actually like...
People have often asked me for "silver bullets" to reduce churn, and here's the bad news: there are no silver bullets to reduce churn, if by a silver bullet you mean a low cost, reliable way that always works. In the words of the great startup CEO and venture capitalist Ben Horowitz, "There are no silver bullets for this, only lead bullets." Meaning you have to do the hard work of increasing the value you provide to subscribers. Either that or reduce the cost, which is the nuclear option for a paid service - revenue churn or downsells may be better than complete and total churn, but it's still churn. (The downsell is a "diamond bullet" against churn: it always works, but you can't afford it...)
2. Preventing Churn is a human job
There have been remarkable advances in AI and data science in the past years, but for the most part actually preventing churn is still something that has to be done by people who either a) make the product, service or content; or b) interact with customers. It varies by the type of subscription offering and organization, but generally speaking these are the people who prevent churn:
Product managers, content creators and producers reduce churn by making changes to product features or content offerings to maximize stickiness
Marketers reduce churn by crafting effective mass communication that directs subscribers to the stickiest content and features
Customer success and support representatives prevent churn by making sure customers adopt a product and by helping them if they can't
Account managers are generally the last resort in stopping churn, assuming the product or service costs something: they can actually reduce the price or change the subscription terms.
From the point of view of the data scientist or analyst, these are the “customers” or “users” of the data analysis. At small organizations these may all be the same people or just one person, but that doesn’t change the question: what can data science do to really help people perform these tasks?
3. The real role of data science and analytics
For all of these reasons, the data science needed to reduce churn is not the kind of black box AI algorithm that gets most of the media attention nowadays. Instead, the real work is more traditional scientific and statistical analysis. Predictions of churn can be useful, but only as the natural extension of a program of investigation and knowledge transfer from the data scientist or analyst to the product and customer teams.
So a data scientist working to help reduce churn needs to act more like a social scientist or economist than a computer scientist. The data scientist needs to test specific understandable hypotheses about the causes of churn, like what content is stickiest or which behavioral metrics are most closely aligned to value attainment. Many of these hypotheses should come from the product and customer teams, but a good data scientist should be able to guide the process, challenge assumptions, and uncover some surprises. Then all of this has to be translated into knowledge that actually helps the real churn preventers do their job.
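As a minimal sketch of what "testing a specific, understandable hypothesis" can look like in practice, here is a simple two-proportion z-test on hypothetical cohorts (the feature name, counts, and churn rates are all invented for illustration): did subscribers who used feature X churn less than those who didn't?

```python
import math

def two_prop_ztest(churned_a, total_a, churned_b, total_b):
    """Z statistic for the difference in churn rates between two groups."""
    p_a, p_b = churned_a / total_a, churned_b / total_b
    p_pool = (churned_a + churned_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical cohorts: users of "feature X" vs. non-users
z = two_prop_ztest(churned_a=40, total_a=1000,   # 4% churn with feature X
                   churned_b=90, total_b=1000)   # 9% churn without
print(f"z = {z:.2f}")  # |z| > 1.96 means the difference is unlikely to be chance
```

A result like this is something a product manager can act on directly ("drive adoption of feature X"), which is exactly the kind of knowledge transfer a black box score can't provide.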
This point of view is actually well known to people who invest large portfolios on Wall Street. No one trusts long term, high value investments to black box predictive systems.* Moving money in and out of large positions means a long investment horizon and high transaction costs. Statistical methods are used to verify and quantify hypotheses that the decision makers already have about the markets, not to make predictions the decision makers cannot see the reasoning behind.
Likewise for churn prevention, the value at stake is high - your company's survival depends on it. And the cost of interventions can be high too. Poorly planned or executed interventions to prevent churn can be disastrous.
* It is common to use black box AI for high frequency systems making small trades that complete in seconds or less. In that scenario it is easier to halt a failing algorithm and course correct before much damage has been done.
4. Putting Data Science to Work
I point all of this out only because with so much hype around machine learning and black box AI technologies these days, inexperienced data scientists may not realize how inappropriate these approaches are for churn prevention. What is a data scientist to do?
4.1 Leave the Kaggle Mindset Behind
Data scientists and analysts have to stop thinking that accuracy on a predictive problem is the only metric that matters. This is a common attitude for academics developing algorithms on fixed benchmark databases, and it is amplified by competitions like Kaggle. However, this approach falls flat when the problem is a business decision with high stakes. Old fashioned hypothesis testing is the way to start.
4.2 Listening to the Business
Data scientists need to ask the business stakeholders what they are really interested in achieving, and just as importantly find out what hypotheses the business already has about the data they work with. The prior beliefs of people with deep domain knowledge are worth more than any algorithm!
4.3 Talking to the Business
Data scientists need to answer the questions the business asks, not just apply algorithms. And the answers need to be in terms the business can understand. Black box models are usually disqualified, and a lot of statistical jargon also has to go. Teach the business the most important findings with simple charts or cohort analyses they can reproduce in Excel. Once the data scientist has gained the confidence of the business there is room for more advanced approaches, but it has to start with a solid foundation.
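As a minimal sketch of the kind of simple, reproducible analysis described above, here is a cohort churn table built with plain Python (the subscriber records are hypothetical): a few rows the business could recompute in Excel with a pivot table.

```python
# Hypothetical subscriber records: (signup_month, churned)
subscribers = [
    ("2023-01", True), ("2023-01", False), ("2023-01", False),
    ("2023-02", True), ("2023-02", True), ("2023-02", False),
    ("2023-03", False), ("2023-03", False), ("2023-03", False),
]

# Tally (total, churned) per signup cohort
cohorts = {}
for month, churned in subscribers:
    total, lost = cohorts.get(month, (0, 0))
    cohorts[month] = (total + 1, lost + churned)

# A table anyone can read and reproduce
for month, (total, lost) in sorted(cohorts.items()):
    print(f"{month}: {lost}/{total} churned ({lost / total:.0%})")
```

No model, no jargon: just counts and rates, which is often all the foundation of trust requires.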
If you want to learn more about what I have found that really works, check out my website on this subject: fightchurnwithdata.com, or follow me on LinkedIn or Twitter.
Hi, this is Carl Gold, Chief Data Scientist at Zuora. Are you a data scientist, or helping to fill that role at your company? Let's get together at Subscribed and talk data! I love to hear how other subscription businesses wrestle with their data and analytics problems. Contact me through the community or email me directly (email@example.com). Looking forward to seeing you at Subscribed! Carl
This one goes out to all the data geeks: If you could create just ONE statistic to measure the coolest most epic most important thing in the world, what would it be?
Hold on - I didn't ask you what the coolest most epic important thing in the world is to you - I don't care, to be honest. It goes without saying around Zuora that the coolest most epic thing these days is the transformation going on in the economy, from products to services and commodities to relationships. But I know you've heard plenty about that, so I promise to shut up about it for the rest of this post!
I'm asking, how would you go about measuring the most important thing (whatever it is) with a single measurement or statistic? Because that's the decision I had to make when I was handed the task of designing the Subscription Economy Index. In this age of data deluge we've all become used to looking at everything from so many different angles (yes data geeks, I mean high dimension feature sets). And it gets hard when sometimes you have to pick just one number. Really hard. I think in the end we did a pretty good job with the Subscription Economy Index, but not without a few missteps along the way. With hindsight I'd say the process went through four steps, and I think this is a good way to work any time you have to summarize a complicated situation with just one number:
Ask what's most important
Make it dynamic
Make it represent
Make it robust
1. Ask what's most important
I started out with a lot of choices. We're all awash in numbers these days - so many metrics, measuring so many things. It's a really good exercise to stop sometimes and remind ourselves what's most important. For the Subscription Economy the natural choice might be subscriptions and subscribers (duh!), but then everyone likes to talk about churn. And how about the products and services themselves, or transactions? I felt a need to step back from specific numbers and ask: what is it that people really want to know about the Subscription Economy? My first idea was how big is it? Or another way to put it: how much; how much business is really going on? So the idea was to make some kind of metric of how big the Subscription Economy is, because that seems like the most important thing. But what?
2. Make it dynamic
I went through some ideas for statistics to measure how big the Subscription Economy is, like volumes of currency and numbers of transactions, but I found myself bored! I realized that just taking the grand total of something is boring. It's a snapshot, but it doesn't capture the dynamism. That was when I realized the Subscription Economy Index should measure growth! Not how big, but a different way of looking at how much, which is really: how fast is the subscription economy growing? Because one thing we see again and again in the modern world is that exponential growth in a new model can swamp the old order of things in no time. So the best metric to capture a new, emerging space like the Subscription Economy is the growth rate.
3. Make it represent
The next step was deciding how to actually measure that growth. The growth in the number of subscribers? The number of services? Going back to step 1, the key was to measure the growth that people really want to know about. The people interested in this index are most likely going to be companies in the Subscription Economy or investors in those companies, so I decided that the growth we measure should be the growth of the Subscription Economy companies themselves. And to make it really representative, the statistic had to represent the growth experienced by a typical Subscription Economy company, and not the aggregate growth of all the Subscription Economy companies together (that might be more interesting to an economist). That led me to the idea of calculating each company's growth separately and then making the statistic represent the average or typical value, and that's the Subscription Economy Index, in a nutshell.
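In code, the "calculate each company's growth separately, then summarize" idea looks something like this minimal sketch (the company names and revenue figures are hypothetical, and I'm using the median as one reasonable stand-in for "typical"):

```python
import statistics

# Hypothetical revenue by quarter, per company: (previous, current)
revenue = {
    "A": (100, 110),   # +10%
    "B": (200, 230),   # +15%
    "C": (50, 54),     # +8%
}

# Each company's growth rate is computed on its own numbers
growth = {name: (cur - prev) / prev for name, (prev, cur) in revenue.items()}

# Summarize with the "typical" value rather than pooling all revenue
typical = statistics.median(growth.values())
print(f"Typical growth: {typical:.0%}")
```

Note how this differs from aggregate growth: pooling the revenue would let the largest company dominate, while the per-company approach tells you what a typical constituent experienced.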
But there was a catch: remember I said the people interested in this are not only Subscription Economy companies, but also investors? We wanted the statistic to represent typical companies, but at the same time we wanted it to represent the size of the opportunity for investors. So if a "typical" company is a small opportunity for investors, we should over-represent the large companies that hold a bigger share of the opportunity, making the statistic more meaningful for investors. Fortunately, before I was a data scientist in Silicon Valley I was a quant for a Wall Street firm, and this kind of thinking was very familiar to me. The tried and true solution here is to use a weighted average with constraints, just like in a stock market index. That way the Subscription Economy Index would represent a balance between the "typical" mid-size companies (most interesting to the companies) and the uncommon large companies that represent more of the opportunity (interesting to investors).
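A weighted average with constraints can be sketched like this (the sizes, growth rates, and the 25% cap are all hypothetical, and a real index provider would iterate the capping step until no weight exceeds the cap; one pass is enough for the illustration):

```python
def capped_weights(sizes, cap=0.25):
    """Size-proportional weights, with any single weight capped."""
    total = sum(sizes)
    w = [s / total for s in sizes]
    w = [min(x, cap) for x in w]   # clip oversized weights
    norm = sum(w)
    return [x / norm for x in w]   # renormalize to sum to 1

sizes  = [1000, 300, 200, 100]      # hypothetical company revenues
growth = [0.05, 0.12, 0.10, 0.20]   # each company's growth rate

w = capped_weights(sizes)
index_growth = sum(wi * gi for wi, gi in zip(w, growth))
print(f"Index growth: {index_growth:.1%}")
```

Without the cap, the largest company would carry over 60% of the weight and the index would mostly reflect its 5% growth; with the cap, the mid-size companies' faster growth shows through while size still matters.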
4. Make it robust
At this point I was really excited about the index. But when I first ran some calculations it looked a bit odd: the growth was just too high to believe, and there were also odd quarters where some of the sub-indices were wildly inconsistent with the overall average. At that point I realized I had a serious outlier problem. And that brings us to the last step in making one great statistic: make sure it is robust!
If you've ever worked with growth rates you know there's only one rule: the minimum value is minus 100%. Other than that, all bets are off - because a growth rate is a ratio, it explodes when the denominator is small (and of course it's undefined when the denominator is zero, but that's not really a problem of robustness here since we just don't take a measurement in that case). Think about a startup that experiences 1500% growth from a starting point of $100 - hurray, they're making $1,600! If only they could keep that growth rate going all the way to $1M, but of course they can't. And even a weighted average can become badly deranged in the presence of extreme outliers. So I knew there had to be some kind of outlier removal as part of the process.
After some experimentation with historical data, I settled on removing the top and bottom five percentiles. Another thing that helped was using an extended burn-in period on each constituent before it became part of the average. That is partly a matter of robustness, since the most volatile period (most quarter to quarter change in the growth rate) was typically early in the life of each constituent. But it's also a matter of representativeness: We don't want the index to represent what happens when a constituent is still going live in the service or just starting out, we want it to show what they look like when they have hit their stride. One last thing about robustness: In order to make sure a statistic is robust you'd better play around with some historical data!
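Trimming the top and bottom five percent before averaging can be sketched like this (the growth rates below are hypothetical, with two wild outliers thrown in to show the effect):

```python
def trimmed_mean(values, trim=0.05):
    """Mean after dropping the top and bottom `trim` fraction of values."""
    vals = sorted(values)
    k = int(len(vals) * trim)  # how many to drop from each end
    kept = vals[k:len(vals) - k] if k else vals
    return sum(kept) / len(kept)

# 18 plausible quarterly growth rates plus two extreme outliers
rates = [0.08, 0.10, 0.12, 0.09, 0.11] * 3 + [0.07, 0.13, 0.10] + [15.0, -0.99]

print(f"Raw mean:     {sum(rates) / len(rates):.1%}")  # dragged way up by the 1500% outlier
print(f"Trimmed mean: {trimmed_mean(rates):.1%}")       # back to a believable figure
```

One tiny-denominator startup at 1500% growth is enough to push the raw mean to nearly 80%, while the trimmed mean stays around the 10% the bulk of the constituents actually experienced.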
Wait a second, didn't I say there were four steps? Well if you're a practicing data geek (and not the armchair variety) you know that for anything important you almost never get it right the first time. So the actual process looks less like four clean steps and more like a plate of spaghetti, looping back and forth between them.
Don't be afraid to go through a few versions and collect feedback from everyone you can.
The Beginning, Not the End
So there you have it: how to design a great statistic in four not-so-easy steps! All you have to do is figure out how to capture the dynamics of the most important thing, while making sure it is representative for multiple audiences and has a robust implementation. (You thought it was going to be easy? I never said that.) But of course that's just the beginning - after that you get to watch your metric live and help you understand the real world! After all that hard work, I'm looking forward to watching the Subscription Economy Index for years to come...