
Customers Don't Always Churn
Sometimes They Fade. And That Changes Everything - (Part 1)
Mark Sage - 12 min read - 05/01/2026
Those who know me know that I use a few phrases regularly — possibly a little too regularly — and one of these is to ‘measure what matters and do more of what works’.
It’s a simple phrase but it speaks to the need to ensure that whatever you’re measuring, you’re actually able to see the impact. More than that though, it also speaks to how measures truly influence outcomes. Get the measures right and you can really achieve great things; get them wrong and you have an organisation in disarray.
One measure in particular sits at the core of loyalty — that is churn — and it’s typically visualised, and its benefits extolled, through the image of a leaky bucket.
The water in the bucket represents your current customers and the inflow at the top is your customer acquisition efforts, acting like a tap that’s filling the bucket up. Sometimes it’s flowing faster, and sometimes slower, depending on the market, season and your investment.
In an ideal world, your bucket would just keep filling, but for lots of reasons, that isn’t the case. And so, at the bottom of the bucket is a hole — and through this hole flow your lost customers. Depending on the size of the hole, you might be losing customers quickly or slowly.
Given that loyalty marketing’s main role is to retain customers, the flow through this hole at the bottom of the bucket is pretty important.
If you can slow it down, then in theory, you increase the water level in the bucket — you increase your total active customers, and by definition, your market penetration (i.e. those people in a given market who have bought from you within a given period).
For decades, we’ve used this mechanism — and symbolism — within loyalty to create a measure called ‘churn’. This measure uses a simple formula calculated as customers lost ÷ customers active. Essentially, how many customers flow out of the bucket versus those currently in the bucket, within the time period — say monthly.
The aim of the churn metric is to show how you’re retaining customers, and for loyalty marketers this is, not surprisingly, a core focus — stem the leakage.
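If you want to see that formula on paper, it takes a few lines of Python — the numbers here are illustrative, not from any real programme:

```python
# The classic churn formula: customers lost divided by customers active,
# measured over some period (say, a month). Numbers are made up.
def churn_rate(customers_lost: int, customers_active: int) -> float:
    """Churn = customers lost / customers active, for a given period."""
    return customers_lost / customers_active

# e.g. 500 customers lost out of 10,000 active this month
monthly_churn = churn_rate(500, 10_000)
print(f"monthly churn: {monthly_churn:.1%}")  # prints "monthly churn: 5.0%"
```

The simplicity is exactly the problem, as we’ll see: a single ratio can’t tell you who left, or when.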
Metaphorically, the loyalty marketer is like the “Little Hero of Haarlem”, who saves his country by putting his finger into a leaking dike hole to stop the water. Using loyalty mechanics and marketing, like the little boy, our aim — and you could say our whole reason for existing — is to shrink that hole so that we slow the tide of customer loss.
If done well, then your “bucket does flow’eth over”.
In a famous HBR article by Reichheld and Sasser — way back from 1990 — they suggested that “companies can boost profits by almost 100% by retaining just 5% more of their customers” - so that bucket is potentially flowing not just with water, but with money!
But this metric isn’t a bucket of water at all — it’s a crock of nonsense
(Much like the Reichheld & Sasser stat which has been debunked).
Instead, this measure of churn is a compound illusion masquerading as insight, and as Dale Harrison argues — a thought leader in the marketing measurement space — “the typical Churn Metric [..] averages away almost all the useful information needed to understand the underlying patterns of account loss” — it’s not simply unhelpful, it can actively obscure the truth.
If things are going well, then as long as the flow of water at the top is fast enough — and you keep bringing in new customers — the bucket always looks full. This is what happens in a growing business or a growing sector. It has little to do with your loyalty efforts and a lot to do with the market and your brand marketing. You might be losing huge numbers of customers after their first purchase, but if lots of new people are still coming in, then the overall customer base grows. On paper, churn looks low, and as a loyalty marketer you can sleep easy; in reality, you’re bleeding customers and you just can’t see it because new ones keep masking the loss.
Now flip the situation.
If the flow of new customers slows down — maybe because of seasonality, the economy, or a marketing pull-back — then your bucket isn’t being replenished at the same rate. The hole at the bottom is the same size, though, so the same number of people are leaving as before. Nothing has changed for existing customers; they are still leaving at the same rate. But because your acquisition rate has slowed, your churn rate now looks higher, and it feels like something has gone wrong. As a loyalty marketer, all eyes are on you as to why you can’t keep the customers.
It’s the same leaky bucket, and the same loyalty marketer, but the churn rate is varying wildly.
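You can see this masking effect with a few illustrative numbers (made up for this sketch — only the acquisition tap changes between the two scenarios):

```python
# Same hole, same 800 customers lost each month in both scenarios.
# Only the flow of new customers into the bucket differs.
def churn_rate(lost: int, active: int) -> float:
    return lost / active

base = 10_000
fast_tap = base + 3_000   # growth market: the bucket keeps swelling
slow_tap = base + 500     # acquisition slows: the bucket barely refills

print(f"growth market churn: {churn_rate(800, fast_tap):.1%}")  # looks low
print(f"slow market churn:   {churn_rate(800, slow_tap):.1%}")  # looks high
```

Identical customer behaviour, very different headline churn — the metric is reporting on your acquisition, not your retention.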
This is what Harrison highlights as the fatal flaw of traditional churn measurement. The metric is a “compound average” that hides underlying patterns, just as the average temperature of a year hides the difference between summer and winter.
In Harrison’s words, the “highly distinct patterns [..] completely disappear when simply looking at the normal [..] churn metrics”, erasing what he identifies as the early-life drop-off, smoothing the long-term decay, and ignoring the small but crucial group of ‘die-hards’ who never churn.
The bucket metaphor assumes a steady flow in; but real markets don’t behave like that. A better metaphor might be to think of potential customers as water coming from a garden sprinkler.
The spray is moving around constantly and our job as marketers is to ensure we capture as much as we can.
This happens in two ways.
Firstly, we need big enough buckets. If our bucket is too small, then we’re only going to catch so much water.
Second, we need to ensure our bucket is where the spray is, and this most likely means we need multiple buckets, each targeting a different opportunity.
So, it’s less about blocking a hole in “a” bucket, than ensuring that all our buckets reach the right audience.
That reach part — the size of the bucket — is always going to be a function of money: having large enough budgets to ensure both high marketing reach and strong physical distribution. This is why the double jeopardy law states that bigger brands have more loyalty — simply because they are bigger.
The placement of the buckets though is also important, and this is where loyalty marketing can help, and where that concept of churn comes back in. Not churn in the simplistic sense of how many customers were lost versus retained, but in the sense of which customers you lost and when.
Knowing how long particular groups of customers stay can help you to move the relevant bucket to capture both more and better customers.
Now, instead of asking how many left this month, we can ask something much more important: once a customer joins, how long do they stay? Or, to use Harrison’s words, “how long did the account survive” — because the better approach to understanding churn is a Cohort-Based Survival Analysis.
This approach seeks to track customers from their first day or first shop, and group them into meaningful cohorts (month joined, segment, channel, offer, etc.) — these are your different buckets — so you can see the actual pattern of churn over time.
When you do this, the same three shapes almost always appear — a brutal early cliff, a long-term decay curve, and a final plateau of “die-hards” who never leave. Harrison notes that these patterns appear in every category where true churn exists — and so any metric which hides these shapes is simply not useful.
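The mechanics of a cohort survival curve are straightforward — here’s a minimal sketch with hypothetical data (the customer records and 30-day month approximation are mine, purely for illustration):

```python
from collections import defaultdict
from datetime import date

# Hypothetical customer records: join date and last observed activity.
customers = [
    {"joined": date(2024, 1, 5),  "last_seen": date(2024, 1, 5)},   # one-hit wonder
    {"joined": date(2024, 1, 9),  "last_seen": date(2024, 6, 2)},   # slow decay
    {"joined": date(2024, 2, 14), "last_seen": date(2025, 2, 1)},   # die-hard
]

def survival_curve(cohort, horizon_months=12):
    """Fraction of the cohort still active at each month since joining
    (approximating a month as 30 days)."""
    n = len(cohort)
    curve = []
    for m in range(horizon_months + 1):
        survivors = sum(
            1 for c in cohort
            if (c["last_seen"] - c["joined"]).days >= m * 30
        )
        curve.append(survivors / n)
    return curve

# Group members by join month — these are your separate "buckets" —
# then compare each cohort's curve to see the cliff, decay and plateau.
cohorts = defaultdict(list)
for c in customers:
    cohorts[c["joined"].strftime("%Y-%m")].append(c)

for month, cohort in sorted(cohorts.items()):
    print(month, [round(x, 2) for x in survival_curve(cohort, 6)])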
That early cliff is different for different industries, but you will always see an early drop-off from ‘one hit wonders’ — those customers who sign-up, but you never see again.
Harrison’s insight is that the decision to churn almost always happens long before the churn metric registers it. In SaaS, customers stop using the product weeks before they cancel. In travel retail, many customers never have any intention to return — this was simply a dream trip and a chance encounter.
The cliff shows up in transactions, but it’s caused by behaviour; customers often decide not to return long before the system registers their absence.
On yuu Rewards, we retained 80% of those we initially enrolled over the first two years — so our cliff was small. In contrast, on DFS CIRCLE — the loyalty programme for the global travel retailer DFS — we lost 80% after the first purchase.
This wasn’t a failing of loyalty; it was simply a structural dynamic of the market. One is an everyday category, where customers shop 2–3 times a week; the other is luxury travel retail, where some customers visit just once a year or less.
What really matters is to understand where your customers have come from (channel / offer / market), and how long they are staying — how long did they survive.
Knowing this lets you focus time and attention on those marketing channels delivering better customers or fixing those product and service challenges which trigger a drop off. In both cases, you’re then filling your bucket with customers that survive a little longer.
When growth masks truth
Launching and running DFS CIRCLE gave me hands-on exposure to just how misleading churn and retention metrics can be in a volatile, high-flux environment.
When I arrived, the loyalty numbers looked reassuringly strong. Every dashboard spoke the same language, with our “active member” base growing, retention appearing solid, and the programme looking healthy. On paper, it seemed we were doing everything right.
But as I spent more time with the data, it became clear that the picture wasn’t as clean as the headline metrics suggested.
There was one number missing entirely from our reporting — we were losing 80% of new members after just one purchase.
In most sectors this would be catastrophic, but in travel it’s not unusual. You naturally see a constant stream of tourists — people who buy once, enjoy the experience, and then disappear into the ether, never to return (at least not to that destination or store). Many airline programmes have the same challenge; most frequent-flyer schemes have only a single-digit percentage of their members earning anything meaningful.
Emirates, for example, reported just 2.5% of their members in Silver tier or above. Their base tier ‘Blue’ is pretty much 97% of all their members, and you don’t even need to fly to be in that tier.
So the 80% survival cliff didn’t worry me. What worried me was that none of our reporting showed it — exactly the trap Harrison describes when compound averages flatten the cliff into a comforting line.
Two effects were hiding the problem.
First, DFS was in a growth market. New customers were coming in month after month, and the flow of new shoppers was large enough to wash away the old. By the time a “one-time” tourist fell out of the 24-month active window, they had already been replaced several times over by new visitors. This constant inflow made the programme look vibrant. We weren’t retaining better; we were simply acquiring faster.
Second, our definition of “active” was incredibly generous. We counted anyone who had shopped within the last 24 months as retained. That meant someone could buy once, vanish for one year and eleven months, and still be treated as a loyal, engaged member. In travel retail, that’s extremely long, and it guaranteed that we would fool ourselves into believing we had more repeat customers than we actually did. High-value shoppers could quietly disappear for months, and we would never notice.
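That second effect — an over-generous “active” window — is easy to demonstrate. In this sketch the member, dates and windows are all hypothetical (and a month is approximated as 30 days):

```python
from datetime import date

# An "active member" test: did the member purchase within the window?
def is_active(last_purchase: date, today: date, window_months: int) -> bool:
    return (today - last_purchase).days <= window_months * 30

today = date(2026, 1, 1)
one_off = date(2024, 2, 1)  # bought once, roughly 23 months ago

print(is_active(one_off, today, 24))  # True  — counted as "retained"
print(is_active(one_off, today, 12))  # False — the truer picture
```

The wider the window, the more one-time buyers get silently relabelled as loyal members.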
Everything changed when we stopped looking at the programme through a rolling two-year window and began examining customer behaviour through survivorship cohorts.
Instead of treating the member base as one amorphous mass, we looked at every member to understand when they essentially dropped away — building a churn curve that let us see the initial cliff and the long tail of those that survived. From this, we created three simple groups — one-visit, two-visit, and three-plus-visit members.
Almost instantly, the picture sharpened because survivorship analysis stops averaging the cliff, the decay, and the die-hards into one meaningless number.
The single-visit cohort behaved exactly as expected — enormous in volume and evaporating almost immediately. But single visit didn’t mean single purchase — we counted any number of purchases within the first 7 days as a single visit. As a travel retailer, customers would typically stay at a destination for a few days and inevitably pop in and out of our stores. We weren’t interested in that single trip; we wanted to see whether customers came back after it.
The two-visit cohort was far smaller but more promising; these customers showed early signs of engagement but were far from secure. To be in this group, you needed to have done two ‘trips’ and using the churn curve analysis, we could see that the real change was where the second visit was seven days or more later. So essentially, two ‘visits’ within 12 months that are more than 7 days apart would land you here.
The three-plus group was smaller again, but far more stable. These were the closest thing we had to genuinely loyal customers. Once someone passed that third-visit threshold — coming into our stores at least three times in 12 months — then their behaviour looked much more like what you’d expect from a retained shopper.
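The visit-grouping rules above can be sketched in a few lines — the function names and sample dates are mine, but the 7-day collapsing rule and the three cohort labels follow the logic described:

```python
from datetime import date

def count_visits(purchase_dates, gap_days=7):
    """Collapse purchases less than `gap_days` after the start of a
    visit into that same visit; a later purchase starts a new visit."""
    if not purchase_dates:
        return 0
    dates = sorted(purchase_dates)
    visits = 1
    visit_start = dates[0]
    for d in dates[1:]:
        if (d - visit_start).days >= gap_days:
            visits += 1
            visit_start = d
    return visits

def cohort_label(purchase_dates):
    visits = count_visits(purchase_dates)
    if visits >= 3:
        return "three-plus"
    return {1: "one-visit", 2: "two-visits"}.get(visits, "none")

# A tourist popping in and out over one trip is still a single visit...
trip = [date(2025, 3, 1), date(2025, 3, 2), date(2025, 3, 4)]
print(cohort_label(trip))                       # "one-visit"
# ...but a return more than 7 days later starts a second visit.
print(cohort_label(trip + [date(2025, 9, 10)])) # "two-visits"
```

In practice you’d also restrict the purchase dates to a rolling 12-month window before counting, per the rules above.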
Tracking how customers moved between these groups finally revealed the truth that the aggregate numbers had hidden.
We could see where the real drop-offs occurred, how quickly new customers disengaged, which cohorts carried long-term value, and how different channels and locations performed. Most importantly, we could now measure our direct impact — our ability to bridge customers from one frequency cohort to the next.
We didn’t stop there. Within those three purchase-frequency groups, we added extra layers tracking where customers lived and where they shopped (indicating their propensity to repeat), and tracked the breadth of their shopping behaviour — single category vs. multi-category.
As expected, customers shopping across multiple categories and/or multiple destinations were significantly more valuable.
This wasn’t spend tiering — we already had tiers. This was a cohort-based behavioural segmentation, designed to track how groups of customers survived over time and how effectively we could influence their movement.
Each month, the reporting now told us how each cohort was performing and surviving. We could see, for example, that the highest-performing cohort stayed 116% longer than the lowest, or that we were increasing the number of regular shoppers — and whether that increase came from net-new customers or reactivated ones. That’s Harrison’s customer half-life in practice — different cohorts decay at different speeds; your job is to lengthen those half-lives.
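Harrison’s half-life idea reduces to a simple question of the survival curve: by which month has half the cohort gone? A small sketch, with an illustrative curve:

```python
# "Customer half-life": the first month at which cohort survival
# drops to 50% or below. The curve below is illustrative only.
def half_life(survival_curve):
    for month, surviving in enumerate(survival_curve):
        if surviving <= 0.5:
            return month
    return None  # more than half the cohort still active at the horizon

# A cohort with a steep early cliff, then slow decay:
curve = [1.00, 0.45, 0.40, 0.38, 0.36]
print(half_life(curve))  # prints 1 — half the cohort gone within a month
```

Comparing half-lives across cohorts is the practical version of “do more of what works”: lengthen the half-life of the cohorts you can influence.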
With the picture finally clear, we could focus our effort.
We were under no illusion that we could turn the 80% “one-hit wonders” into long-term regulars — in travel retail, that isn’t the game.
For those “Visiting Shoppers,” the aim was simple. Maximise their time in-market, grow their baskets, and grow their purchases while they were still in town. Separately, we’d give them a reason to return later — as part of a new trip — which if they did, meant they’d become “High Potential”. Two visits meant there could be more visits, so we doubled down our efforts to guide them into the three-plus cohort.
To track marketing effectiveness — essentially, to know where to place the bucket — we analysed each cohort by campaign and offer to understand which channels brought in the best long-term customers. A social-network acquisition campaign, for example, delivered only a 6% repeat rate and half the expected spend. By contrast, a tier-match campaign with a credit-card company delivered a 27% repeat rate and spend 43% above normal. Once you start tracking survival, you quickly learn which channels attract low-value skimmers and which attract genuine future loyalists.
Overall, the DFS experience taught me a valuable lesson about loyalty measurement.
Growth can make a programme look healthy even as it is silently collapsing. A retention metric is only meaningful if it reflects true customer behaviour, not the cosmetic effects of a constantly replenished base.
It’s also easy to be misled by big numbers — especially when they move in the right direction — but the real story of loyalty sits not in the volume of customers you have today, but in the journeys those customers take tomorrow. When you finally examine those journeys — as we did at DFS — you often discover a very different story than the one your dashboards may have been telling you.
And this is exactly Harrison’s point — churn only makes sense when customers truly churn. In categories where customers ‘fall’ (like travel), survivorship analysis reveals the cliff. In categories where customers ‘fade’, almost imperceptibly (like grocery), churn becomes almost meaningless.
Which brings us to yuu Rewards, and why for a programme like this, churn stops being a helpful measure altogether — and an entirely different set of metrics becomes far more revealing.
Fading, not falling — this is the topic we’ll cover in Part 2.
