
CLV Part V: The Backbone of CLV

As the backbone of any CLV model, the customer decay curve is arguably its most important driver. Its importance extends well beyond the exercise of modeling CLV. As we noted earlier, it's crucial to obsess over your customer decay curve, and how to elevate it.


When we say "customer decay curve", we're modeling out what % of paid customers pay only once, 2 times, 3 times, and so on. Said another way, we want to know what share of our customers are going to cancel in the first month, second month, etc. This helps us understand how much, on average, we can expect to earn from people who subscribe.
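To make the definition concrete, here's a minimal sketch in Python. The decay curve numbers below are illustrative placeholders, not figures from the workbook:

```python
# Illustrative decay curve: share of paid subs making payment 1, 2, 3, ...
# (payment 1 is always 100% -- everyone in the curve paid at least once).
decay_curve = [1.00, 0.85, 0.78, 0.73, 0.70, 0.68]

# Share of subs cancelling between each payment and the next
churn_by_month = [round(a - b, 2) for a, b in zip(decay_curve, decay_curve[1:])]

# Expected payments per subscriber over this window: the sum of survival
# rates, since each term is the probability a sub makes that payment.
expected_payments = sum(decay_curve)

print(churn_by_month)            # share lost after payments 1, 2, 3, ...
print(round(expected_payments, 2))
```

Multiplying expected payments by the monthly price gives a first-pass revenue-per-subscriber figure, which is exactly why the curve sits at the heart of the CLV model.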

In the workbook, the decay curve assumption is towards the bottom of the "CLV Model" tab (line 25). We've highlighted it and added a thick border in the image below. There's also a visual of the decay curve assumption in the left line chart. This can help us grasp how the curve moves as we adjust assumptions.










Pre-launch With No Data

To continue guiding us through, let's resurface our hypothetical newsletter start-up: Big Dean's Boardwalk Wings, a weekly exploration of innovative wing recipes.


The team behind Big Dean's ("BD") is in the planning phase and has no clue how well it will retain its subs. Many operators will face a similar challenge of limited or no data, especially early on.


A Few Data Points Can Be Quite Helpful


For those just starting, resourcefulness goes a long way. Even a few data points can guide initial assumptions, and they'll provide context as our retention data comes in. If using a platform (e.g. Patreon, Substack), it doesn't hurt to ask them for guidance. Probably more effective: ask operators running similar newsletters for ballpark figures.


The BD team finds similar Food & Drink newsletters on Substack's Discover section. They also scour recommendation services for food-related newsletters (e.g. Newsletter Stack). The BD team then reaches out to several operators, using their network where possible. Most of the operators are incredibly friendly and helpful. The discussions lead to a handful of useful data points, as well as tips on best practices and other insights. (Retention rates can be sensitive data. It's wise to take a respectful approach and manage your expectations.)

Always Nice to Be Pleasantly Surprised

While useful, the data points don't necessarily paint a definitive view. The team still has to choose from a wide range of possibilities for the initial decay curve assumption. Early on, the goal is not precision - we're only trying to get in the right ballpark. 


Also, with any assumption, it's always nice to be pleasantly surprised. It's usually prudent to err towards more conservative assumptions for any performance metric. This is especially true with limited or no data. 


So let's say the BD team discovered similar newsletters retained 50% to 70% of paid subs after the first year. It's probably better to model the initial curve towards 50%. If you outperform, wonderful! We can update our assumption with the data coming in (which we cover in the next post). We can then use the updated curve & CLV to guide adjustments in decision making & strategy. But again, surprises are always better when they're pleasant, so make assumptions accordingly.

Back to the CLV Model


The Big Dean's team is much better positioned with their initial target of 50% retained after year 1. But how do they actually model out the decay curve towards that target? Let's return to the model and dive into the "CLV Model (2-yr) w Power" tab.


Before diving in, I should mention - I'm laughably far from a mathematician. The approach I'm describing is country miles from the most sophisticated approach. I'm attempting to guide through a process that I can do, which means anyone can do it. But by all means, please use this as a launchpad and build in sophistication that leads to more accuracy. 


From earlier, remember it's highly unlikely our decay curve has a linear decline. Usually, paid subs are most likely to cancel in the first month. With each successive month, the likelihood continues to decrease. There are exceptions, but let's sidestep those for now.


To model out this type of behavior in our initial decay curve, and avoid a linear decline, we can use a power function. In a power function, we take a variable and raise it to a constant power. In our model, the variable is time, expressed in months (to map to monthly subscriptions). The constant power captures how a sub's likelihood of cancelling declines with the passage of time.

The constant power is our key assumption (located in cell L-17 in bold and blue text). We can adjust the power function until we arrive at our target, based on the data points we've collected. The BD team adjusts the driver until 50% of subs remain after year 1. (Which ends up being a constant power of -0.28.)
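As a sanity check, the power function itself is a one-liner. The -0.28 power below is the value the BD team lands on; month 1 is normalized to 100%:

```python
# Initial decay curve from a power function: survival(t) = t ** p, where t is
# months since first payment and p is the constant power assumption.
p = -0.28  # the BD team's choice, tuned to hit ~50% retained after year 1

def survival(month: int) -> float:
    """Share of a cohort still paying at a given month (month 1 = 100%)."""
    return month ** p

curve = [survival(m) for m in range(1, 25)]  # 24-month max lifetime
print(round(survival(12), 2))  # ~0.50, i.e. roughly 50% retained after year 1
```

Tuning the assumption is just a matter of nudging `p` until `survival(12)` lands on the target implied by your collected data points.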

Echoing from above, this simple approach is unlikely to generate accurate predictions. We're only trying to get in the right ballpark, and lay a solid foundation for when we have data. Don't fool yourself into thinking this is a crystal ball. It's probably best to hold off on using this for major decisions until we have data to drive the model. Which brings us to our next section...

Using Data to Help Predict the Future

Once launched, we can use retention data, as it comes in, to update our projections. The more our CLV model becomes data-driven, the more confident we should be in the CLV projection.

But first, what is retention data?


There are various ways to structure retention data. But let our purpose be the guiding light. The key question we're attempting to answer is what share of subscribers just signing up will only pay once, or only pay twice, and so on. 

After launching in July 2020, the Big Dean's team has been in a frenzy, continuing to ship and improve the product. In February 2021, they're finally able to carve out time and look at how well they're retaining their subs. They download their retention data, perhaps from Stripe and/or other billing platforms. To see what this could look like, let's check out the "Retention Data Example" tab in the model.

The dark blue "Current Date" driver represents when we're accessing the data (2/5/21 in our example). Our subscription data is constantly updating as subs continue to pay or cancel. Our current data set is updated up until our "current date" driver. (More on why this is important in a second.) The light blue text is dummy retention data - let's cover each column.


"Cohort_month" represents the month in which a new subscriber pays for the first time. For monthly subscription products, it's helpful to bucket subs into monthly cohorts. That said, more granular options (daily, weekly) can be valuable. As can higher-level groupings (quarterly, annual). For now, let's group subs into monthly cohorts.

"Successful_payments" represents how many times a sub has paid at the moment when we pull the data (in BD's case, 2/5/21). "Subscriber_count" sums up the amount of subs that meet that meet the criteria of that row.


For example, in row 5, there are 1,000 subs that paid for the first time in July '20, and have paid at least one time. In row 6, there are 850 subs that paid for the first time in July '20, and have paid at least twice. From these two data points, we can calculate how many of the July '20 paid subs did not make it to their 2nd payment. 1,000 paid at least once - 850 paid at least twice = 150 subs only paid once. Or 15% of our July 2020 cohort only paid once.
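The arithmetic in that paragraph looks like this in code, using the dummy numbers from the example tab:

```python
# Dummy rows for the July '20 cohort, mirroring the "Retention Data Example"
# tab: (cohort_month, successful_payments, subscriber_count).
rows = [
    ("2020-07", 1, 1000),  # paid at least once
    ("2020-07", 2, 850),   # paid at least twice
]

paid_once_only = rows[0][2] - rows[1][2]          # 1000 - 850 = 150 subs
share_one_and_done = paid_once_only / rows[0][2]  # 150 / 1000 = 15% of cohort

print(paid_once_only, share_one_and_done)
```

The same subtraction between adjacent payment counts gives the drop-off at every step of the curve.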

We can carry this logic through to our other monthly cohorts. This is our first glimpse into how retention is trending over time. For example, the BD team notices an improvement in retention after the July 2020 cohort. The percent of subs making it to the 2nd payment has hovered above 90% (vs. 85% for the July '20 cohort). An encouraging sign...


We can also form a blended decay curve (column R). The key is to only include subs that have had a chance to reach a certain point. We wouldn't want to include subs that joined in the past few months in our month 6 retention rate. (This is the basis for the cells with "n/a" in the monthly cohorts.)


On a related note, the higher the sub count, the more confident we should be in the predictive power of a data point. For example, we've had over 7,000 subs that have had the chance to pay at least twice in our blended decay curve (S2 cell). But we've only had ~700 subs that have had the chance to pay 6 times (S6). We should feel much more confident in the month 2 retention rate compared to month 6 retention.
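A sketch of the blending logic, including the maturity filter behind the "n/a" cells. The July cohort counts match the example above; the other cohorts and the `months_between` helper are hypothetical:

```python
from datetime import date

current = date(2021, 2, 5)  # the "Current Date" driver

# Hypothetical cohorts: start date -> subs reaching payment 1, 2, 3, ...
cohorts = {
    date(2020, 7, 1): [1000, 850, 800],
    date(2020, 8, 1): [1200, 1090, 1010],
    date(2020, 12, 1): [1500, 1380],  # too young to count toward payment 3
}

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

def blended_rate(n: int) -> float:
    """Blended share of subs making payment n, over mature cohorts only."""
    base = paid_n = 0
    for start, counts in cohorts.items():
        # Only include cohorts that have had a chance to reach payment n
        if months_between(start, current) >= n and len(counts) >= n:
            base += counts[0]
            paid_n += counts[n - 1]
    return paid_n / base

print(round(blended_rate(2), 3))  # all three cohorts are old enough
print(round(blended_rate(3), 3))  # December '20 cohort excluded
```

Note how the December cohort drops out of the month-3 rate entirely, rather than dragging the blend down with incomplete data.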

There's a world of opportunity to dive deeper. A few important ideas we'll return to in future posts:


  • The power of segmentation. Averages are dangerous. Insights become abundant when we observe how the decay curve shifts across cohorts. And there are endless ways to segment our subscribers. Crucially, these insights will inform our efforts to improve retention.

  • Involuntary vs. voluntary cancels. Voluntary is when a sub chooses to cancel. Involuntary cancels come from payment issues, where the sub doesn't actively choose to cancel. Involuntary cancels are a huge challenge for digital subscription products.

  • Decay curves for uninterrupted subscriptions vs. total payments across multiple subscriptions. The latter gets into subs that cancel and then sign back up later.

  • When to stop using data. Over a longer period of time, some of our retention data will become stale. Eventually, we may decide to limit the data we use for our projections. 


For now, let's turn back to the BD team. We're now equipped with a blended decay curve, which we can use to update our CLV model.

How do we update our CLV model with data?

The BD team is ecstatic to have actual retention data. They return to their CLV model, feeling empowered. Right away, they notice their blended curve is better than their initial projections. Big Dean's has retained 73% of their paid subs after 6 months (vs. initial projection of 61%). The outperformance nudges their customer lifetime up to 13.6 months (vs. 13.0 months before).
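For reference, the customer-lifetime figure is just the area under the decay curve: the sum of the monthly survival rates over the 24-month max lifetime. A sketch using the initial -0.28 power, which is roughly how the ~13.0-month starting figure arises:

```python
# Customer lifetime (in months) = sum of monthly survival rates over the
# 24-month max lifetime. With the initial power of -0.28, this lands near
# the 13.0 months quoted in the text before the data-driven update.
p = -0.28
lifetime = sum(month ** p for month in range(1, 25))
print(round(lifetime, 1))  # ~13.0 months
```

Re-running the same sum with the updated, data-driven curve is what nudges the lifetime up to 13.6 months.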









That said, the BD team realizes the curve looks a little funky. The power function has also lost some of its predictive power (measured by R²). The team decides to adjust the power function to account for the data flowing in. Here's a step-by-step guide to updating the power function and outer-month projection.


Step 1 - Create a line chart with our blended decay curve. 









Step 2 - Right click on the line and add a trendline. Choose the power function and forecast forward 18 months (to map to 24 month max lifetime). Select to show the equation and R-squared on the chart.

Step 3 - Input the new power function from the trendline into the power function driver. Fine-tune the power function, keeping an eye on the R² and shape of the curve.
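If you'd rather see what the trendline tool is doing under the hood: a spreadsheet power trendline fits y = a·x^b by ordinary least squares on log(y) versus log(x). Here's a sketch with illustrative retention numbers (not the workbook's):

```python
import math

# Illustrative blended decay curve: month number -> share still paying
months = [1, 2, 3, 4, 5, 6]
retention = [1.00, 0.91, 0.85, 0.80, 0.76, 0.73]

# Power trendline y = a * x**b, fit by least squares in log-log space
lx = [math.log(x) for x in months]
ly = [math.log(y) for y in retention]
n = len(lx)
mean_x, mean_y = sum(lx) / n, sum(ly) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(lx, ly)) / \
    sum((x - mean_x) ** 2 for x in lx)
a = math.exp(mean_y - b * mean_x)

# R-squared in log space, matching the trendline's displayed R²
ss_res = sum((y - (math.log(a) + b * x)) ** 2 for x, y in zip(lx, ly))
ss_tot = sum((y - mean_y) ** 2 for y in ly)
r2 = 1 - ss_res / ss_tot

print(round(b, 3), round(r2, 3))
```

The fitted exponent `b` is the value to carry into the power function driver; the R² tells you how well a single power curve explains the blended data.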


















Step 4 - Rejoice! We now have a data-driven CLV model. We can rinse & repeat the steps above as we receive more data. And as our model becomes more data-driven, our confidence in its output will grow.  


Encouraging Early Signs


To close this section, a brief aside. One of the reasons I'm so excited about the passion economy is the quality of performance metrics.


A few operators have shared their retention rates on Twitter or in their newsletter. A promising, early sign: the retention rates are very strong. The few operators that shared are retaining 80% to 90% of subs after the first year. For comparison, Netflix and other streaming services retain 30% to 60% of subs after the first year. Before we get too excited, there are a million caveats, especially with any comparison.


But creators that maintain this level of retention will have very high LTVs. This is exciting for many reasons. First, higher LTVs mean we don't need millions of paid subs to build a sustainable business. Also, high LTVs leave room to invest in the product, growth, and other areas of the business. These types of investments, if done effectively, will improve our product and the awareness of our product. This positive loop will allow us to reach a sustainable run rate, and do so more quickly. This will become more apparent in our next chapter, where we build out an operating plan.



Even typing this invokes a stern image of Nassim Nicholas Taleb. My point is that data is more useful than absolute guess work. But we should never consider these projection models absolute truths. Stay humble, or the world will likely make you humble.



It's risky to extrapolate a few data points across all creators. Most creators are in the early innings of building a business and focused on a small, core audience. Time will tell if current retention rates hold.


The data on streaming services is from a 3rd party. There's a margin of error in the data collected, as well as the methodology. A true apples-to-apples comparison, even with cooperation from all parties, would be challenging.
