A Tale of Two Retentions -- Two Types of Retention That Most Startups Confuse

Both numbers were correct. They were measuring completely different things.


Two teams at the same company are both tracking retention. Product reports 70%. Marketing reports 42%. A founder asks in a leadership meeting which number is accurate.

Both are.

This isn't a data quality problem. It isn't a methodology dispute. It's a conceptual mismatch: the two teams are using the same word — retention — to describe two fundamentally different measurements. Neither team knows it.

This kind of confusion is more common than you'd think, and more consequential. Retention is one of the most important metrics in any product-led business. Getting it wrong means making decisions about user experience, lifecycle marketing, and growth investment against a false picture of how users actually behave.

The confusion resolves once you understand that retention comes in two forms: Time Since Retention and Time Over Time Retention. They answer different questions, they're used in different contexts, and they produce different numbers from the same user base. Using the wrong one for your business question doesn't just produce a bad metric — it produces a systematically misleading one.


Type 1: Time Since Retention

Time Since Retention tracks the share of users who are still active X days or weeks after they first joined.

The key feature of this measurement is the reference point: acquisition. Every user's clock starts at zero on the day they registered, made their first purchase, or activated for the first time. You then track what percentage of those users come back 1 day later, 7 days later, 30 days later, 90 days later.

The output is a retention curve. Plot time since acquisition on the X-axis and percentage of users still active on the Y-axis. The curve always starts at 100% on Day 0 (by definition — all users are active on the day they were acquired) and declines from there.


What the curve looks like

The typical retention curve for a consumer product drops sharply in the first week. A large portion of users who sign up never return after their first session. This is normal and expected. The curve then flattens as the "loyal core" emerges — users who've made the product a habit.

The key diagnostic signals are:

  • The steepness of the early drop: How many users return after Day 1, Day 3, Day 7? A very steep drop suggests an activation problem — users aren't experiencing the core value proposition quickly enough.

  • The height of the asymptote: Where does the curve eventually flatten out? This plateau represents your long-term retention floor. A higher floor means a healthier product.

  • Cohort differences: If you overlay retention curves for users acquired in different months or through different channels, divergences reveal which cohorts are most valuable — and which acquisition sources bring high-quality vs. low-quality users.


Calculating it correctly

There's a denominator management issue that catches many teams off guard. When calculating Time Since retention at Day 30, your denominator should only include users who have been registered for at least 30 days. Including users who registered last week in a "30-day retention" calculation inflates the denominator artificially and suppresses the metric.

On window type: single-day retention (was the user active exactly on Day 7?) is highly volatile and not useful for most products. Rolling 7-day retention (was the user active in the 7 days around their 7-day anniversary?) is smoother and better reflects how users actually engage. Calendar weekly windows (was the user active during Week 1 after acquisition?) work best for products tied to weekly schedules — sports betting, hotel booking, weekly newsletters.
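Both points above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: it assumes a toy schema (a dict of signup dates and a set of per-day activity records), and it computes the volatile single-day variant for clarity. The key line is the denominator filter, which excludes users who haven't yet been registered for n days.

```python
from datetime import date, timedelta

def day_n_retention(signups, activity, n, as_of):
    """Single-day Time Since retention at Day n, measured as of `as_of`.

    signups:  dict mapping user_id -> acquisition date
    activity: set of (user_id, date) pairs, one per day the user was active
    """
    # Denominator management: only count users registered for at least
    # n days; including younger users artificially suppresses the metric.
    mature = [u for u, d0 in signups.items() if d0 + timedelta(days=n) <= as_of]
    if not mature:
        return None
    retained = [u for u in mature
                if (u, signups[u] + timedelta(days=n)) in activity]
    return len(retained) / len(mature)

signups = {"u1": date(2024, 1, 1),   # mature, active on Day 7
           "u2": date(2024, 1, 10),  # mature, not active on Day 7
           "u3": date(2024, 1, 28)}  # too new: excluded from the denominator
activity = {("u1", date(2024, 1, 8))}

r = day_n_retention(signups, activity, 7, as_of=date(2024, 2, 1))
print(r)  # 0.5 -- u3 is excluded, so the denominator is 2, not 3
```

A rolling 7-day variant would replace the single anniversary-date lookup with a membership check over a window of dates around the anniversary, which smooths out the day-to-day volatility described above.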


When to use Time Since Retention

  • Evaluating the quality of new user cohorts from different acquisition channels

  • A/B testing onboarding flows or early-product changes for newly acquired users

  • Tracking product health after a major launch or feature change affecting new users

  • Building subscription or SaaS LTV models that depend on projected churn curves


Type 2: Time Over Time Retention

Time Over Time Retention tracks the share of users active in one period who are also active in the adjacent period.

The key feature here is the reference point: calendar time. You're not tracking any individual user's journey from acquisition. You're asking: of all the users who were active last week, how many came back this week?


The calculation

Period 1 active users = 100
Of those 100, Period 2 active users = 60
Week-over-Week retention = 60%

This metric doesn't care when users were acquired. A user who registered two years ago and a user who registered last month are treated identically — as long as they were active in Period 1.
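With activity represented as sets of user IDs per period, the calculation is a single set intersection. A minimal sketch mirroring the worked example above (the user-ID scheme is invented for illustration):

```python
def period_over_period_retention(p1_active, p2_active):
    """Share of Period 1's active users who were also active in Period 2."""
    if not p1_active:
        return None
    return len(p1_active & p2_active) / len(p1_active)

# Mirrors the worked example: 100 active last week, 60 of them return.
last_week = {f"u{i}" for i in range(100)}
this_week = {f"u{i}" for i in range(40, 140)}  # 60 retained + 40 newly active

wow = period_over_period_retention(last_week, this_week)
print(wow)  # 0.6
```

Note that the 40 users who are active this week but weren't last week don't affect the metric at all — they belong in the Returned and Adopted states described below, not in the retention numerator or denominator.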


The MECE view: four user states

One of the most useful ways to decompose Time Over Time analysis is to classify every user into one of four mutually exclusive states for any given period:

  • Retained: active in both Period 1 and Period 2 — these are your core users, the engine of the business

  • Lapsed: active in Period 1, not in Period 2 — users who disengaged; a source of concern and a target for re-engagement

  • Returned: not active in Period 1, but active in Period 2, having been active in some earlier period — users who came back; a sign of product magnetism or successful win-back campaigns

  • Adopted: active in Period 2 for the first time — new users entering the active base

These four states are MECE: mutually exclusive (no user can be in two states simultaneously) and collectively exhaustive (every user falls into exactly one). That means your period-over-period active user count is always:

Period 2 Active Users = Retained + Returned + Adopted

A healthy product tends to show a stable or growing Retained base, moderate Lapsed and Returned flows, and positive net Adoption. When Lapsed consistently exceeds Returned plus Adopted, the active user base is declining.
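The four-state decomposition translates directly into set arithmetic. This sketch assumes you can produce three sets — Period 1 actives, Period 2 actives, and users active in any period before Period 1 (needed to separate Returned from Adopted); the example data is hypothetical:

```python
def four_states(p1_active, p2_active, active_before_p1):
    """Classify users into the four MECE states for Period 2.

    active_before_p1: users active in any period earlier than Period 1,
    used to tell Returned (back after a gap) apart from Adopted (brand new).
    """
    retained  = p1_active & p2_active
    lapsed    = p1_active - p2_active
    came_back = p2_active - p1_active
    returned  = came_back & active_before_p1  # re-engaged after a gap
    adopted   = came_back - active_before_p1  # active for the first time
    return retained, lapsed, returned, adopted

p1 = {"a", "b", "c"}
p2 = {"a", "b", "d", "e"}
history = {"d"}  # d was active in some period before Period 1

retained, lapsed, returned, adopted = four_states(p1, p2, history)
print(retained, lapsed, returned, adopted)

# The identity from the text holds: Retained + Returned + Adopted = Period 2 actives
assert retained | returned | adopted == p2
```

Because the states are built from disjoint set differences, the MECE property — no overlaps, nothing missed — holds by construction.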


When to use Time Over Time Retention

  • Tracking the overall stickiness of an established product over time

  • A/B testing changes that affect the existing user base (not just new users)

  • Executive and investor reporting on business health trends

  • Building churn prediction models and targeting CRM campaigns at users in the Lapsed state


Why Using the Wrong One Produces Bad Conclusions

Scenario A: You use Time Since for an existing-user question

Your product team ships a new notification feature and wants to know if it increased weekly return rates. They measure Day 7 retention on newly acquired users only.

The problem: the feature affects all users, but the measurement only captures new users. If the feature primarily re-engages existing users who had been lapsing, Time Since retention won't show it. You'd conclude the feature had no impact, when in fact it moved the needle meaningfully for 80% of your user base.

Time Over Time retention — comparing weekly active rates before and after the feature shipped, across all users — would have caught this.


Scenario B: You use Time Over Time for a new user quality question

Your growth team is testing two acquisition channels — paid search and influencer marketing. They want to know which brings in higher-quality users. They look at Week-over-Week retention for all users.

The problem: Time Over Time retention lumps all users together. It can't tell you whether the users from Channel A retained at a higher rate than users from Channel B, because it doesn't segment by acquisition source or cohort.

Time Since retention — plotting separate retention curves for users by acquisition channel — would immediately reveal whether influencer-acquired users have a stronger or weaker 30-day retention curve than paid search users.


Three Ways to Use Retention in Your Business

A/B Testing

Use Time Since when your experiment targets newly acquired users. Compare the retention curves of the test and control cohorts from their respective Day 0s. A higher asymptote in the test group means the feature improves long-term retention for new users.

Use Time Over Time when your experiment targets existing users. Control for prior-period engagement to isolate the treatment effect on return behavior.


Gap Analysis and Revenue Modeling

Convert a retention improvement to a revenue impact. If Week-over-Week retention is currently 60% and improving it to 65% would retain an additional 5% of your weekly active users, how much is that worth?

The math: if your WAU is 100,000 and your average revenue per weekly active user is $2, a 5-percentage-point improvement in retention generates 5,000 additional retained users per week × $2 = $10,000 in additional weekly revenue. That's the business case for your retention investment.
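The same arithmetic, written out as a small helper so the assumptions are explicit. As in the text, this assumes each additional retained user contributes the average weekly revenue per active user — a simplification, since marginal retained users may monetize below average:

```python
def weekly_revenue_uplift(wau, current_retention, target_retention, arpu_weekly):
    """Extra weekly revenue from lifting period-over-period retention.

    Assumes each additional retained user contributes the average
    weekly revenue per active user (a deliberate simplification).
    """
    extra_retained = wau * (target_retention - current_retention)
    return extra_retained, extra_retained * arpu_weekly

users, revenue = weekly_revenue_uplift(100_000, 0.60, 0.65, 2.0)
print(users, revenue)  # roughly 5,000 extra retained users, ~$10,000/week
```

Running the business-case numbers this way makes the sensitivity obvious: the uplift scales linearly with both the retention gap and ARPU, so the same 5-point improvement is worth five times as much at $10 ARPU.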


Churn Prediction

At the business level: historical Time Since retention curves directly inform LTV and subscriber projections for subscription and SaaS businesses.

At the user level: Time Over Time retention flags users who are entering the Lapsed state — their behavior signals are weakening. These users are ideal targets for personalized re-engagement campaigns, offers, or proactive outreach — before they fully churn rather than after.


Which Type for Which Question?

  • Are new users from this campaign sticking? → Time Since

  • Is our product getting stickier overall? → Time Over Time

  • Did this onboarding change improve new user retention? → Time Since (A/B)

  • Did this feature bring existing users back more often? → Time Over Time (A/B)

  • What's our subscriber LTV? → Time Since (churn curve)

  • Who should we target with a win-back campaign? → Time Over Time (Lapsed segment)


Precision in Retention Is a Competitive Advantage

Most companies track "retention" as a single number. The best product organizations treat it as a multi-layered diagnostic — measuring both the new-user health (Time Since) and the steady-state stickiness (Time Over Time) of their product, using each to answer the specific questions it's designed for.

That precision doesn't require more infrastructure. It requires clearer thinking about what question you're trying to answer before you open a SQL query. Getting it right means your product decisions are made against an accurate picture of how users actually behave — which is, ultimately, the whole point.



The complete retention metrics framework — Time Since, Time Over Time, churn modeling, and A/B test design — is covered in The Data Strategist course.
