The Four First Principles of Good Metrics (Most Startups Violate at Least Two)

Having metrics and having good metrics are two very different things.
Every startup tracks some numbers. Founders know their revenue. Product managers watch their activation rates. Growth teams report on acquisition. There are spreadsheets and dashboards and weekly Slack messages with screenshots of charts.
But having metrics is not the same as having a metrics framework built on first principles. Most of the numbers that organizations track are selected because they're easy to pull, not because they drive decisions. They're reported in ways that can be selectively interpreted. They're defined differently by different teams. They don't cover the full scope of what needs to be measured.
The result is a metrics practice that feels rigorous — there are dashboards, after all — but doesn't actually hold the business accountable to anything.
A good metrics framework operates like a system of checks and balances. It doesn't just confirm what you hope to see. It surfaces the things you'd rather not know, while you still have time to act.
The framework that achieves this is built on four principles: Relevant, Precise, Aligned, and Comprehensive.
Principle 1: Relevant
A metric is relevant if it directly supports the company mission — tracing a clear line from what you measure to why it matters. In practice, that line runs through OKRs: company Objectives cascade into department-level Key Results, which cascade into team-level KPIs. Every metric should sit somewhere in that chain.
Consider Medium's Personalization team. The company mission is to empower writers to share their best work. The team's objective is to show users relevant and inspiring articles. The KPI that benchmarks progress is view-to-click ratio — the percentage of article previews that users click through to read. That's a highly relevant metric: there's a direct, defensible link between the number and the mission.
Now apply the reverse test to your own metrics: if you removed this metric from your reporting, would any decision change? If the answer is no — if it's just there because someone thought it was interesting, or because it was easy to add to the dashboard — it's not relevant. It's noise.
The most common relevance failure isn't tracking bad metrics. It's tracking too many metrics that are only loosely related to the mission, which dilutes focus and creates the cognitive overhead of maintaining a portfolio that nobody actually uses.
Principle 2: Precise
Precision is about consistency — in how a metric is defined, calculated, presented, and discussed. A metric can be entirely relevant to your mission and still be useless if different people compute it differently, or describe it differently to different audiences.
Precision has four components.
Defining: Write down exactly what the metric is. Store it somewhere everyone can reference. "Retention" means nothing on its own. "Retention is the percentage of customers who make a second purchase, on a different day, within 90 days of their first purchase" is a definition.
Querying: Ensure that the calculation is consistent across every analyst and every tool. The same metric should produce the same number whether it's queried from your data warehouse or pulled from a SaaS dashboard.
Presenting: Provide enough context that your audience can't be misled. A number without a denominator, a time period, or a comparison to baseline is an invitation to misinterpretation.
Discussing: Couple every business claim with a specific metric and its definition. When you make a claim about performance, the claim should be falsifiable.
The contrast between imprecise and precise language is stark:
❌ "Retention is the percentage of customers that come back"
✅ "Retention is the percentage of customers that make a second purchase (on a different day) within 90 days of their first purchase"
❌ "Our test feature should perform much better"
✅ "Our test feature should drive a 2% lift in pageviews per session without negatively impacting engagement or retention"
❌ "This was our best campaign of the year!"
✅ "With a similar budget to other campaigns, our CTR was 5x the yearly average"
❌ "Engagement has been much stronger"
✅ "DAU/MAU has increased 30% since the redesign"
Each imprecise version is defensible in isolation. Each precise version is falsifiable — which means it's actually useful for making decisions.
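That precise retention definition is concrete enough to implement directly, which is the real test of a definition. A minimal Python sketch, assuming purchase records arrive as (customer_id, purchase_date) pairs (the data shape and function name are illustrative, not from the original):

```python
from datetime import date

def retention_rate(purchases):
    """Percentage of customers who make a second purchase, on a
    different day, within 90 days of their first purchase.

    `purchases` is an iterable of (customer_id, purchase_date) pairs.
    """
    days_by_customer = {}
    for customer_id, d in purchases:
        days_by_customer.setdefault(customer_id, set()).add(d)

    retained = 0
    for days in days_by_customer.values():
        first = min(days)
        # "Second purchase, on a different day, within 90 days":
        # any purchase strictly after the first day, within the window.
        if any(0 < (d - first).days <= 90 for d in days):
            retained += 1
    return 100 * retained / len(days_by_customer)

purchases = [
    ("a", date(2024, 1, 1)), ("a", date(2024, 2, 15)),  # retained
    ("b", date(2024, 1, 1)), ("b", date(2024, 1, 1)),   # same day: not retained
    ("c", date(2024, 1, 1)),                            # one purchase: not retained
]
print(retention_rate(purchases))  # one of three customers retained (~33.3%)
```

Note how every ambiguity in the vague version ("come back") becomes an explicit condition in code: the different-day rule is the `0 <` comparison, the window is the `<= 90`.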
Principle 3: Aligned
You can have the right metrics, precisely defined — and still have a misalignment problem. Alignment means that everyone in the organization understands and uses metrics the same way.
There are three places misalignment creeps in.
Source misalignment happens when the same metric comes from different data sources. Consider a typical Series C startup: they might be running Amplitude for product events, customer.io for CRM events, Segment as a CDP, Google Analytics for marketing, BigQuery for analysis, Tableau for reporting, and Google Sheets for ad hoc work. Every one of these platforms has its own definition of an "active user," its own session logic, its own attribution model. Pull "monthly active users" from three of these tools and you'll get three different numbers — all technically correct, none of them the same.
Calculation misalignment happens when two analysts computing the same metric from the same underlying data use different logic. Sally on the product team and Sue on the marketing team both query BigQuery. Sally uses a 28-day window; Sue uses a 30-day window. Sally excludes internal users; Sue doesn't. Product reports 12K MAU. Marketing reports 14K MAU. Neither is wrong. Both erode trust.
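The Sally-and-Sue divergence is mechanical and easy to reproduce. A toy sketch with synthetic events (the event shape is invented for illustration; the 28- vs 30-day windows and the internal-user filter come from the example above):

```python
from datetime import date, timedelta

# Synthetic event log: (user_id, event_date, is_internal_user)
events = [("u%d" % i, date(2024, 3, 1) + timedelta(days=i % 30), i % 10 == 0)
          for i in range(100)]

def mau(events, as_of, window_days, exclude_internal):
    """Distinct active users in the trailing window ending at `as_of`."""
    cutoff = as_of - timedelta(days=window_days)
    return len({uid for uid, d, internal in events
                if d > cutoff and not (exclude_internal and internal)})

as_of = date(2024, 3, 31)
sally_mau = mau(events, as_of, window_days=28, exclude_internal=True)
sue_mau = mau(events, as_of, window_days=30, exclude_internal=False)

# Same data, same metric name, two defensible calculations,
# two different numbers.
print(sally_mau, sue_mau)
```

Neither calculation is wrong; the point is that "MAU" is underdetermined until the window and the exclusions are written down.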
Communication misalignment happens when accurate data gets interpreted or presented inconsistently. All the SQL is correct, the dashboard is accurate, but a Product Lead has been sharing retention numbers using a definition they've slightly misunderstood — and that misunderstanding has now circulated company-wide. By the time anyone catches it, the error is embedded in three board decks and a fundraising pitch.
The fix is organizational as well as technical. Bottom-up alignment means building person-to-person and team-to-team consistency through documented definitions and shared tooling. Top-down alignment means choosing an organizational structure — centralized, decentralized, or hub-and-spoke analytics teams — that deliberately manages the tension between consistency and domain expertise.
Principle 4: Comprehensive (MECE)
The fourth principle comes from McKinsey's framework for structured thinking: Mutually Exclusive, Collectively Exhaustive. Applied to metrics, it means your portfolio should cover all important dimensions of performance, with no meaningful gaps and no redundant overlap.
Here's why this matters in practice. Imagine your product team runs an experiment. The test feature drives up NPS. Promising. But retention drops. And page load time increases. If your dashboard only shows NPS, you ship the feature. If your dashboard shows all three — and they're in tension with each other — you have a real conversation before making a harmful decision.
A comprehensive metrics framework acts as a system of checks and balances. Every metric should have something that can contradict it. Engagement up? Does retention follow, or is it just a short-term sugar spike? Acquisition up? Does conversion quality hold, or are you attracting users who churn faster?
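One lightweight way to enforce this is to record, for each headline metric, which counter-metrics could contradict it, and flag any that have none. A sketch with a hypothetical portfolio (all metric names are illustrative):

```python
# Each headline metric is paired with the counter-metrics that
# could catch a false positive.
portfolio = {
    "nps": ["retention_90d", "p95_page_load_ms"],
    "engagement_dau_mau": ["retention_90d"],
    "acquisition_signups": ["activation_rate", "retention_90d"],
    "pageviews_per_session": [],  # no counterbalance: a gap
}

# A flattering number with no counter-metric can hide a deterioration.
unguarded = [metric for metric, counters in portfolio.items() if not counters]
print(unguarded)
```

Reviewing the `unguarded` list during planning is a cheap, repeatable version of the "what could contradict this?" question.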
Comprehensiveness doesn't mean tracking everything. It means that your portfolio, as a whole, covers the full performance story — so that a single flattering number can't hide a deterioration somewhere else.
Auditing Your Current Metrics
If you want to evaluate your current metrics practice against these four principles, the process is straightforward.
List every metric your team tracks regularly. For each one, ask:
Relevant? Can you draw a line from this metric to an OKR? Would removing it change any decision?
Precise? Does it have a written definition? Is the calculation consistent across tools and analysts? Is context always provided when it's presented?
Aligned? Does everyone who references this metric mean the same thing by it? Do they pull it from the same place?
Comprehensive? For each metric, is there a counterbalancing metric that would catch a false positive? Are there meaningful dimensions of performance that your portfolio doesn't cover at all?
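The four audit questions collapse naturally into a per-metric checklist that can be filled in during a review. A minimal sketch (the boolean fields are a simplified, illustrative encoding of the questions above):

```python
from dataclasses import dataclass

@dataclass
class MetricAudit:
    name: str
    tied_to_okr: bool         # Relevant: traces to an OKR, changes a decision
    written_definition: bool  # Precise: documented, consistently calculated
    single_source: bool       # Aligned: everyone pulls from the same place
    has_counter_metric: bool  # Comprehensive: something can contradict it

    def failures(self):
        checks = {
            "relevant": self.tied_to_okr,
            "precise": self.written_definition,
            "aligned": self.single_source,
            "comprehensive": self.has_counter_metric,
        }
        return [principle for principle, ok in checks.items() if not ok]

# Hypothetical audit of two metrics:
audit = [
    MetricAudit("retention_90d", True, True, True, True),
    MetricAudit("nps", True, False, True, False),
]
for m in audit:
    print(m.name, m.failures() or "passes all four")
```

Sorting the failing metrics by how widely they're reported gives you the prioritization described below: fix the board-deck metrics first.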
Most organizations find that they pass the Relevant test reasonably well — they're tracking things that matter. They struggle more with Precision (definitions are loose, language is vague) and Alignment (different teams use different sources). And they almost universally have gaps in Comprehensiveness — the portfolio has been assembled incrementally rather than designed to cover the full performance landscape.
Prioritize fixes by business impact. A metric that's imprecisely defined and used in every board presentation is worth fixing before a metric that lives in one analyst's notebook. Start where the misalignment does the most damage.
Four Principles, One Goal
The goal of a metrics framework isn't more numbers. It's better decisions.
These four principles are the guardrails that separate a metrics practice that creates clarity from one that creates noise. Not every organization needs to achieve all four simultaneously. But every organization needs to be honest about which ones it's violating — because the downstream cost of imprecise, misaligned, irrelevant, or incomplete metrics is decisions made on a false picture of reality.
Relevance, precision, alignment, comprehensiveness. Apply them systematically and the numbers on your dashboard start to mean something.
The full Metrics Catalog methodology — including templates for defining, querying, and documenting your metrics — is covered in The Data Strategist course.