Why Your Metrics Don't Match: A Guide to Data Misalignment

The most damaging data problem isn't bad data — it's data that's almost right, from two different sources.
Picture this scene. It's an executive review. The product team presents their monthly active user count: 12,000. Marketing presents theirs: 15,000. Finance pulls from a different export: 11,000.
The executive looks at all three numbers and asks the only reasonable question: "Which one is right?"
Nobody has a confident answer.
This is one of the most corrosive things that can happen to a data practice. Not because the data is necessarily wrong — it might be that all three numbers are technically correct given different assumptions. But the moment an executive stops trusting the data, every future presentation starts with a credibility deficit. The data team spends its time defending its methodology instead of producing insights. And decisions start getting made on gut instinct, because gut instinct never argues with itself.
This scenario is entirely preventable. But preventing it requires understanding exactly why it happens.
Three Types of Misalignment
Data misalignment comes in three distinct forms. Each has a different root cause and a different fix. Most organizations suffering from misalignment have all three operating simultaneously.
Type 1: Source Misalignment
Source misalignment happens when the same metric is being calculated from two or more different data sources — and those sources disagree.
This is remarkably easy to create at a growing startup. Here's what a typical Series C data stack looks like:
Product event tracking: Amplitude
CRM and lifecycle events: customer.io
Event centralization: Segment (a customer data platform, or CDP)
Marketing analytics: Google Analytics and Google Ads
Deep analysis: BigQuery and Jupyter Notebooks
Reporting: Tableau
Ad-hoc work: Google Sheets
Presentations: Google Slides
Every one of these platforms can tell you how many "active users" you have. And none of them will agree with each other.
Google Analytics and Amplitude use different session definitions. Google Analytics is cookie-based and doesn't recognize logged-in users the way Amplitude does. Segment routes events, but the transformation rules can differ by destination. Each SaaS tool has its own attribution model, its own filtering logic, and its own timezone defaults.
The result: pull "monthly active users" from any three of these tools and you'll get three different numbers. All of them defensible. None of them the same.
The fix: Designate a single canonical source for each metric — almost always your data warehouse (BigQuery, Snowflake, Redshift), not a SaaS platform dashboard. Document which tool owns which metric in your Metrics Catalog. Align on a single definition of "active user," "session," "acquisition," and every other cross-cutting concept, and enforce it across all reporting.
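A Metrics Catalog entry doesn't need to be elaborate to be useful. Here's a minimal sketch of what one entry might look like if kept as code in version control — every field value here (team names, table names, the specific definition) is a hypothetical example, not a prescription:

```python
# One Metrics Catalog entry, version-controlled so changes are reviewable.
# All names and values below are illustrative placeholders.
monthly_active_users = {
    "owner": "product-analytics",                # team accountable for the definition
    "canonical_source": "bigquery.core.events",  # warehouse table, never a SaaS dashboard
    "definition": (
        "Distinct users who completed at least one meaningful action "
        "in the trailing 28 days, excluding internal email domains."
    ),
    "window": "trailing 28 days",
    "timezone": "UTC",
    "excludes": ["internal email domains", "test accounts"],
}
```

The point is less the format than the contract: one owner, one source, one written definition that every report links back to.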
Type 2: Calculation Misalignment
Calculation misalignment is more insidious, because it can happen even when everyone is pulling from the same source.
Consider Sally and Sue. Sally works on the product analytics team. Sue works on the marketing analytics team. Both have SQL access to the same BigQuery warehouse. Both are trying to answer the same question: how many monthly active users did we have last month?
Sally's query uses a 28-day window. Sue's uses a 30-day window — because "monthly" to her means the full calendar month. Sally excludes users with internal email domains (company employees); Sue includes them because she tracks all accounts. Sally counts a user as "active" if they completed a meaningful action; Sue counts any login event.
Sally produces 12K. Sue produces 14K. Both ran valid SQL. Both are technically correct by their own definition. Neither number matches the other.
This scenario is not hypothetical — it plays out constantly in organizations where SQL access is distributed but metric definitions are not. It's especially common after rapid hiring, when new analysts inherit existing pipelines and inherit the undocumented assumptions embedded in them.
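The Sally-and-Sue divergence is easy to reproduce. Here's a small sketch with made-up event data (hypothetical emails, event names, and dates) showing how two defensible definitions of "monthly active" produce different counts from the same rows:

```python
from datetime import date, timedelta

# Hypothetical raw event log: (user_email, event_name, event_date)
events = [
    ("ana@gmail.com",    "report_created", date(2024, 5, 10)),
    ("ana@gmail.com",    "login",          date(2024, 5, 28)),
    ("bo@gmail.com",     "login",          date(2024, 5, 2)),   # login only, no meaningful action
    ("cy@acme-corp.com", "report_created", date(2024, 5, 15)),  # internal employee
    ("di@gmail.com",     "report_created", date(2024, 4, 29)),  # previous month
]

MONTH_END = date(2024, 5, 31)
MEANINGFUL = {"report_created"}
INTERNAL_DOMAIN = "acme-corp.com"

def sally_mau(events):
    """Trailing 28 days, meaningful actions only, employees excluded."""
    start = MONTH_END - timedelta(days=27)
    return len({
        user for user, name, day in events
        if start <= day <= MONTH_END
        and name in MEANINGFUL
        and not user.endswith(INTERNAL_DOMAIN)
    })

def sue_mau(events):
    """Full calendar month, any event counts, all accounts included."""
    start = date(2024, 5, 1)
    return len({user for user, _, day in events if start <= day <= MONTH_END})

print(f"Sally: {sally_mau(events)}, Sue: {sue_mau(events)}")  # Sally: 1, Sue: 3
```

Both queries are valid; the gap comes entirely from the window, the action filter, and the employee exclusion — exactly the three assumptions that are rarely written down.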
The fix: Centralize metric calculations. A dbt project (data build tool) is the gold standard: it enforces a single transformation layer between raw data and reports, version-controls your metric logic, and makes the calculation explicit and auditable. At minimum, maintain a shared repository of SQL scripts with designated owners, peer review before deployment, and documented definitions for every variable. When a calculation changes, log the change and notify downstream consumers.
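If a full dbt project isn't in place yet, even a single shared function that every team imports captures the core idea: the definition lives in exactly one reviewable place. A minimal sketch, with hypothetical domain names and event schema:

```python
from datetime import date, timedelta

# Illustrative constants — in practice these live alongside the
# Metrics Catalog, not scattered across individual queries.
MEANINGFUL_ACTIONS = {"report_created", "dashboard_shared"}
INTERNAL_DOMAINS = ("@acme-corp.com",)

def monthly_active_users(events, as_of):
    """Canonical MAU: trailing 28 days, meaningful actions, employees excluded.

    `events` is an iterable of (user_email, event_name, event_date).
    This function is the ONE place the definition can change, so a
    change is a reviewed, versioned diff rather than a silent fork.
    """
    start = as_of - timedelta(days=27)
    return len({
        user for user, name, day in events
        if start <= day <= as_of
        and name in MEANINGFUL_ACTIONS
        and not user.endswith(INTERNAL_DOMAINS)
    })
```

Sally and Sue both call `monthly_active_users` instead of re-deriving the logic, and a disagreement between their numbers becomes structurally impossible.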
Type 3: Communication Misalignment
Communication misalignment is the one that organizations underestimate most — because by the time it becomes visible, the damage is already done.
All the SQL is correct. The dashboard is accurate. The warehouse query is consistent across teams. And then a Product Lead presents retention numbers to the executive team using a definition they've subtly misunderstood — perhaps they've been tracking Day 30 retention but describing it as "monthly retention," which technically implies a different window. The exec takes the number at face value and mentions it in a board call. The board member references it in a follow-up question to the CFO, who pulls the number themselves and gets something different.
By the time anyone notices the discrepancy, the wrong definition has been embedded in three decks and a fundraising narrative.
Communication misalignment is hard to catch because data conversations happen mostly verbally and in real time. Executives rarely have the bandwidth to interrogate definitions. People assume alignment because they're using the same vocabulary — "retention," "engagement," "conversion" — without realizing they've diverged on the underlying meaning.
The fix: Make definitions visible in the communication itself. Every report should include the definition of the metric it's presenting, not just the number. Executive presentations should include a one-slide metric glossary. When presenting a change in a metric — up, down, or flat — restate the definition before explaining the movement. This feels repetitive until the first time it prevents a misunderstanding in front of a board member.
The Deeper Problem: How Your Analytics Team Is Structured
The three types of misalignment are proximate causes. But the root cause is often organizational: how the analytics function is structured determines how easy it is to achieve and maintain alignment.
Centralized analytics teams — all analysts reporting to a single function, often under a Chief Data Officer or VP of Analytics — have strong alignment by design. Everyone uses the same tools, the same definitions, the same calculation methodology. The trade-off is domain depth: centralized analysts may lack the deep familiarity with a specific business unit that produces genuinely useful insights.
Decentralized analytics teams — analysts embedded directly within product, marketing, finance, and operations — have strong domain expertise and high responsiveness. The trade-off is alignment: each embedded analyst naturally adapts to the needs and preferences of their business unit, and over time, definitions drift. You end up with five different definitions of "conversion" and three different sources of truth for revenue.
Hub-and-spoke analytics teams — a small central function that owns shared infrastructure, standards, and tooling, plus dedicated analysts within business units — try to get the best of both. The central team defines the canonical metrics and maintains the data platform. The embedded analysts use that infrastructure but bring domain knowledge to bear on analysis. This is the most common structure at Series B–D companies, and when the governance is strong, it works well.
The choice of model isn't permanent. As companies grow, they often migrate from centralized to hub-and-spoke to partly decentralized. What matters is that the transition is managed deliberately — not that misalignment is allowed to accumulate by default as the organization evolves.
The Cost of Not Fixing This
Misalignment isn't just an inconvenience. It has compounding costs.
When executives stop trusting data, decisions revert to intuition. When decisions are made on intuition, data loses its seat at the table — and the investment in analytics infrastructure stops paying returns.
When analysts spend half their time reconciling conflicting numbers, they're not spending that time on analysis that would actually help the business. The opportunity cost of a good analyst debugging a metric discrepancy is a genuine insight that never gets produced.
When cross-functional projects depend on shared baselines that don't exist, they stall. Growth teams and product teams can't run coordinated experiments if they're not starting from the same definitions.
And when new hires join — analysts, product managers, executives — they have to learn an oral tradition rather than a documented system. Onboarding takes longer. Errors propagate. The tribal knowledge walks out the door with every resignation.
A Prevention Checklist
Misalignment is a debt problem: small inconsistencies accumulate into large disputes. The best time to address it is before the executive review where three numbers don't match. A healthy practice can check off every item below:
Every key metric has a written definition accessible to all teams
Each metric has a designated owner and a canonical source
SQL scripts are version-controlled and peer-reviewed before deployment
Reports include metric definitions, not just numbers
Presentations with metrics include a definition appendix or glossary
New hire onboarding includes a session on the company's core metric definitions
When a metric changes — in value or in definition — change management is documented and communicated
Alignment Is a Practice, Not a Project
You won't achieve perfect metric alignment in a single sprint. You won't fix it by writing a memo. It's a practice — maintained through documentation, governance, communication, and the organizational discipline to revisit definitions as the business evolves.
But every definition you document, every calculation you centralize, and every presentation you clarify is an investment that compounds. The payoff is an executive team that trusts the data. And a data team that earns — and keeps — a seat at the table.
The full Metrics Catalog methodology and analytics team design frameworks are covered in The Data Strategist course.