Startup Metrics That Matter: What to Track After You Launch
Every analytics platform gives you a dashboard full of numbers. Visitors, sessions, bounce rate, pages per session, average session duration. Most of it is noise for an early-stage product.
The question isn't what you can measure — it's what you should measure. Here are the six metrics that actually tell you whether an MVP is working.
Why Most Startup Dashboards Measure the Wrong Things
Vanity metrics are seductive because they go up. More visitors, more sign-ups, more page views — all feel like progress. But almost none of them tell you whether your product is delivering value.
The test for a useful metric: does it change how you'd make a decision? If your bounce rate goes from 60% to 55%, what do you do differently? Usually nothing. If your Day 7 retention drops from 30% to 15%, that's an emergency. One of these is a useful metric.
The Six Metrics That Matter at MVP Stage
1. Activation Rate
Definition: The percentage of new sign-ups who complete the "aha moment" — the first action that represents genuine product value.
Why it matters: Activation is the bridge between acquiring a user and delivering value to them. Most MVPs have an activation problem: users sign up, look around, and leave before they experience what makes the product worth using.
How to measure: Define your activation event specifically (e.g., "created first project," "connected first account," "made first booking"). Track: Signed Up → Activated.
Benchmark: Highly product-dependent, but under 30% activation usually signals an onboarding problem worth addressing immediately.
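The calculation itself is simple. Here's a minimal sketch in Python, assuming you can export (user_id, event_name) pairs from your analytics tool — the event names ("signed_up", "created_first_project") are illustrative, not from any specific platform:

```python
# Sketch: activation rate from an exported event log.
# Assumes each record is a (user_id, event_name) pair for one time window.

def activation_rate(events):
    signed_up = {u for u, e in events if e == "signed_up"}
    activated = {u for u, e in events if e == "created_first_project"}
    if not signed_up:
        return 0.0
    # Only count activations by users who signed up in this window.
    return len(activated & signed_up) / len(signed_up)

events = [
    ("u1", "signed_up"), ("u1", "created_first_project"),
    ("u2", "signed_up"),
    ("u3", "signed_up"), ("u3", "created_first_project"),
]
print(activation_rate(events))  # 2 of 3 sign-ups activated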
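The calculation itself is simple. Here's a minimal sketch in Python, assuming you can export (user_id, event_name) pairs from your analytics tool — the event names ("signed_up", "created_first_project") are illustrative, not from any specific platform:

```python
# Sketch: activation rate from an exported event log.
# Assumes each record is a (user_id, event_name) pair for one time window.

def activation_rate(events):
    signed_up = {u for u, e in events if e == "signed_up"}
    activated = {u for u, e in events if e == "created_first_project"}
    if not signed_up:
        return 0.0
    # Only count activations by users who signed up in this window.
    return len(activated & signed_up) / len(signed_up)

events = [
    ("u1", "signed_up"), ("u1", "created_first_project"),
    ("u2", "signed_up"),
    ("u3", "signed_up"), ("u3", "created_first_project"),
]
print(activation_rate(events))  # 2 of 3 sign-ups activated
```

In this example, two of three sign-ups reached the activation event, so the rate is about 67% — comfortably above the 30% warning line.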
2. Day 7 and Day 30 Retention
Definition: The percentage of users who signed up in a given period and are still active 7 (or 30) days later.
Why it matters: Retention is the single most predictive metric for long-term product success. You can't grow a leaky bucket. If users aren't coming back, acquisition only masks the churn.
How to measure: Cohort analysis in PostHog or Mixpanel. Group users by sign-up week; measure what percentage had any active session 7 and 30 days later.
Benchmarks (B2B SaaS): Day 7 above 25% is reasonable; Day 30 above 15% suggests real retention. Consumer products typically need higher retention to be viable.
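If you want to sanity-check what your analytics tool reports, the cohort math looks roughly like this — a sketch assuming you can export each user's sign-up date and set of active dates; "retained" here means any session on or after day N, which is one common definition (tools differ on exact windows):

```python
# Sketch: Day-N retention for a cohort.
# signups: {user_id: signup_date}; activity: {user_id: set of active dates}.
from datetime import date, timedelta

def day_n_retention(signups, activity, n):
    retained = 0
    for user, start in signups.items():
        cutoff = start + timedelta(days=n)
        # Count the user as retained if they had any session on/after day n.
        if any(d >= cutoff for d in activity.get(user, ())):
            retained += 1
    return retained / len(signups) if signups else 0.0

signups = {"u1": date(2024, 5, 1), "u2": date(2024, 5, 1)}
activity = {
    "u1": {date(2024, 5, 2), date(2024, 5, 9)},  # came back on day 8
    "u2": {date(2024, 5, 3)},                    # never returned after day 2
}
print(day_n_retention(signups, activity, 7))  # 0.5
```

Here only one of the two May 1 sign-ups was active a week later, so Day 7 retention for that cohort is 50%.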
3. Core Action Rate
Definition: The percentage of active users who perform the primary action your product is designed for, in a given period.
Why it matters: This is your engagement signal. If users are logging in but not using the product's core capability, you have a UX or value communication problem.
Example: For a project management tool, core action = created or updated a task. For a marketplace, core action = browsed listings or made a booking.
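As a calculation it's just the overlap between two user sets. A minimal sketch, assuming you have this period's active users and the subset who performed the core action:

```python
# Sketch: share of active users who performed the core action this period.

def core_action_rate(active_users, core_action_users):
    active = set(active_users)
    if not active:
        return 0.0
    # Intersect so stray IDs in the core-action list can't inflate the rate.
    return len(active & set(core_action_users)) / len(active)

print(core_action_rate(["a", "b", "c", "d"], ["a", "c"]))  # 0.5
```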
4. Net Promoter Score (NPS) — Simple Version
Definition: Ask users "On a scale of 0–10, how likely are you to recommend this to a colleague?" The percentage of Promoters (9–10) minus the percentage of Detractors (0–6) is your NPS, a number from −100 to 100.
Why it matters: NPS is a leading indicator of word-of-mouth growth and customer satisfaction. It's also a forcing function — a low score tells you users are not yet delighted, even if they're still using the product.
How to measure: A single-question survey sent to users who've been active for at least 2 weeks. Typeform or a simple in-app modal works at MVP scale. Aim to survey 20–30 users before drawing conclusions.
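The scoring from raw survey answers is a one-liner's worth of arithmetic — a sketch assuming you have the 0–10 responses as a plain list:

```python
# Sketch: Net Promoter Score from a list of 0-10 survey answers.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6; 7-8 are passives
    return 100 * (promoters - detractors) / len(scores)

scores = [10, 9, 9, 8, 7, 6, 4, 10, 9, 2]
print(nps(scores))  # 5 promoters, 3 detractors, 10 responses -> 20.0
```

Note how the passives (7–8) count toward the denominator but neither bucket, which is why a wall of 8s still yields an NPS of zero.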
5. Monthly Recurring Revenue (MRR) and Churn
Definition: MRR is the predictable revenue you can count on each month from active subscriptions. Churn is the percentage of that MRR lost each month from cancellations or downgrades.
Why it matters: For any product with a subscription component, these are the financial signals that matter. Revenue momentum (MRR growing) tells you market demand exists. High churn tells you you're filling a leaky bucket.
Simple calculation:
- MRR = (number of subscribers) × (average monthly revenue per subscriber)
- Monthly churn rate = (MRR lost to cancellations) / (MRR at start of month)
Benchmark: Under 3% monthly churn is healthy for early-stage SaaS. Above 8% is typically unsustainable.
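The two formulas above translate directly into code. A sketch with illustrative numbers:

```python
# Sketch: MRR and monthly churn rate, per the formulas above.

def mrr(subscribers, avg_monthly_revenue):
    return subscribers * avg_monthly_revenue

def monthly_churn_rate(mrr_lost, mrr_at_start):
    return mrr_lost / mrr_at_start if mrr_at_start else 0.0

start_mrr = mrr(40, 25)                      # 40 subscribers at $25/mo
print(start_mrr)                             # 1000
print(monthly_churn_rate(50, start_mrr))     # lost $50 -> 0.05 (5%)
```

In this example, 5% monthly churn sits in the worrying middle of the benchmark range — above the healthy 3% line but below the unsustainable 8%.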
6. Time to Value
Definition: How long it takes from sign-up to a user completing the activation event (their first meaningful use of the product).
Why it matters: The faster a user gets to value, the more likely they are to activate, retain, and recommend. A long time-to-value (days or weeks) is often a sign of a friction-heavy onboarding experience.
How to measure: In your analytics tool, measure the median time between "Signed Up" and your activation event.
What to do with it: If median time-to-value is over 24–48 hours for a consumer product or 3–5 days for a B2B tool, look for where time is being lost. Usually it's onboarding friction or a confusing first-run experience.
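If your tool doesn't report this directly, the median is easy to compute from raw timestamps — a sketch assuming you can export sign-up and activation times per user:

```python
# Sketch: median hours from sign-up to activation, over users who activated.
from datetime import datetime
from statistics import median

def median_time_to_value(signup_times, activation_times):
    deltas = [
        (activation_times[u] - signup_times[u]).total_seconds() / 3600
        for u in activation_times if u in signup_times
    ]
    return median(deltas) if deltas else None

signup_times = {
    "u1": datetime(2024, 5, 1, 9, 0),
    "u2": datetime(2024, 5, 1, 10, 0),
    "u3": datetime(2024, 5, 2, 12, 0),   # never activated
}
activation_times = {
    "u1": datetime(2024, 5, 1, 10, 0),   # 1 hour to value
    "u2": datetime(2024, 5, 3, 10, 0),   # 48 hours to value
}
print(median_time_to_value(signup_times, activation_times))  # 24.5
```

Use the median rather than the mean: a handful of users who activate weeks later would otherwise drag the average far from the typical experience.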
How to Set Up a Simple Weekly Metrics Review
Once a week, spend 10 minutes reviewing:
- New sign-ups this week (acquisition)
- Activation rate for this week's cohort (are new users activating?)
- Day 7 retention for last week's cohort (are last week's users returning?)
- Core action count (is the product being used?)
- Any new MRR or churn (what did revenue do?)
This review should surface one clear priority: the metric with the most room for improvement and the clearest lever to pull. Then the next sprint addresses that one thing.
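One way to make "pick the biggest gap" mechanical is to compare each metric against a target and flag the largest shortfall. A sketch with illustrative targets (the numbers here are placeholders, not benchmarks — set your own):

```python
# Sketch: flag the weekly priority as the metric furthest below its target.
# Works when metrics share a scale (fractions); targets are illustrative.

def weekly_priority(current, targets):
    gaps = {m: targets[m] - current[m]
            for m in targets if current[m] < targets[m]}
    return max(gaps, key=gaps.get) if gaps else None

current = {"activation": 0.22, "day7_retention": 0.24}
targets = {"activation": 0.30, "day7_retention": 0.25}
print(weekly_priority(current, targets))  # activation is 8 points short
```

This is deliberately crude — it assumes comparable scales and ignores which lever is easiest to pull — but it keeps the weekly review honest about where the biggest problem actually is.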
Simple, repeatable, and directly connected to whether the product is working. That's all a metrics review needs to be.
If you're launching soon and want to make sure the right tracking is in place from day one, let's talk.