Customer Feedback Surveys: NPS, CSAT, PMF, and What Founders Should Actually Run
Customer Feedback Survey Strategy for Your New SaaS
Goal: Run a small number of well-designed surveys that produce actionable insight — and ignore the survey instruments that produce vanity scores or noise. Build a feedback program that catches churn risks early, identifies expansion opportunities, validates product-market fit signals, and preserves customer trust by not over-surveying.
Process: Follow this chat pattern with your AI coding tool, such as Claude or v0.app. Pay attention to the notes in [brackets] and replace the bracketed text with your own content.
Timeframe: First survey live in 1 day. Sequenced cadence (PMF + CSAT + NPS) running in 1 week. First quarterly review baked into the calendar from launch onward. Sustained insight loop visible by month 3.
Why Most Founder Survey Programs Are Theater
Three failure modes hit founders the same way:
- The "let's run NPS quarterly because everyone does" plan. Founder ships an NPS survey. Gets 23 responses on 800 sent. The score is +47. Founder feels good and shows the slide to investors. Six months later, churn is climbing and the NPS is still +47 because the only respondents are happy customers (response bias). NPS without follow-up qualitative analysis is a vanity metric pretending to be a leading indicator.
- The 14-question survey blast. Founder writes a "comprehensive" feedback survey covering product satisfaction, feature requests, willingness to recommend, pricing perception, persona, role, and team size. Sends it to all customers. Response rate is 4%; quality is low; respondents skew toward power users; the data is uninterpretable. Fewer questions, more often, segmented carefully — that's what works.
- No follow-up loop. Founder collects feedback, dumps responses into a spreadsheet, never closes the loop with respondents, never publishes what changed. Customers learn that feedback doesn't go anywhere; survey response rates degrade quarter-over-quarter. The data exists, but the trust is gone.
The version that works is structured: pick the right survey for the right job, ask 1-3 questions max, segment by lifecycle stage, follow up qualitatively with anomalies, close the loop publicly, and audit response rates as a leading indicator of trust.
This guide assumes you have already done PostHog Setup (you'll trigger surveys from product events), have an Onboarding Email Sequence (some surveys live in email), and have shipped Customer Support (CSAT lives in support workflows).
The Four Surveys That Actually Matter
Every other survey is noise. These four cover the jobs you actually need data on.
| Survey | When | Purpose | Format | Response goal |
|---|---|---|---|---|
| Sean Ellis PMF | Pre-PMF, after first month using product | Validate product-market fit | 1 question, multiple choice | 30+ responses to read |
| In-product NPS | Quarterly, to engaged users only | Track relationship trajectory | 1 question + open follow-up | 15-25% response rate |
| CSAT after support | After every closed support ticket | Track support quality | 1 question + 1 follow-up | 30%+ response rate |
| Churn / cancel exit | When a customer cancels | Learn why customers leave | 1 multiple-choice + 1 open | 80%+ response rate (it's the cancel form) |
Skip:
- Comprehensive "tell us everything" surveys — they punish customers and produce uninterpretable data
- Off-cycle "feature priority" surveys — these belong in Customer Discovery Interviews, not surveys
- Surveys to free-tier users with no engagement — response is junk
- More than 4 surveys per customer per year — over-surveying erodes trust and response quality
1. Run the Sean Ellis PMF Survey (Pre-PMF Only)
The single best PMF question ever written. Created by Sean Ellis (Dropbox, LogMeIn growth lead). Ask only once per cohort, only to customers who've actually used the product.
Help me set up the Sean Ellis PMF survey for [your product] at [your-domain.com].
The single question:
> "How would you feel if you could no longer use [your product]?"
> - Very disappointed
> - Somewhat disappointed
> - Not disappointed
> - I no longer use [your product]
Plus one optional open follow-up: "What would you use as an alternative?" — only shown to "Somewhat disappointed" and "Not disappointed" responders.
The PMF threshold:
- 40%+ "Very disappointed" = strong product-market fit signal (Sean Ellis's original benchmark)
- 25-39% "Very disappointed" = approaching PMF; understand what's holding back the others
- Below 25% = not yet PMF; positioning or product needs work
Critical setup rules:
1. **Only survey activated users.** A customer who signed up and never used the product can't answer this question meaningfully. Trigger when: user has hit the activation event AND has been active in the last 30 days.
2. **Only survey once per user.** Track who's been asked; never re-survey. Re-asking biases results.
3. **Wait 30+ days post-activation.** Asking too early ("How would you feel..." after one week) catches customers who haven't yet built the workflow around your product.
4. **In-product placement.** Trigger via a small modal on a relevant screen (the dashboard or a feature page where they're engaged). Email surveys for PMF underperform — engaged users in-product give better signal.
5. **Show response counts as a fraction of recipients, not as a fraction of respondents.** "47 of 200 surveyed said Very Disappointed = 23.5%" not "of the 73 respondents, 47 said Very Disappointed = 64%". The latter is misleading — non-responders are a signal too.
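A minimal sketch of the gating in rules 1-3 plus the denominator math in rule 5, assuming a simple user record; the field names (activatedAt, lastActiveAt, pmfSurveyedAt) are placeholders for however your own model tracks activation and activity:

```typescript
interface User {
  id: string;
  activatedAt: Date | null;   // when the user hit the activation event
  lastActiveAt: Date | null;  // most recent product activity
  pmfSurveyedAt: Date | null; // set the first time the survey is shown; never re-ask
}

const DAY_MS = 24 * 60 * 60 * 1000;

function isEligibleForPmfSurvey(user: User, now = new Date()): boolean {
  if (!user.activatedAt || !user.lastActiveAt) return false; // rule 1: activated users only
  if (user.pmfSurveyedAt) return false;                      // rule 2: once per user, ever
  const daysSinceActivation = (now.getTime() - user.activatedAt.getTime()) / DAY_MS;
  const daysSinceActive = (now.getTime() - user.lastActiveAt.getTime()) / DAY_MS;
  return daysSinceActivation >= 30 && daysSinceActive <= 30; // rules 1 and 3
}

// Rule 5: report "Very disappointed" against everyone surveyed, not just respondents.
function veryDisappointedRate(surveyed: number, veryDisappointed: number): number {
  return surveyed === 0 ? 0 : veryDisappointed / surveyed;
}

// veryDisappointedRate(200, 47) -> 0.235, the honest 23.5%
```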
Output:
1. The survey trigger logic in code (PostHog or your tool)
2. The exact question wording with my product name
3. The cohort definition (which users get surveyed)
4. The dashboard for tracking the rolling % "Very disappointed"
5. The qualitative analysis loop: read every "Very disappointed" follow-up answer to build the customer-success playbook; read every "Somewhat disappointed" to find the product gap
Then handle the corner case: if I'm post-PMF (>40% Very disappointed sustained), this survey adds little incremental value. Replace with quarterly NPS instead.
Three principles:
- PMF is a binary state, not a vanity score. Below 40% Very Disappointed, you don't have it; iterate. Above 40%, you have it; protect it. Treat the number as a state check, not a metric to optimize.
- The qualitative answers are the value. The quantitative score tells you if PMF exists; the open-text answers tell you what to fix or amplify. Read every one.
- Stop running this survey once you've crossed the threshold. It's a diagnostic, not a tracking metric. Re-running it post-PMF wastes customer attention.
2. Run In-Product NPS (Post-PMF Only, to Engaged Users)
NPS is the most-misused survey in B2B SaaS. Done right, it's a useful relationship-trajectory indicator. Done wrong, it's vanity theater.
Help me set up in-product NPS for [your product]. Critical: NPS is for established products with a paying customer base. Don't run this pre-PMF (use Sean Ellis instead).
The single question:
> "On a scale of 0-10, how likely are you to recommend [your product] to a friend or colleague?"
Plus one mandatory open follow-up:
- For 9-10 (Promoters): "What's the main reason for your score?"
- For 7-8 (Passives): "What would have made you give a higher score?"
- For 0-6 (Detractors): "What's the most important thing we could improve?"
NPS = % Promoters - % Detractors. Range: -100 to +100. SaaS benchmarks 2026:
- B2B SaaS average: 30-50
- Best-in-class: 60-80
- Below 30: relationship trajectory is weak; investigate
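A minimal sketch of the arithmetic, assuming you store each respondent's raw 0-10 score:

```typescript
type NpsBucket = 'promoter' | 'passive' | 'detractor';

const bucket = (score: number): NpsBucket =>
  score >= 9 ? 'promoter' : score >= 7 ? 'passive' : 'detractor';

// NPS = % Promoters - % Detractors, on a -100 to +100 scale.
function nps(scores: number[]): number {
  if (scores.length === 0) return 0;
  const promoters = scores.filter(s => bucket(s) === 'promoter').length;
  const detractors = scores.filter(s => bucket(s) === 'detractor').length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// nps([10, 9, 8, 6, 10, 3]) -> 3 promoters, 2 detractors out of 6 -> 17
```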
Setup rules:
1. **Quarterly cadence.** Once every 90 days per user, not more. NPS asked monthly degrades response quality and customer trust.
2. **Only to engaged users.** Same definition as PMF: hit activation event AND active in last 30 days. Surveying inactive users gives you "I don't really use this" responses that don't represent your active customer base.
3. **In-product placement.** Modal at session start, dismissible. NOT a blocking interrupt. Customers who dismiss don't count as Detractors — they count as non-responders.
4. **Exclude trial users.** NPS is a paying-customer relationship metric. Trial users haven't committed yet; their score doesn't predict the same things.
5. **Segment results.** A blended NPS hides cohort dynamics. Track NPS separately for:
- Tier (Free / Pro / Business / Enterprise)
- Tenure (< 3 months, 3-12 months, 12+ months)
- Segment (use case / industry / size)
6. **Track trajectory, not absolute score.** A score of 42 last quarter and 38 this quarter means something. A single 42 in isolation tells you nothing. Quarter-over-quarter change is the signal.
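A sketch of rules 5 and 6: per-cohort NPS plus the quarter-over-quarter delta. The response shape mirrors the event payload in the Output list below and is illustrative:

```typescript
interface NpsResponse { score: number; tier: string; tenureDays: number; segment: string; }

const tenureBand = (days: number): string =>
  days < 90 ? '<3mo' : days < 365 ? '3-12mo' : '12mo+';

// A blended NPS hides cohort dynamics, so compute it per group.
function npsByGroup(responses: NpsResponse[], key: (r: NpsResponse) => string): Record<string, number> {
  const groups = new Map<string, number[]>();
  for (const r of responses) groups.set(key(r), [...(groups.get(key(r)) ?? []), r.score]);
  const out: Record<string, number> = {};
  for (const [group, scores] of groups) {
    const promoters = scores.filter(s => s >= 9).length;
    const detractors = scores.filter(s => s <= 6).length;
    out[group] = Math.round(((promoters - detractors) / scores.length) * 100);
  }
  return out;
}

// Rule 6: the QoQ change per cohort is the signal, not the absolute number.
function qoqDelta(thisQ: Record<string, number>, lastQ: Record<string, number>): Record<string, number> {
  const out: Record<string, number> = {};
  for (const group of Object.keys(thisQ)) out[group] = thisQ[group] - (lastQ[group] ?? 0);
  return out;
}

// Usage: npsByGroup(responses, r => r.tier); npsByGroup(responses, r => tenureBand(r.tenureDays))
```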
Output:
1. The survey trigger logic and cohort definition
2. The PostHog event payload: { user_id, score, response_text, tier, tenure_days, segment, surveyed_at }
3. The dashboard with NPS by segment, by tenure, by tier — and the QoQ trend
4. The qualitative analysis cadence: read all Detractor responses within 7 days of arrival; route to support / product / sales as appropriate
5. The trigger thresholds: an individual Detractor response triggers an outreach email from the founder within 48 hours
Then handle the critical closed-loop step: every Detractor gets a personal follow-up. This converts ~30% of Detractors into Passives or Promoters within a quarter. Skip this and you're collecting NPS for show.
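A sketch of the response capture and the Detractor routing, using posthog-node for the event; `notifyFounder` stands in for whatever actually sends the founder's outreach email:

```typescript
import { PostHog } from 'posthog-node';

const posthog = new PostHog(process.env.POSTHOG_API_KEY!, { host: 'https://us.i.posthog.com' });

// Assumed helper: however you actually send the founder outreach (Resend, Slack webhook, etc.)
declare function notifyFounder(userId: string, score: number, text: string): Promise<void>;

async function recordNpsResponse(
  userId: string,
  score: number,
  responseText: string,
  meta: { tier: string; tenureDays: number; segment: string },
) {
  // Event payload from the Output list above
  posthog.capture({
    distinctId: userId,
    event: 'nps_response',
    properties: {
      score,
      response_text: responseText,
      tier: meta.tier,
      tenure_days: meta.tenureDays,
      segment: meta.segment,
      surveyed_at: new Date().toISOString(),
    },
  });

  // Detractors (0-6) trigger a personal founder follow-up within 48 hours
  if (score <= 6) await notifyFounder(userId, score, responseText);
}
```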
Three principles I've watched founders re-learn:
- NPS Detractors are gold. They told you what's wrong; most customers wouldn't have. A Detractor outreach email — "I saw your feedback, want to chat?" — is the highest-leverage 15 minutes you'll spend that month.
- Don't optimize for the number. Founders who optimize NPS by surveying only happy users end up with high scores and rising churn. The honest score (with full sample) is the useful one.
- Track distribution, not just average. An NPS of 40 from 50% Promoters and 10% Detractors is healthier than NPS of 40 from 70% Promoters and 30% Detractors. Look at the full curve.
3. CSAT After Support Tickets
The most actionable survey. One question after every closed support ticket.
Help me set up post-support CSAT for [your product] using [Plain / Help Scout / Intercom / your support tool].
The single question (sent automatically when support marks the ticket as resolved):
> "How was your experience with this support conversation?"
> 😞 Bad / 😐 OK / 😊 Great
Plus one optional follow-up:
- "Bad" or "OK": "What could we have done better?"
- "Great": "Is there anything else we can help with?"
Why this format wins:
- 3 emoji options instead of a 5- or 7-point scale (faster to answer, response rate ~30%+)
- Sent immediately after resolution (memory is fresh, response is honest)
- Open follow-up only on "Bad" / "OK" — preserves response rate
Setup rules:
1. **Trigger off the ticket-resolution event.** The survey fires when the customer marks the ticket resolved, or 48 hours after the agent's last reply if the customer hasn't replied.
2. **Send via the same channel as the support conversation.** If the ticket was email, survey by email. If chat, survey by in-product modal next session. Channel mismatch ("we replied to your email by sending a survey to your phone") feels weird.
3. **Aggregate weekly, not monthly.** CSAT has fast feedback loops; monthly reviews lose detail.
4. **Segment by responder.** If you have multiple support people, CSAT by responder identifies coaching opportunities. Don't make this a public scoreboard; treat it as feedback for the responder.
5. **Read every "Bad" within 24 hours.** A Bad response is a save opportunity. Reach out personally; turn the ticket into a relationship.
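A minimal sketch of the trigger logic in rules 1 and 2, assuming a simple ticket record pulled from your support tool; the field names are placeholders:

```typescript
interface Ticket {
  id: string;
  channel: 'email' | 'chat';
  resolvedAt: Date | null;
  lastAgentReplyAt: Date | null;
  lastCustomerReplyAt: Date | null;
  csatSentAt: Date | null;
}

const HOURS_48 = 48 * 60 * 60 * 1000;

function shouldSendCsat(ticket: Ticket, now = new Date()): boolean {
  if (ticket.csatSentAt) return false; // one survey per ticket
  if (ticket.resolvedAt) return true;  // rule 1: ticket marked resolved
  if (!ticket.lastAgentReplyAt) return false;
  // rule 1, second clause: 48 hours since the agent's last reply with no customer reply after it
  const customerWentQuiet =
    !ticket.lastCustomerReplyAt || ticket.lastCustomerReplyAt < ticket.lastAgentReplyAt;
  return customerWentQuiet && now.getTime() - ticket.lastAgentReplyAt.getTime() >= HOURS_48;
}

// Rule 2: survey over the same channel the conversation happened in.
function csatChannel(ticket: Ticket): 'email' | 'in_product_modal' {
  return ticket.channel === 'email' ? 'email' : 'in_product_modal';
}
```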
CSAT benchmarks 2026:
- B2B SaaS average: 70-80% Great + 15-20% OK + 5-10% Bad
- Best-in-class: >85% Great
- Below 70% Great: process problem (slow response, untrained staff, broken product)
Output:
1. The integration with [your support tool]'s survey/CSAT feature
2. The trigger logic for the survey
3. The PostHog event payload to fire on response
4. The weekly review template
5. The "Bad response" save playbook: who responds, in what timeframe, with what tone
A critical principle: CSAT measures the support interaction, not the product overall. Don't conflate "the support was bad" with "the product is bad." A frustrated customer with a great support experience often becomes a happier customer than the one who never had a problem. Support is the relationship rescue; CSAT measures whether you nailed it.
4. The Churn / Cancel Exit Survey
The only survey with high response rates because it's part of the cancellation flow.
Design the churn / cancel exit survey. The customer has clicked "Cancel my subscription"; before the cancel completes, the survey runs.
Required structure:
**Question 1 (multiple choice, single select):**
> "Why are you canceling?"
> - Too expensive
> - Doesn't have a feature I need
> - Found a better alternative
> - Stopped needing the product
> - Hard to use / difficult to figure out
> - Service / support issue
> - Switching tools / company changes
> - Other (please explain)
**Question 2 (open text):**
> "Anything else we should know? (optional)"
For Q1 = "Found a better alternative":
- Conditional follow-up: "Which one?" — important data; tracks competitive movement
For Q1 = "Too expensive":
- Conditional offer (per [Reduce Churn](reduce-churn-chat.md)): "Would [a discount / pause feature / smaller tier] keep you?"
- Don't make this an obstacle to cancel; make cancellation easy regardless of survey response
For Q1 = "Doesn't have a feature":
- Conditional follow-up: "Which feature?"
- Track these in a dedicated cancel-feedback bucket; weight in product roadmap
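A minimal sketch of the conditional follow-up logic above; the reason keys and question copy are illustrative, so wire them to whatever your cancel form actually shows:

```typescript
type CancelReason =
  | 'too_expensive' | 'missing_feature' | 'better_alternative' | 'no_longer_needed'
  | 'hard_to_use' | 'support_issue' | 'company_changes' | 'other';

function followUpFor(reason: CancelReason): { question: string; offer?: string } | null {
  switch (reason) {
    case 'better_alternative':
      return { question: 'Which one?' }; // competitive-movement signal
    case 'too_expensive':
      // An offer, never an obstacle: cancellation proceeds whatever they answer
      return { question: 'Would a smaller tier or a pause keep you?', offer: 'pause_or_downgrade' };
    case 'missing_feature':
      return { question: 'Which feature?' }; // lands in the cancel-feedback bucket
    default:
      return null; // everyone else just sees the optional open-text question
  }
}
```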
Setup rules:
1. **Cancel must complete regardless of survey.** If the customer skips the survey, the cancellation still goes through. Forcing the survey is illegal in some jurisdictions (CASL, EU consumer law) and feels predatory everywhere.
2. **No exit modals that look like the cancellation didn't work.** Customers should never wonder if they're actually canceled.
3. **Confirmation screen with their data export option.** "You're canceled. We've kept your data for 30 days in case you change your mind. [Export your data] or [Delete now]." Per [Data Trust](data-trust-chat.md).
4. **Don't ambush with a sales call offer at this stage.** They're already gone emotionally. A 1-line "if you ever want to chat about why" is fine; a "let's get on a call" CTA is hostile.
5. **Email the founder for any cancel by a customer paying $200+/mo.** Tier matters; high-tier cancels deserve personal follow-up regardless of reason.
Output:
1. The survey UI in the cancellation flow
2. The conditional logic for follow-up questions
3. The cancel-completion confirmation screen
4. The data-export integration
5. The PostHog event payload
6. The dashboard: top cancellation reasons by month, with QoQ trend
7. The auto-route: high-tier cancels → founder email; "Found a better alternative" → competitive intel doc; "Doesn't have a feature" → product backlog with frequency count
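A sketch of the auto-route in item 7; the $200/mo threshold comes from the setup rules, and the destination functions are placeholders for your own email, docs, and backlog plumbing:

```typescript
interface CancelEvent {
  customerId: string;
  mrr: number;             // what they were paying per month
  reason: string;          // Q1 answer, e.g. 'better_alternative'
  alternative?: string;    // answer to "Which one?"
  missingFeature?: string; // answer to "Which feature?"
  note?: string;           // open-text Q2
}

// Assumed helpers: stand-ins for your email provider, competitive-intel doc, and issue tracker
declare function emailFounder(ev: CancelEvent): Promise<void>;
declare function logCompetitiveIntel(ev: CancelEvent): Promise<void>;
declare function addToBacklogWithCount(ev: CancelEvent): Promise<void>;

async function routeCancel(ev: CancelEvent): Promise<void> {
  if (ev.mrr >= 200) await emailFounder(ev);                             // high-tier: personal follow-up
  if (ev.reason === 'better_alternative') await logCompetitiveIntel(ev); // track competitive movement
  if (ev.reason === 'missing_feature') await addToBacklogWithCount(ev);  // weight in the roadmap
}
```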
Cancel-survey response rates are 80%+ because the survey is part of the flow. Treat the data accordingly — it's the most reliable cancel-reason signal you'll get. Don't dilute it by making cancel hard.
5. Pick the Survey Tooling
You don't need a dedicated survey platform for the four surveys above. Most teams overspend.
Help me pick the survey tooling for [your product].
Four options:
**Option A: PostHog Surveys** (built into PostHog)
- Free for indie SaaS
- In-product modal surveys, email surveys, link surveys
- Targets users by feature flag / cohort
- Response data flows into PostHog analytics for cross-correlation
- Best for: indie SaaS already using PostHog (per [PostHog Setup](posthog-setup-chat.md))
**Option B: Refiner / Sprig / Userflow surveys**
- Dedicated product survey tools
- More polish than PostHog Surveys for some UX patterns
- Pricing: $50-$300/mo
- Best for: teams that need very specific in-product survey UX
**Option C: Built-in support tool surveys** (Plain CSAT, Help Scout Beacon CSAT)
- Bundled with the support tool for CSAT
- Best for: CSAT specifically (don't try to run NPS through these)
**Option D: Email-only survey via [Email Provider](https://www.vibereference.com/backend-and-data/email-providers)**
- Send survey link via email; respondents click through to a Typeform / Tally / Google Form
- Lowest cost
- Lower response rates than in-product
- Best for: pre-launch / very early when in-product is overkill
For [your stage], recommend the right setup:
- Pre-revenue: PostHog Surveys (free) + email surveys for Sean Ellis PMF
- Indie SaaS, $1K-$50K MRR: PostHog Surveys + support-tool CSAT
- Mid-market: PostHog Surveys + Refiner/Sprig if specific needs + Plain/Help Scout CSAT
Output the tool stack with cost, plus the integration plan.
The single rule: don't over-tool surveys before you've proven the surveys themselves drive change. Founders who buy a $300/mo dedicated survey tool before running their first 5 PMF surveys produce zero insight at higher cost. PostHog Surveys is genuinely fine for 90% of indie SaaS in 2026.
6. Close the Loop — Publicly
The single highest-leverage practice in customer feedback. Most teams collect feedback, never close the loop, and watch response rates decay.
Build the "close the loop" cadence.
Quarterly, publish a "You said, we did" update. Format:
# Customer Feedback Q[N] [Year] — What We Heard, What We Did
## Top 5 things we heard from feedback this quarter
1. [Theme 1] — mentioned by [N] customers via [survey type]
2. [Theme 2] — ...
3. ...
## What we shipped or changed in response
- [Theme 1] → [specific change shipped on date X]; PR / changelog link
- [Theme 2] → ...
- [Theme 3] → "We heard this consistently and have prioritized it for Q[N+1]; estimated ship by [date]"
- [Theme 4] → "We considered this and decided not to ship — here's why" (transparency about NOT-doing builds more trust than silence)
## What's coming next
- The next quarter's roadmap with how feedback shaped it
## How to give feedback
- Link to in-product feedback widget
- Link to public roadmap if you have one
- Founder email
Distribution:
1. Email to all customers who responded to surveys this quarter (THEY made this happen; they get it first)
2. Email to all paying customers
3. Public blog post (acquisition + transparency win)
4. Tweet / LinkedIn (social proof)
5. Link from in-product changelog
Cadence: every 90 days, paired with the quarterly NPS results.
Output the template + the distribution checklist.
Three principles:
- Customers reward "you said, we did" with sustained response rates. Teams that publish quarterly feedback summaries see response rates rise, not decay, over time.
- Naming what you decided NOT to ship is as important as what you did. It signals you're listening and making deliberate choices.
- Personal email to survey respondents > generic blog post. The respondents who told you something get the personal acknowledgment; everyone else gets the public version.
7. Audit Response Rates as Trust Indicator
Survey response rates are themselves a leading indicator of customer trust.
Set up the survey response-rate audit.
Track quarterly:
- Sean Ellis PMF: response rate (denominator: surveyed; numerator: any response)
- NPS: response rate (denominator: surveyed; numerator: gave score)
- CSAT: response rate (denominator: closed tickets; numerator: rated)
- Cancel survey: response rate (denominator: cancellations; numerator: completed Q1)
Healthy benchmarks:
- Sean Ellis PMF (in-product, engaged users): 15-25%
- NPS (in-product, paying customers): 15-25%
- CSAT (post-ticket, immediate): 25-40%
- Cancel survey (in-flow): 80-95%
Trend interpretation:
- Stable response rates: program is healthy
- Declining response rates over 2+ quarters: customers are losing trust in the feedback loop
- Sharp declines: usually paired with a specific incident (recent outage, controversial product change, perceived ignoring of past feedback)
Trigger investigation when any rate drops 30%+ QoQ. Investigate by:
- Reading the most recent open-text responses for sentiment shifts
- Checking what changed in the product / company in the prior quarter
- Asking customers directly (founder email to 10 randomly-selected non-respondents: "I noticed you didn't respond to our last NPS — anything I should know?")
Output the dashboard config and the alert thresholds.
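A minimal sketch of the rate math and the 30% QoQ alert, with made-up example counts; swap in real numbers from wherever you log survey sends and responses:

```typescript
interface QuarterStats { surveyed: number; responded: number; }

const responseRate = (q: QuarterStats): number =>
  q.surveyed === 0 ? 0 : q.responded / q.surveyed;

// Flag any survey whose response rate fell 30%+ relative to last quarter.
function needsInvestigation(thisQ: QuarterStats, lastQ: QuarterStats): boolean {
  const prev = responseRate(lastQ);
  if (prev === 0) return false;
  return (prev - responseRate(thisQ)) / prev >= 0.3;
}

// Illustrative quarterly check across the four surveys (made-up counts):
const audit: Record<string, { thisQ: QuarterStats; lastQ: QuarterStats }> = {
  pmf:    { thisQ: { surveyed: 180, responded: 34 }, lastQ: { surveyed: 160, responded: 38 } },
  nps:    { thisQ: { surveyed: 420, responded: 55 }, lastQ: { surveyed: 400, responded: 88 } },
  csat:   { thisQ: { surveyed: 95,  responded: 31 }, lastQ: { surveyed: 80,  responded: 27 } },
  cancel: { thisQ: { surveyed: 12,  responded: 11 }, lastQ: { surveyed: 9,   responded: 8  } },
};

for (const [survey, q] of Object.entries(audit)) {
  if (needsInvestigation(q.thisQ, q.lastQ)) {
    console.log(`${survey}: response rate dropped 30%+ QoQ, investigate`);
  }
}
```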
The most overlooked metric in customer feedback: the response rate itself. An NPS of 40 from a 25% response rate is more reliable than an NPS of 60 from an 8% response rate. Trust as measured by willingness-to-respond is downstream of trust as measured by closing the loop.
What Done Looks Like
By end of week 2 of this work:
- Sean Ellis PMF survey running for newly-activated users
- In-product NPS running quarterly for engaged paying customers (or skip if pre-PMF)
- CSAT running on every closed support ticket
- Cancel exit survey running on the cancellation flow
- Survey tool picked (likely PostHog) and integrated
- Detractor outreach playbook documented
- First "you said, we did" template drafted
Within 90 days:
- 1 full quarter of NPS data with QoQ trend
- 1 published "you said, we did" customer email
- 5+ Detractor outreach conversations completed (and 2-3 saves achieved)
- Cancel-reason distribution stable enough to inform roadmap
Within 12 months:
- Survey response rates stable or rising (trust indicator)
- Quarterly cadence of feedback-driven product changes
- Customer-facing reputation for "this team listens"
- Cohort NPS trajectory visible (e.g., 12-month-tenure customers vs newer)
Common Pitfalls
- Running every possible survey. Customers tolerate 1-2 surveys per quarter. More than that and the response rates collapse. Pick the 4 that matter; skip the rest.
- Long surveys. Anything over 3 questions punishes respondents. Keep it 1-3 max.
- NPS pre-PMF. Below product-market fit, NPS scores are noise. Use Sean Ellis instead until you cross 40%.
- No follow-up loop. Collecting feedback and not acting on it is worse than not collecting it; customers feel ignored.
- Optimizing the score by surveying only happy users. Selection bias produces flattering numbers and rising churn. Survey honestly.
- Public detractor scoreboarding. Surveys are for learning, not for pressuring individual support staff. Treat results as coaching data, not performance reviews.
- Cancel surveys that block cancellation. Illegal in many jurisdictions; hostile everywhere. Cancel must complete regardless of survey.
- Quarterly surveys at month 1. Pre-PMF, surveys are noise. Wait for actual usage data first.
- Forgetting the Detractor outreach. The single highest-leverage 15 minutes after every NPS round; most teams skip it.
- No "you said, we did" cadence. Without it, respondents feel ignored and response rates decay.
Where Customer Feedback Surveys Plug Into the Rest of the Stack
- Activation Funnel — PMF survey targets activated users
- Reduce Churn — cancel survey + NPS Detractors feed the churn-risk model
- Customer Support — CSAT lives in support workflows
- Onboarding Email Sequence — survey emails follow deliverability rules
- Email Deliverability — survey emails depend on inbox placement
- PostHog Setup — instrumentation layer for all 4 surveys
- Changelog & Roadmap — "you said, we did" feeds the changelog cadence
- Customer References — Promoters from NPS are reference candidates
- Trial-to-Paid Conversion — trial-end micro-survey feeds the conversion analysis
- Product Analytics Providers — survey data lives alongside behavioral analytics
What's Next
Customer feedback surveys, done right, are one of the cheapest and highest-leverage investments in indie SaaS. Done wrong, they're vanity decoration that erodes trust quietly. The team that runs the four right surveys, closes the loop quarterly, and treats response rates as a leading indicator builds compounding customer relationships. The team that runs comprehensive monthly surveys with no follow-up watches response rates die and never figures out why.
Build the discipline now while the customer base is small enough to read every response personally. Surveys that work at 50 customers scale to 5,000 with the same shape; surveys that don't work at 50 customers don't get fixed at 5,000 — they get abandoned.