Build a Public API That Customers Actually Use
Public API Strategy for Your New SaaS
Goal: Ship a public API that 5–15% of your paying customers adopt within 6 months — and that becomes a moat the longer it runs. Avoid the failure mode where you ship a half-baked "API" that's really just an internal-cleanup endpoint with auth, no docs, and no SDK, and that nobody actually integrates with.
Process: Follow this chat pattern with your AI coding tool such as Claude or v0.app. Pay attention to the notes in [brackets] and replace the bracketed text with your own content.
Timeframe: Spec + first read endpoint shipped in 1 week. Auth + write endpoints + docs + SDK skeleton in week 2. First paying customer integration live in week 3–4. By month 3 you should have 3–5 customers in production.
Why Most Indie SaaS APIs Fail
Three failure modes hit founders the same way:
- The API is the internal API exposed. The team didn't design a public surface — they slapped auth on top of the same endpoints the front-end uses, including the badly-named ones, the unstable ones, and the ones that return your internal database column names. Customers integrate, you refactor, they break. The API loses trust within 60 days and never recovers it.
- No tier separation between "API exists" and "API is supported." Free-tier customers hit the API as hard as paying customers, support tickets pile up, and you end up doing $500/month of unpaid infrastructure work for $0/month accounts. Worse: your top customer can't get rate-limited above the noise.
- No SDK, no docs, no events, no webhook receiver. A REST API on its own is table stakes. The integration stack — TypeScript SDK, OpenAPI schema, examples in 3 languages, webhook signing, replay tooling, sandbox environment — is what gets your API actually used. Without it, only the most determined customer integrates, and they integrate poorly.
The version that works is structured: design the public surface separate from your internal one, version it properly, gate it to paying tiers, ship the SDK and docs as part of v1, and treat the API as a first-class product with its own changelog.
This guide assumes you have already done PostHog Setup (you cannot improve what you cannot measure — API adoption needs telemetry), have Usage-Based Billing considered (API quotas often map to a usage tier), and have completed Pricing Page (API access is usually a tier gate).
1. Decide What the API Is For
Before any code, decide who the API is for and what jobs they're doing. Most APIs fail because the founder built one without answering this.
I'm building [your product] at [your-domain.com]. The product does [one-sentence description]. I'm planning a public API. Help me decide what the API is for.
Three categories of customer use case:
1. **Read-only data export** — customers pulling their data into a warehouse (BigQuery / Snowflake / Postgres) or a BI tool (Metabase / Looker)
- Right resources to expose: list endpoints with cursor pagination, point-in-time exports, filterable query parameters
- Wrong things to expose: write endpoints, complex computed metrics that change
2. **Workflow automation** — customers building Zapier/n8n/Make automations or internal scripts that fire on events in our product and take actions
- Right resources to expose: webhooks (event-driven), idempotent write endpoints, a stable event schema
- Wrong things to expose: every internal CRUD operation; expose only the actions customers actually want to automate
3. **Embedded / OEM** — customers building [their product] on top of [our product] as a backend
- Right resources to expose: a full read+write API with strong rate limits, multi-tenant scoping, and a service-account auth model separate from user auth
- Wrong things to expose: end-user UI primitives that won't fit the customer's frontend
For my product specifically, output:
- Which of the three categories is the dominant use case for our buyer (the indie SaaS founder integrating with us, the agency client, the data-team customer, etc.)
- The 3-5 specific jobs customers will use the API to do (be concrete: "export every conversation from the last 30 days into Snowflake nightly")
- The 3-5 things we should NOT expose in v1 (the things customers will ask for that should wait — internal flags, admin features, AI-prompt-tuning endpoints, etc.)
- What "API access" means as a pricing gate: is it included on every paid tier, or only on specific tiers, and what's the rate limit on each
Three principles that prevent v1 disasters:
- The public API is a different product than your internal API. It needs its own identity, its own version, its own naming conventions, its own changelog. Even if v1 of the public API reuses your internal endpoints under the hood, the public-facing schema and URLs must be designed separately.
- You can always add endpoints. You cannot easily remove them. Once a customer integrates against `GET /v1/conversations?ordered_by=foo`, that parameter is your contract for the next 5 years. Ship the smallest surface that does the dominant job, not the largest surface.
- Rate limits are a feature, not a punishment. Properly tiered rate limits let you offer the API on every paid tier without your top customer getting starved by free-tier abuse.
2. Design the Resource Model
Pick the nouns. Pick them carefully. Most of the cost of API maintenance is fighting names you regret.
For my product, design the public REST API resource model. My core domain entities are [list 3-7 entities — e.g., for a CRM: Contact, Company, Deal, Activity, Note]. My event types that fire when things change are [list 5-10 — e.g., contact.created, deal.stage_changed, note.added].
For each entity:
1. Pick the public name (often different from the internal name — e.g., internal "user_profile" might be public "user")
2. List the fields exposed in v1 (start small — exclude internal-only flags, audit columns, derived fields that don't have stable definitions)
3. List the fields explicitly NOT exposed and why (so future-me doesn't second-guess)
4. URL pattern: collection (GET /v1/contacts) and item (GET /v1/contacts/{id})
5. ID format: I want opaque prefixed IDs like "ctc_01HXX..." for contacts (not auto-increment integers, not raw UUIDs — Stripe-style for readability and to prevent accidental cross-resource ID collisions)
6. Filter parameters supported on the list endpoint (start with 2-3, not 20)
7. Include parameter for related resource expansion (e.g., ?include=company on a contact endpoint returns the company inline). Document which expansions are valid in v1.
For each event:
1. Public event name in dot.notation (consistent: noun.verb_past_tense, e.g., "contact.created" not "contact_created" not "ContactCreate")
2. Payload shape (top-level keys: event_id, event_type, occurred_at, data{}, account_id, livemode boolean)
3. Whether the event fires for both API-driven and UI-driven changes (it should — customers expect parity)
Output the schema as a single OpenAPI 3.1 document (YAML preferred) plus a separate event catalog markdown file. No implementation yet.
A few hard-won rules:
- Use prefixed opaque IDs. `ctc_01HXX0M4...` reads better in logs, makes cross-resource debugging easier, prevents accidental ID-mixing bugs, and lets you change the underlying ID format later without breaking customers. Stripe, Slack, and Linear all use this pattern. The minor cost of generating them once dwarfs the maintenance cost of integer IDs forever.
- Cursor pagination, never offset pagination. Offset pagination breaks under concurrent writes (the same record appears twice or skips entirely as new records insert). Cursor pagination is monotonic and stable. Customers will thank you. Spec: `?limit=100&cursor=opaque_string`.
- Don't ship a "search" endpoint in v1. Search is its own problem domain — fuzzy matching, ranking, language handling, latency, indexing strategy. Ship list-with-filters first; revisit search when 3+ customers explicitly ask for it.
- Field naming consistency. Pick `created_at` (snake_case timestamps as ISO 8601 strings) and use it everywhere. Inconsistency is what makes APIs feel amateur. AI tooling will inherit your inconsistencies if you don't catch them at this step.
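The prefixed-ID rule above can be sketched as a small generator. This is a ULID-style sketch, not a spec: the Crockford base32 alphabet, the 10-character millisecond time component, and the 20-character random tail are implementation choices you can adjust.

```typescript
import { randomBytes } from "node:crypto";

// Crockford base32 — no ambiguous characters (I, L, O, U).
const ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

// Generate a Stripe-style prefixed opaque ID, e.g. "ctc_01HXX...".
// The time component keeps IDs roughly sortable; the random tail
// prevents guessing. The prefix makes the resource type obvious in
// logs and catches cross-resource ID mix-ups early.
export function prefixedId(prefix: string, randomChars = 20): string {
  // 48-bit millisecond timestamp encoded as 10 base32 chars (ULID-style).
  let ts = Date.now();
  let timePart = "";
  for (let i = 0; i < 10; i++) {
    timePart = ALPHABET[ts % 32] + timePart;
    ts = Math.floor(ts / 32);
  }
  const bytes = randomBytes(randomChars);
  let randPart = "";
  for (let i = 0; i < randomChars; i++) {
    // Bytes are uniform over 0-255, and 256 is divisible by 32, so no modulo bias.
    randPart += ALPHABET[bytes[i] % 32];
  }
  return `${prefix}_${timePart}${randPart}`;
}
```

Usage: `prefixedId("ctc")` for contacts, `prefixedId("cnv")` for conversations, and so on — one prefix per public resource.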
3. Pick the Auth Model
Customers integrate with auth working or they don't integrate at all. Choose deliberately.
Help me design the API auth model for [your product]. My customers are [describe — e.g., "indie founders integrating into their own internal scripts" or "agencies integrating on behalf of their clients" or "platform partners building OEM products on top of us"].
Compare three auth options for my v1:
**Option A: Personal API keys** (each user generates a token from their account settings)
- Best for: customers who use the API for personal scripts, internal automations, or simple BI exports
- Permissions: scoped to the user's role and account
- Lifecycle: user can rotate or revoke from the UI
- Failure mode: if the user leaves the company, their key dies — bad for shared automations
**Option B: Account / workspace API keys** (admin generates a key tied to the workspace, not a user)
- Best for: shared automations, internal scripts that should survive employee turnover
- Permissions: scoped to a role label on the key (read-only, full-access, custom-scoped)
- Lifecycle: admin rotates or revokes; multiple keys allowed per workspace
- Failure mode: stolen workspace key has full access to the workspace
**Option C: OAuth 2.0 (authorization code or client credentials)**
- Best for: third-party integrators (Zapier, n8n, OEM platforms) acting on behalf of users they don't employ
- Permissions: scopes negotiated at consent time
- Lifecycle: refresh tokens, expirations, revocation flow
- Failure mode: significantly more complex to implement and operate
For my customer profile, recommend a single v1 auth choice. Most indie SaaS in 2026 should ship Option B (workspace API keys) for v1 and add Option C only when an integrator-class customer (Zapier app submission, OEM partner with multiple end-users) demands it.
Output:
1. The chosen v1 auth model with rationale
2. The token format (recommend prefixed: sk_live_..., sk_test_..., so customers can't accidentally paste a live key into a test environment)
3. The permission scopes (start small — read, write, admin — not 47 fine-grained scopes)
4. The HTTP header to send: Authorization: Bearer [token]
5. The error response when auth fails (RFC 7807 problem+json with explicit "type" so SDKs can handle it)
6. A schema for the api_keys table in our database (id, prefix, hashed_token, account_id, created_by_user_id, name, scopes, last_used_at, revoked_at)
7. Token storage: SHA-256 hash in DB, never the raw token. Show last 4 characters in UI.
Generate the code for the auth middleware in [Next.js / SvelteKit / Hono / your framework] that validates the token and attaches the resolved account to the request context.
Three things that will save you future grief:
- Separate test and live tokens by prefix. `sk_test_...` and `sk_live_...` — modeled on Stripe. Customers paste tokens into env files; the prefix prevents catastrophic mistakes.
- Hash tokens at rest, return only at creation. Store the SHA-256 hash of the secret, never the raw value. Show "sk_live_••••••••abc1" in the dashboard. If a customer loses a token, they generate a new one — they cannot retrieve the old one.
- Track `last_used_at` per token. Customers will ask "is this still in use?" before rotating. You answer in 5 seconds.
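The mint-and-hash flow above can be sketched in a few lines. This is an illustrative sketch using Node's crypto; the returned object stands in for your real DB insert against the `api_keys` schema.

```typescript
import { createHash, randomBytes, timingSafeEqual } from "node:crypto";

export function sha256(value: string): string {
  return createHash("sha256").update(value).digest("hex");
}

// Mint a new API key: return the raw token exactly once, store only its hash.
export function mintApiKey(mode: "live" | "test") {
  const secret = randomBytes(24).toString("base64url");
  const token = `sk_${mode}_${secret}`;
  return {
    token,                      // show this to the customer ONCE
    hashedToken: sha256(token), // what goes in the api_keys table
    last4: token.slice(-4),     // what the dashboard displays
  };
}

// Constant-time comparison of an incoming token against the stored hash,
// so response timing doesn't leak information about stored values.
export function tokenMatches(rawToken: string, storedHash: string): boolean {
  const a = Buffer.from(sha256(rawToken), "hex");
  const b = Buffer.from(storedHash, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The auth middleware then becomes: extract the bearer token, hash it, look up the row by hash, check `revoked_at`, update `last_used_at`, and attach the resolved account to the request context.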
4. Implement Rate Limits and Quotas
Rate limits separate "API exists" from "API is supported." Get this right or your top customer pages you on launch day.
Design the rate-limit + quota system for the public API.
Three layers:
1. **Per-token rate limit** — sliding-window or token-bucket, applied at the token level
- Recommend: 100 requests / minute on read endpoints, 30 / minute on write endpoints, for the default paid tier
- Higher tier (e.g., the $X/mo Pro tier): 500/min read, 150/min write
- Highest tier or custom contract: negotiated, often 5000/min+
2. **Per-account daily quota** — total request count per day, per account (separate from rate limit)
- This catches abuse the rate limit can't (a customer making 100/min for 24 hours = 144,000 requests; that's a different problem)
- Default tier: 50,000/day. Pro: 500,000/day. Enterprise: unlimited or contractual.
3. **Webhook delivery rate** — separate budget for outbound webhook deliveries (so a customer with a slow webhook receiver doesn't starve their inbound API budget)
Implement using [Upstash Redis / Redis on Marketplace / Vercel Runtime Cache / your store of choice]. The rate-limit decision must be made in <10ms because it sits in front of every API call.
Response headers on every API response:
- X-RateLimit-Limit: 100
- X-RateLimit-Remaining: 87
- X-RateLimit-Reset: <unix_timestamp_when_window_resets>
- X-Request-ID: <request_uuid_for_support_correlation>
When rate limit is exceeded, return HTTP 429 with:
- A problem+json body explaining the limit
- A Retry-After header (seconds)
- A documentation URL
When daily quota is exceeded, return HTTP 429 with a different error type so SDKs can distinguish.
Output:
- The middleware code that enforces both layers
- The dashboard UI snippet showing the customer their current usage and remaining quota
- The alert config: customers should get an email when they hit 80% of daily quota for two days in a row
- The pricing-page copy explaining the rate limits per tier (so the customer can self-serve the answer to "is this enough for my use case?")
Hard rules to internalize:
- Rate limits must return useful information. A 429 with no `Retry-After` and no remaining-budget header is hostile and forces customers to write their own backoff logic. Send the metadata; let SDKs handle backoff automatically.
- Sliding-window beats fixed-window. Fixed-window rate limits cluster traffic at window boundaries — a customer who hits 100 at 11:59:59 and 100 more at 12:00:01 effectively did 200 in two seconds while staying under the per-minute limit. Sliding-window catches this. Implementation cost is small; UX gain is large.
- Quota and rate limit are different things. Conflating them confuses customers. Document them separately on the pricing page and in the docs.
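The per-token layer can be sketched as a sliding-window log. This in-process version is for illustration only — in production the state lives in Redis (e.g. a sorted set per token) so limits hold across instances, as the spec above assumes.

```typescript
type CheckResult = {
  allowed: boolean;
  remaining: number;
  resetAt: number; // unix seconds when the oldest counted request ages out
};

export class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  check(tokenId: string, now = Date.now()): CheckResult {
    const windowStart = now - this.windowMs;
    // Drop requests that have aged out of the window, then count the rest.
    const recent = (this.hits.get(tokenId) ?? []).filter((t) => t > windowStart);
    const allowed = recent.length < this.limit;
    if (allowed) recent.push(now);
    this.hits.set(tokenId, recent);
    const oldest = recent[0] ?? now;
    return {
      allowed,
      remaining: Math.max(0, this.limit - recent.length),
      resetAt: Math.ceil((oldest + this.windowMs) / 1000),
    };
  }

  // Header values attached to every response, per the spec above.
  headers(r: CheckResult): Record<string, string> {
    return {
      "X-RateLimit-Limit": String(this.limit),
      "X-RateLimit-Remaining": String(r.remaining),
      "X-RateLimit-Reset": String(r.resetAt),
    };
  }
}
```

A blocked request gets the same headers plus `Retry-After: resetAt - now` and the problem+json body.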
5. Ship Webhooks Properly
Webhooks are the second half of any usable API. Most indie webhooks are unsigned and undocumented, have no retry policy, and cost customers a week implementing receivers that should have taken an hour.
Design the outbound webhook system. Cover:
1. **Event sourcing**: when an entity changes via UI or API, fire an event into our internal event bus. The webhook delivery layer subscribes to the bus and fans out to subscribed customers.
2. **Subscription model**: customers create webhook endpoints via the API or UI. Each endpoint has:
- URL
- Subscribed event types (allowlist; "*" allowed for "everything")
- A signing secret (shown once at creation, hashed in DB)
- Active/disabled state
3. **Delivery payload**:
- HTTP POST to the customer URL
- Headers: X-Signature: t=<unix_ts>,v1=<hmac_sha256> (modeled on Stripe), X-Event-ID, X-Event-Type, X-Delivery-Attempt
- Body: the event JSON (event_id, event_type, occurred_at, data, account_id, livemode)
- Timeout: 5 seconds — anything slower we treat as failed
- Success: customer responds 2xx; we mark delivered
- Failure: any non-2xx, timeout, DNS error, TLS error — we mark failed and schedule retry
4. **Retry policy**: exponential backoff. Spec:
- Attempt 1: immediate
- Attempt 2: +1 minute
- Attempt 3: +5 minutes
- Attempt 4: +30 minutes
- Attempt 5: +2 hours
- Attempt 6: +12 hours
- Attempt 7-10: +24 hours each
- After 10 failed attempts spanning ~3 days, disable the endpoint and email the customer
5. **Replay tool**: customers must be able to manually replay any past event from the dashboard or API. This is the single most-requested webhook feature — without it customers don't trust the system.
6. **Signature verification example** in TypeScript, Python, Go, Ruby — published in the docs. Pre-rolled in the SDK.
7. **Webhook deliveries log**: store every delivery attempt with timestamp, response code, response body (truncated to 4KB), latency. Show the last 30 days of deliveries in the customer dashboard.
Output:
- The schema for webhook_endpoints, webhook_deliveries, webhook_events tables
- The delivery worker code (recommend [Vercel Queues / BullMQ / your queue])
- The signature verification example for the docs
- The replay endpoint: POST /v1/webhook_endpoints/:id/deliveries/:event_id/replay
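The retry schedule above maps cleanly to a small lookup the delivery worker can call when scheduling the next attempt — a direct transcription of the spec, with attempt 1 being the immediate first delivery.

```typescript
// Delay in minutes before a given delivery attempt, per the schedule above.
// Returns "disable" past attempt 10: the endpoint is disabled and the
// customer gets an email.
export function retryDelayMinutes(attempt: number): number | "disable" {
  if (attempt === 1) return 0; // immediate first delivery
  const schedule: Record<number, number> = {
    2: 1,    // +1 minute
    3: 5,    // +5 minutes
    4: 30,   // +30 minutes
    5: 120,  // +2 hours
    6: 720,  // +12 hours
  };
  if (schedule[attempt] !== undefined) return schedule[attempt];
  if (attempt >= 7 && attempt <= 10) return 1440; // +24 hours each
  return "disable";
}
```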
Three details that take webhooks from "exists" to "trusted":
- Webhook signing is non-negotiable. Without it, customers are taking unsigned input from the public internet and trusting it. Ship HMAC-SHA256 with a per-endpoint secret. Document the verification snippet in 4 languages.
- Timestamps in the signature prevent replay attacks. The `t=<unix_ts>,v1=<hmac>` style means the customer can reject events older than 5 minutes. Stripe popularized this; copy it.
- A "deliveries" log with manual replay is the difference between a webhook system and a webhook product. Customers will integrate carelessly the first time, lose events to a bug in their receiver, and need to recover. The replay tool is what saves them — and saves you the support burden of telling them you can't.
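The signing scheme can be sketched end-to-end: `sign()` runs in the delivery worker, `verify()` is the snippet that goes in the customer-facing docs. This assumes HMAC-SHA256 over `"<ts>.<payload>"`, which is the Stripe-style construction the spec above names.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Produce the X-Signature header value: "t=<unix_ts>,v1=<hmac_sha256>".
export function sign(
  payload: string,
  secret: string,
  ts = Math.floor(Date.now() / 1000),
): string {
  const mac = createHmac("sha256", secret).update(`${ts}.${payload}`).digest("hex");
  return `t=${ts},v1=${mac}`;
}

// Customer-side verification: reject stale timestamps (replay protection),
// then compare the HMAC in constant time.
export function verify(
  payload: string,
  header: string,
  secret: string,
  toleranceSec = 300, // reject events older than 5 minutes
  now = Math.floor(Date.now() / 1000),
): boolean {
  const parts = Object.fromEntries(header.split(",").map((p) => p.split("=")));
  const ts = Number(parts.t);
  if (!Number.isFinite(ts) || Math.abs(now - ts) > toleranceSec) return false;
  const expected = createHmac("sha256", secret).update(`${ts}.${payload}`).digest("hex");
  const a = Buffer.from(parts.v1 ?? "", "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The same `verify()` logic, ported to Python, Go, and Ruby, is what the docs snippets in step 5 should contain.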
6. Generate the SDK
A REST API without an SDK is a 50% product. Generate one — don't write it by hand.
Generate a TypeScript SDK for the public API from the OpenAPI 3.1 spec we wrote in step 2.
Use [openapi-typescript-codegen / orval / openapi-fetch / Speakeasy / Stainless] — pick one and explain why for my use case. For most indie SaaS in 2026, Stainless or Speakeasy are the highest-quality generated SDKs but require config investment. openapi-fetch is the simplest path to a working SDK in an afternoon.
Output:
- The chosen tool with a 3-bullet rationale
- The generation config (input: openapi.yaml; output: ./sdk/typescript/)
- The package.json for the SDK npm package (name: [your-company]-node, scoped, MIT license, exports for both CommonJS and ESM)
- The README.md showing 5 examples: client init, list resources with pagination, create a resource, handle errors, verify a webhook
- A Node.js test that hits the live test environment and validates a round trip
- The publish workflow (GitHub Actions: on tag push to main, run tests, publish to npm with provenance)
Then generate skeletons for Python (using openapi-python-client or similar) and Ruby. Those don't need to be fully published in v1 — TypeScript first, others if a paying customer asks.
A few SDK realities:
- The SDK is a marketing surface. A polished SDK with great docs converts more developer customers than any landing-page copy. The README is read more carefully than your homepage.
- TypeScript first, always. 70% of new SaaS integrations in 2026 are TypeScript or Python. Ship TS first, Python second if customer demand exists.
- Don't hand-write SDKs for multiple languages. It always rots. Generated SDKs from the OpenAPI spec stay in sync with the API automatically — and when v1.1 ships, you regenerate, not rewrite.
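Whichever generator you pick, the feature customers notice most is auto-pagination. A sketch of what that helper does under the hood — an async generator following `?limit=...&cursor=...` until the server stops returning a cursor. The `{ data, next_cursor }` envelope is an assumed shape; match it to whatever your OpenAPI spec actually defines.

```typescript
// Assumed list-response envelope — align with your OpenAPI spec.
type Page<T> = { data: T[]; next_cursor: string | null };

// Yield every item across all pages, fetching lazily as the caller iterates.
export async function* listAll<T>(
  fetchPage: (cursor: string | null) => Promise<Page<T>>,
): AsyncGenerator<T> {
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor);
    for (const item of page.data) yield item;
    cursor = page.next_cursor;
  } while (cursor !== null);
}
```

Customer-side usage then reads naturally: `for await (const contact of listAll((c) => client.contacts.list({ cursor: c }))) { ... }` (the `client.contacts.list` call is hypothetical — it's whatever your generated SDK exposes).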
7. Write Docs That Don't Suck
Docs are 50% of API adoption. Most indie API docs are an OpenAPI page with no narrative.
Design the docs structure for the public API. Use [Mintlify / Fern / Scalar / Stoplight / Docusaurus] as the docs tool — pick one.
The docs site needs four sections:
1. **Getting Started** (5-10 minute path from zero to first successful API call)
- What the API does (one paragraph, founder voice)
- Get an API key (link to dashboard)
- Make your first call (curl example, then Node example, then Python)
- What you can build with it (3 concrete examples linking to recipes)
2. **Authentication** (one page)
- Token format, prefix meaning
- Storage and rotation guidance
- Rate limits and quota explanation
- Common error codes
3. **API Reference** (auto-generated from OpenAPI spec)
- Resources organized in nav by domain
- Each endpoint shows: description, parameters, request example, response example, error responses
- "Try it" widget that calls the test environment
- Per-endpoint code samples in TS, Python, Ruby, curl
4. **Recipes / Guides** (3-5 narrative tutorials)
- Each recipe is "I want to do X" → step-by-step code
- Examples: "Export every conversation to Snowflake nightly", "Build a Slack bot that posts deal-stage changes", "Sync customers from our product to HubSpot"
- These are what get linked from blog posts, tweets, and Stack Overflow — they drive adoption more than any reference doc
Plus a separate **Changelog** that shows every API change in reverse chronological order, with a "breaking" tag where applicable.
Output:
- The docs nav structure as YAML / TS config
- The first recipe written end-to-end (pick the most common customer use case from step 1)
- The auto-generated API reference deployed at /api/v1
- A Slack/Discord channel link for customer questions (be honest about response time)
Patterns that work:
- Recipes outpace reference docs in adoption. A "How do I export my data?" recipe gets 10x the traffic of the underlying GET endpoint reference. Write the recipes; let the reference auto-generate.
- Code samples in multiple languages, on every endpoint. The customer who only knows Ruby leaves your site if you only show TypeScript. Generated SDK + good docs tooling makes this cheap.
- A real changelog, not a "release notes" PDF. Every breaking change tagged. Every additive change documented. Customers who integrate without a changelog cannot upgrade safely; they don't upgrade; they stop trusting the API.
8. Version and Evolve
You will need to change the API. Plan for it in v1 so you don't paint yourself into a corner.
Design the versioning strategy.
Two reasonable approaches for indie SaaS in 2026:
**A. URL-based versioning (Stripe-lite style)**: every endpoint lives under /v1/. When a breaking change is needed, ship /v2/ alongside /v1/, run both in parallel for at least 12 months, and migrate customers gradually with deprecation headers.
**B. Header-based date pinning (Stripe-classic style)**: every API request sends "API-Version: 2026-04-29". The server applies version-specific request/response transforms. Allows additive evolution without URL bumps. Heavier engineering investment.
For an indie SaaS in v1: choose A. The complexity of header-based versioning only pays off at scale and with a dedicated platform team. Use URL versioning, set a clear policy that you will not break /v1/ without 12 months of notice, and document that policy on the docs homepage.
Output:
1. The version policy (one page in docs):
- We will not introduce breaking changes to /v1/ without 12 months of notice
- Additive changes (new endpoints, new optional fields, new event types) are not breaking
- Removed fields, renamed fields, changed field types, changed semantics are breaking
- All breaking changes are announced in the changelog and mailed to all active integrators
2. A "deprecation header" pattern: when an endpoint is being phased out, return Deprecation: <true|date> and Sunset: <RFC 8594 date>
3. A "preview" pattern: customers who want early access to a new endpoint behind a feature flag can opt in via header (e.g., API-Preview: 2026-05-15-new-endpoint), so you can ship and iterate before committing to v1 inclusion
The single most important commitment to put in writing: "any field we ship in v1 will continue to work for 12 months after deprecation announcement." Customers integrate when they trust this commitment.
Three things to commit to externally and internalize:
- Additive is not breaking. Add new endpoints, new optional fields, new event types — those are safe. Customers won't be affected.
- Renaming is breaking even if "obviously equivalent." `created_at` → `created_at_utc` breaks everyone. Don't do it.
- The 12-month deprecation window is the real commitment. Customers integrate based on trust. A pattern of forced migrations within a year ends adoption.
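The deprecation-header pattern above is small enough to sketch directly. A minimal helper producing the headers for an endpoint being phased out; the migration-guide URL is a placeholder for your real docs page.

```typescript
// Headers for a deprecated endpoint, per the policy above:
// - Deprecation marks the endpoint as deprecated today
// - Sunset (RFC 8594) is the HTTP-date after which it stops working
// - Link points the integrator at the migration guide
export function deprecationHeaders(
  sunsetDate: Date,
  migrationUrl: string,
): Record<string, string> {
  return {
    Deprecation: "true",
    Sunset: sunsetDate.toUTCString(), // RFC 8594 requires an HTTP-date
    Link: `<${migrationUrl}>; rel="sunset"`,
  };
}
```

Attach these to every response from the deprecated route for the full 12-month window, and mirror the same dates in the changelog entry and the integrator email.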
9. Instrument Adoption and Health
You can't grow what you can't see.
Set up the observability for the public API. Three dashboards:
**1. Adoption dashboard** (for me, weekly review):
- Active API tokens (used in last 7 / 30 days)
- Requests per day (overall + per top 20 customers)
- New customer first integration (time from token creation to first non-test request)
- Top 10 endpoints by call volume
- SDK install events (npm download counts via npm-stat or the npm registry downloads API)
**2. Health dashboard** (for me, daily review and pager-on-call):
- Error rate by endpoint (5xx and 4xx separately — 4xx is customer mistakes, 5xx is mine)
- p50, p95, p99 latency by endpoint
- Webhook delivery success rate (24h rolling)
- Rate-limit-hit rate (high values may mean limits too tight or a customer needs a tier upgrade — surface as a sales signal)
**3. Customer-visible dashboard** (in the product):
- Per-account: requests today, today's quota, error rate on their requests in last 24h
- Webhook delivery log (last 30 days)
- API key list with last-used-at and IP origin
- Direct link to their support thread if they have an open ticket about API behavior
Implement using [PostHog / Datadog / Grafana / Better Stack / your observability provider]. Output the queries / dashboards-as-code.
Add a single "API health" line to the existing [PostHog Setup](posthog-setup-chat.md) reporting so I see API trends weekly without opening another tool.
The signal you most want: "customers approaching their quota are buying signals." A customer hitting 80% of daily quota three days in a row is the most qualified upgrade lead in your CRM. Wire the alert directly to your sales workflow.
What Done Looks Like
By end of week 4 after launch:
- One paying customer in production with a real workload running through the API
- Public docs site at /docs or docs.[your-domain].com with reference + getting-started + at least one recipe
- Published SDK on npm with provenance and one tagged release
- Webhook system with signed deliveries, replay UI, and a deliveries log
- Per-token rate limits and per-account daily quotas wired in
- Observability dashboards for adoption + health
- Changelog page with at least the v1 launch entry
By end of month 3:
- 5–10 customers integrated, with at least one driving meaningful API volume
- One or two recipes informed by what real customers are actually doing
- A version policy customers cite when asking about future breakage
- The first request for an endpoint that didn't ship in v1 — answered with either "shipping next month" or "tracked in changelog backlog"
Common Pitfalls
- Shipping the API without a paying-customer use case. Do not build the API for a hypothetical customer. Find the 1–2 paying customers who have asked, build for their workload, ship, then market to the next 10.
- Letting free-tier accounts use the API at the same rate as paid. This kills your unit economics and your support quality. Gate API access to paid tiers, or give free tier a tiny rate limit (e.g., 10 req/min, 1000/day) that's enough for evaluation but not production.
- Not publishing a status page. API customers expect uptime visibility. Use [BetterStack / Statuspage / Instatus] and link from docs.
- Ignoring the SDK and treating "we have an OpenAPI spec" as the SDK. Customers will not generate their own client. Ship the npm package.
- Writing docs once and never updating. Schedule a quarterly docs audit. Out-of-date docs are worse than no docs because they teach customers to mistrust the surface.
- Not separating internal and public APIs. This is the worst long-run mistake. Every minor refactor of internal endpoints risks breaking your external customers. Even if the implementation reuses internal code paths, the public schema, URLs, and naming must be designed and stabilized separately.
Where the Public API Plugs Into the Rest of the Stack
- Pricing Page — API access is usually a tier gate; rate-limit copy lives on the pricing page.
- Usage-Based Billing — once API volume is the metered axis, usage-based billing is what makes API quotas economically meaningful.
- Partner Integration Program — the public API is what makes a partner integration program possible.
- Feature Flags — API preview headers map cleanly onto feature flags.
- PostHog Setup — adoption telemetry plugs into the existing analytics layer.
- Customer Support — API customers ask different questions than UI customers; route accordingly.
- Incident Response — API customers expect tighter SLAs and better post-mortems.
- Changelog & Roadmap — the API changelog lives next to the product changelog with the same cadence.
What's Next
Once the v1 API is in production with paying customers, the limiting factor stops being shipping endpoints and becomes API product management — figuring out what to add next, how to deprecate things gracefully, and how to grow from "we have an API" to "the API is a product." Read every customer integration as a source of feedback. Add the recipe before adding the feature. Talk to the integrators monthly.
Build the discipline now. The team that ships a polished, well-supported v1 API in 2026 will compound integrations into a moat by year 2.