
# Full-Text Search: Ship a Search Feature That Actually Returns the Right Result


Search Strategy for Your New SaaS

Goal: Ship in-product search that returns the right answer fast — typo-tolerant, ranked by relevance and recency, scoped to the user's permissions, indexed in near-real-time after writes, and with a great empty state. Avoid the failure modes where founders ship `WHERE name LIKE '%query%'` (slow, no ranking, no typo tolerance), expose data across tenant boundaries (a catastrophic privacy breach), or pick Elasticsearch on day one and spend three weeks operating a cluster instead of building product.

Process: Follow this chat pattern with your AI coding tool such as Claude or v0.app. Pay attention to the notes in [brackets] and replace the bracketed text with your own content.

Timeframe: Postgres FTS or Meilisearch + a basic search UI shipped in 2-3 days. Faceted filtering + permissions enforcement + analytics in week 1. Ranking tuning + autocomplete in week 2. Quarterly review baked in.


## Why Most Founder Search Is Broken

Three failure modes hit founders the same way:

  • `WHERE name LIKE '%query%'`. Founder ships this for v1. It works fine on 100 rows. At 100K rows the query takes 4 seconds; at 1M it locks the database. Customer searches "invoice" and gets zero results because they typed "invioce." Customer searches "Acme" and the most-recent Acme invoice is buried because the SQL doesn't rank.
  • No tenant scoping. Founder uses an external search index (Algolia, Elasticsearch). Forgets to scope queries by `workspace_id`. Customer A searches "John" and gets results from Customer B's data. Privacy violation; potentially breach-disclosure-worthy.
  • Index drift. Search results are stale because the indexer fell behind. A customer creates a document; searches for it; can't find it; assumes the product is broken. Or worse: an indexer bug means deleted records still appear in search.

The version that works is structured: pick the right backend for your scale, scope every query by tenant, index in near-real-time after writes, rank by relevance + recency, and treat search as a first-class product feature.

This guide assumes you have already done Authentication (search is user-scoped), have shipped Multi-Tenant Data Isolation (search must respect tenant boundaries), have considered Search Providers (Postgres FTS / Meilisearch / Typesense / Algolia / Elasticsearch), and have shipped Roles & Permissions (RBAC) (per-record permissions filter results).


## 1. Decide Where Search Lives First

Before writing code, decide where the search index lives. Different backends, different trade-offs.

Help me decide which search backend to use for [my product].

The options:

**Option 1: Postgres full-text search** (the 60% case for indie SaaS)
- Use the database you already have
- `tsvector` + `tsquery` + GIN indexes
- Good enough for ~1M rows on standard hardware
- Trigram extension (`pg_trgm`) for typo tolerance
- No extra infrastructure
- Limitations: ranking is OK but not great; complex faceting is awkward

**Option 2: Meilisearch** (modern OSS default)
- Lightweight; runs in a container
- Typo tolerance built-in; great defaults
- Fast (millisecond-scale)
- MIT-licensed; self-hostable
- Hosted as Meilisearch Cloud
- $0 (self-host) to ~$30/mo (hosted)
- Limitations: smaller community than Algolia/Elasticsearch

**Option 3: Typesense** (Algolia OSS alternative)
- Similar feel to Meilisearch
- GPL-licensed; self-hostable
- Hosted as Typesense Cloud
- Strong typo tolerance + faceting
- $0 (self-host) to ~$30/mo (hosted)

**Option 4: Algolia** (premium hosted)
- Fastest, most polished hosted search
- $1+/1K searches; can scale to $1K+/mo fast
- Extensive features (personalization, analytics, A/B test)
- Closed-source; vendor lock-in
- Pick when search is the product (e-commerce, catalog) and budget allows

**Option 5: Elasticsearch / OpenSearch** (enterprise / advanced)
- Most powerful; most complex
- Self-hosted Elasticsearch is significant ops
- AWS OpenSearch / Elastic Cloud as managed options
- Strong for log search, complex aggregations, geo, ML
- $100s+/mo at any real scale
- Overkill for indie product search

**Option 6: Vector / semantic search** (per [vector databases](https://www.vibereference.com/backend-and-data/vector-databases))
- For LLM-powered semantic search (NOT keyword search)
- Pinecone, Qdrant, Weaviate, pgvector
- Use IF your search needs to find conceptually-similar results
- Often paired with keyword search (hybrid)

**Decision criteria**:

- Index size <500K rows, simple queries: Postgres FTS
- Index 500K-10M, need typo tolerance: Meilisearch / Typesense
- Need polish at scale, budget OK: Algolia
- Complex aggregations, log search: Elasticsearch
- Semantic / LLM search: vector DB (often hybrid)

For my product, ask:
- How big is the index today? In 12 months?
- Can it run on Postgres or do I need a separate service?
- What's my ops capacity?
- What's the budget?

Output:
1. The chosen backend with reasoning
2. The indexed entity types (documents? users? messages? products?)
3. The expected scale at 12 months
4. The ops / budget plan

The biggest unforced error: picking Elasticsearch on day one because "we'll need it eventually." Most indie SaaS never need Elasticsearch. Postgres FTS or Meilisearch carries you to $5M+ ARR. Operational simplicity compounds.
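If you want this decision recorded as code — for a runbook or an architecture doc — the criteria above can be sketched as a function. The type names and thresholds here are illustrative, not hard limits; adjust for your hardware and query complexity.

```typescript
type Backend =
  | 'postgres-fts'
  | 'meilisearch-or-typesense'
  | 'algolia'
  | 'elasticsearch'
  | 'vector-db'

interface SearchNeeds {
  rowCount: number                  // expected index size at 12 months
  searchIsTheProduct: boolean       // e-commerce catalog, content-first site
  needsComplexAggregations: boolean // log search, heavy analytics queries
  semanticSearch: boolean           // conceptually-similar results, LLM-powered
}

// Encodes the decision criteria above; thresholds are rough guides.
function chooseBackend(n: SearchNeeds): Backend {
  if (n.semanticSearch) return 'vector-db'              // often hybrid with keyword
  if (n.needsComplexAggregations) return 'elasticsearch'
  if (n.searchIsTheProduct) return 'algolia'
  if (n.rowCount < 500_000) return 'postgres-fts'
  return 'meilisearch-or-typesense'                     // 500K-10M rows
}
```

Encoding it this way also makes the migration trigger explicit: re-run the function when your 12-month projection changes.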


## 2. Use Postgres Full-Text Search If You Can

For most indie SaaS in 2026, Postgres FTS is the right answer. It's already in your stack; it's fast enough; it's simple to operate.

Help me ship Postgres FTS.

The pattern:

**Schema with a tsvector column**:

```sql
ALTER TABLE documents
  ADD COLUMN search_tsv tsvector
  GENERATED ALWAYS AS (
    setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(body, '')), 'B') ||
    setweight(to_tsvector('english', coalesce(tags, '')), 'C')
  ) STORED;

CREATE INDEX idx_documents_search ON documents USING GIN (search_tsv);
```

The `setweight` calls let you rank title higher than body. Weights rank 'A' > 'B' > 'C' > 'D'.

**Search query**:

```sql
SELECT
  id,
  title,
  ts_headline('english', body, query, 'StartSel=<mark>, StopSel=</mark>') AS snippet,
  ts_rank(search_tsv, query) AS rank
FROM documents,
  plainto_tsquery('english', $1) AS query
WHERE
  workspace_id = $2  -- TENANT SCOPING (CRITICAL)
  AND deleted_at IS NULL
  AND search_tsv @@ query
ORDER BY rank DESC, created_at DESC
LIMIT 50;
```

**Typo tolerance with trigrams**:

For typo tolerance, add `pg_trgm`:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_documents_title_trgm ON documents USING GIN (title gin_trgm_ops);
```

Then:

```sql
SELECT id, title, similarity(title, $1) AS sim
FROM documents
WHERE workspace_id = $2
  AND title % $1  -- trigram similarity match
ORDER BY sim DESC
LIMIT 10;
```

**Combining tsvector and trigram**:

A common pattern: run tsvector first; supplement with trigram matches that tsvector missed.

```sql
WITH tsv_results AS (
  SELECT id, title, ts_rank(...) AS rank
  FROM documents
  WHERE workspace_id = $2 AND search_tsv @@ plainto_tsquery($1)
  LIMIT 50
)
SELECT * FROM tsv_results
UNION ALL
SELECT id, title, similarity(title, $1) AS rank
FROM documents
WHERE workspace_id = $2 AND title % $1
  AND id NOT IN (SELECT id FROM tsv_results)
LIMIT 10;
```

Critical implementation rules:

  1. GENERATED ALWAYS columns auto-update on row writes — no triggers needed.
  2. GIN indexes are the right index type for tsvector.
  3. Always include workspace_id = $2 in WHERE clauses. Tenant scoping is non-negotiable.
  4. Use parameterized queries — never concatenate user input.
  5. plainto_tsquery not to_tsquery for user input — handles unsanitized text safely.

When Postgres FTS isn't enough:

  • You need <50ms search on >5M rows (Postgres FTS slows down at that scale)
  • You need rich faceting + filtering + typo tolerance combined
  • You need autocomplete at <50ms with prefix matching at scale
  • You're shipping a search-first product (e-commerce catalog, content site)

At those signals, migrate to Meilisearch / Typesense / Algolia.

Output:

  1. The schema migration with tsvector
  2. The search query with rank + headline + tenant scoping
  3. The typo-tolerance setup
  4. The performance benchmark (search time at current row count)

The single biggest win for indie SaaS: **Postgres FTS works to ~1M rows on cheap hardware.** Most products never need to migrate. The operational simplicity (no separate service, no indexer drift, no extra ops) is worth more than the marginal feature gap.

---

## 3. When You Outgrow Postgres: Meilisearch or Typesense

Beyond Postgres FTS scale, the modern OSS options are excellent. Both run as a single container; both have good typo tolerance and faceting; both have hosted options.

Help me migrate to Meilisearch (or Typesense).

The pattern:

Setup:

  • Run Meilisearch as a Docker container (single binary)
  • Or use Meilisearch Cloud
  • Configure an index per entity type (documents, users, etc.)
  • Set searchable attributes, filterable attributes, ranking rules

```ts
// Configure the index
await meili.index('documents').updateSettings({
  searchableAttributes: ['title', 'body', 'tags'],
  filterableAttributes: ['workspace_id', 'author_id', 'created_at', 'category'],
  sortableAttributes: ['created_at', 'updated_at'],
  rankingRules: [
    'words',
    'typo',
    'proximity',
    'attribute',
    'sort',
    'exactness',
  ],
})
```

Indexing on writes:

After every create / update / delete, sync to the index:

```ts
async function syncDocumentToSearch(documentId: string) {
  const doc = await db.documents.findById(documentId)
  if (!doc || doc.deleted_at) {
    // hard- or soft-deleted: remove from the index
    await meili.index('documents').deleteDocument(documentId)
  } else {
    await meili.index('documents').updateDocuments([{
      id: doc.id,
      workspace_id: doc.workspace_id,    // CRITICAL: tenant scope
      title: doc.title,
      body: doc.body,
      tags: doc.tags,
      author_id: doc.author_id,
      created_at: doc.created_at,
      updated_at: doc.updated_at,
    }])
  }
}
```

Run this:

  • Inline, as part of the write path (results are searchable immediately)
  • Or async via a background job (decoupled, but with a small indexing lag)

Search query (with mandatory tenant scope):

```ts
const results = await meili.index('documents').search(query, {
  filter: `workspace_id = "${workspaceId}"`,  // ALWAYS scoped
  limit: 20,
  attributesToHighlight: ['title', 'body'],
  attributesToCrop: ['body'],
  cropLength: 100,
})
```

Critical implementation rules:

  1. Tenant scoping is mandatory. Every search query MUST include `workspace_id` as a filter. Build a wrapper that enforces this at the API boundary — don't trust callers.
  2. Re-index on schema changes. Adding a new searchable field requires re-indexing existing data.
  3. Handle index lag. If a user creates a document and immediately searches, do they see it? Either index synchronously OR show a "just-created" UX state.
  4. Reconcile periodically. A weekly job that compares your DB to the search index catches drift.
  5. API key per environment. Meilisearch keys can be scoped to specific indexes / actions.

Migration from Postgres FTS:

  • Bulk-index existing data via batch (paginate; insert in batches of 1000)
  • Run dual-search (Postgres + Meilisearch) for a week to compare results
  • Cut over the search endpoint when confident
  • Decommission Postgres FTS columns after a month of clean operation
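For the dual-run week, a minimal agreement check per query is enough: compare the top-N result IDs from both backends and log the overlap. A sketch (the function name and inputs are illustrative; feed it the ID lists your two search paths return):

```typescript
// Agreement signal per query during the dual-run: Jaccard overlap of
// the two top-N ID lists, plus whether the #1 result matches.
function compareResults(pgIds: string[], meiliIds: string[]) {
  const a = new Set(pgIds)
  const b = new Set(meiliIds)
  const intersection = [...a].filter(id => b.has(id)).length
  const union = new Set([...a, ...b]).size
  return {
    jaccard: union === 0 ? 1 : intersection / union,
    topResultMatches: pgIds.length > 0 && pgIds[0] === meiliIds[0],
  }
}
```

Low Jaccard on popular queries is where the "ranking surprises" hide; investigate those before cutting over.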

Don't:

  • Use the master API key in client code (use a search-only key)
  • Skip the dual-run validation period (you''ll find ranking surprises)
  • Forget to handle deleted records (they should disappear from search)

Output:

  1. The index settings
  2. The sync function (create / update / delete handlers)
  3. The wrapped search function with mandatory tenant scoping
  4. The reconciliation job
  5. The dual-run validation plan

The biggest performance win moving off Postgres FTS: **search-as-you-type at <50ms.** Postgres FTS struggles below 100ms at scale; Meilisearch / Typesense / Algolia handle it natively. If autocomplete is part of your search UX, this matters.

---

## 4. Tenant Scoping: The Non-Negotiable

Search across tenants is a privacy disaster. Build the scope check into the search wrapper, not the route handler.

Design the tenant-scoping layer.

The pattern:

Wrap every search call in a function that enforces tenant scope:

```ts
async function searchDocuments(
  query: string,
  workspaceId: string,
  userId: string,
  options: { limit?: number; offset?: number } = {}
) {
  // Verify the user belongs to this workspace
  const member = await getWorkspaceMember(workspaceId, userId)
  if (!member) throw new Error('Not a member of this workspace')

  const results = await meili.index('documents').search(query, {
    filter: `workspace_id = "${workspaceId}"`,
    limit: options.limit ?? 20,
    offset: options.offset ?? 0,
  })

  return results
}

// Route handler:
app.get('/api/search', async (req, res) => {
  const results = await searchDocuments(
    req.query.q as string,
    req.workspaceId,
    req.user.id,
  )
  res.json(results)
})
```

Critical rules:

  1. NEVER expose raw search-index access to the frontend. Always go through your API. A frontend that talks directly to Meilisearch / Algolia bypasses your tenant scoping.
  2. Use scoped API keys (Meilisearch supports tenant tokens; Algolia has secured API keys). Even if a key leaks, scope limits damage.
  3. Validate workspace membership before searching. A user who is no longer a member shouldn't get results.
  4. Audit search queries for high-value workspaces (per Audit Logs; sample if needed).

Per-record permissions (RBAC layer):

Some users in a workspace have access to only some records. The search layer must respect this.

Two approaches:

Approach A: Filter at search time

  • Include viewer_ids array on each record
  • Filter: workspace_id = "X" AND viewer_ids IN ["userA"]
  • Works for small viewer sets

Approach B: Filter post-search

  • Search returns candidates
  • Application code filters to records the user can see
  • Slower but simpler; works when viewer sets are large or dynamic

Pick based on scale and perm model complexity.
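Both approaches fit in a few lines. In this sketch, `viewer_ids` is an assumed array attribute on each indexed record, and the IDs come from the server-side session, never from request params:

```typescript
// Approach A: assemble the filter server-side so the viewer check runs
// inside the search engine (Meilisearch filter syntax shown).
function buildScopedFilter(workspaceId: string, userId: string): string {
  return `workspace_id = "${workspaceId}" AND viewer_ids = "${userId}"`
}

// Approach B: over-fetch candidates from search, then keep only the
// records the user can see, up to the requested limit.
function postFilter<T extends { id: string }>(
  candidates: T[],
  canView: (record: T) => boolean,
  limit: number,
): T[] {
  return candidates.filter(canView).slice(0, limit)
}
```

With Approach B, over-fetch (e.g. 3-5x the page size) so heavy filtering doesn't leave a half-empty results page.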

Soft-delete handling:

Records that are soft-deleted (per Account Deletion, per File Uploads) shouldn't appear in search.

  • On soft-delete: remove from index immediately
  • On purge: redundant (already gone)
  • On undelete: re-index

Don't:

  • Trust the frontend to scope (it can be bypassed)
  • Use a single global API key for all tenants (one leak = total breach)
  • Index records without workspace_id (you'll forget; users will see other tenants' data)

Output:

  1. The scoped-search wrapper
  2. The tenant-token / scoped-key strategy
  3. The per-record permissions filter
  4. The audit-log integration
  5. The lint rule that fails CI if a search call doesn''t go through the wrapper

The single biggest privacy bug pattern: **an API endpoint that proxies search-index queries with a user-controlled filter.** A request with `?filter=workspace_id != "mine"` returns everyone's data. Always assemble filters server-side; never accept filter strings from the client.

---

## 5. Build Search-As-You-Type With Debouncing

Modern search UIs are interactive. Build them right.

Design the search-as-you-type UX.

The pattern:

Frontend:

```tsx
import { useState, useEffect } from 'react'

function SearchInput() {
  const [query, setQuery] = useState('')
  const [results, setResults] = useState([])
  const debouncedQuery = useDebounce(query, 200)  // 200ms (useDebounce hook assumed)

  useEffect(() => {
    if (debouncedQuery.length < 2) {
      setResults([])
      return
    }
    fetch(`/api/search?q=${encodeURIComponent(debouncedQuery)}`)
      .then(r => r.json())
      .then(data => setResults(data))
  }, [debouncedQuery])

  return (
    <div>
      <input
        value={query}
        onChange={e => setQuery(e.target.value)}
        placeholder="Search..."
      />
      <SearchResults results={results} query={debouncedQuery} />
    </div>
  )
}
```

Critical rules:

  1. Debounce 150-300ms. Less = wasted requests; more = laggy feel.
  2. Cancel in-flight requests (use AbortController) when a new query arrives.
  3. Show loading state so users know something's happening.
  4. Skeleton results during load — keeps the layout stable.
  5. Show "no results" with a useful suggestion (per next section).
  6. Highlight matched terms in results (Meilisearch / Algolia / Postgres ts_headline all support this).
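Rules 1 and 2 reduce to two small framework-free helpers — a debounce, and a sequence guard that drops stale responses (useful even with AbortController, since responses can still land out of order). A sketch; the `useDebounce` hook in the component above would wrap the same debounce idea in React state:

```typescript
// Debounce: collapse a burst of keystrokes into one trailing call.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: A) => {
    clearTimeout(timer)
    timer = setTimeout(() => fn(...args), ms)
  }
}

// Sequence guard: tag each request, apply only the latest response.
function makeLatestOnly() {
  let latest = 0
  return {
    next: () => ++latest,                       // call when issuing a request
    isCurrent: (seq: number) => seq === latest, // check before applying results
  }
}
```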

Performance budgets:

  • Search request → server: <200ms
  • Server → search backend: <50ms
  • Server response: <100ms
  • Total: <300ms is fast; <500ms is OK; >1s is broken UX

Caching:

  • Cache identical queries client-side for 30s (avoid rapid duplicate requests)
  • Server-side cache common queries (tags, popular searches)
  • Don't cache personalized results across users
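The 30-second client-side cache is a few lines. This sketch injects the clock so the expiry logic is testable; the TTL is the knob to tune:

```typescript
// A tiny TTL cache keyed by query string; default clock is Date.now.
function makeQueryCache<T>(ttlMs: number, now: () => number = Date.now) {
  const entries = new Map<string, { value: T; expires: number }>()
  return {
    get(key: string): T | undefined {
      const hit = entries.get(key)
      if (!hit) return undefined
      if (hit.expires < now()) {
        entries.delete(key)   // expired: evict and miss
        return undefined
      }
      return hit.value
    },
    set(key: string, value: T) {
      entries.set(key, { value, expires: now() + ttlMs })
    },
  }
}
```

Key on the raw query string per user session; because results are scoped per workspace, never share this cache across users.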

Mobile considerations:

  • Larger touch targets
  • Cancel-on-tap-outside
  • Hide keyboard on result tap
  • Voice search if it's relevant

Don't:

  • Search on every keystroke without debouncing (kills the backend)
  • Show results before 2 chars (mostly noise; bad UX)
  • Block the input while waiting for results (frustrating)

Output:

  1. The SearchInput component with debouncing
  2. The result-rendering component
  3. The performance budget targets
  4. The empty / loading / error states

The single biggest perceived-performance lever: **showing skeleton results immediately, then replacing with actual results.** Even if total time is the same, perceived latency drops because the user sees something happening.

---

## 6. Design the Empty State Carefully

When search returns no results, that''s a UX moment. Get it right.

Design the empty state.

The patterns:

Empty input (user hasn''t typed yet):

  • Show recent searches (per user, last 10)
  • Show suggested searches (popular in the workspace)
  • Show "search by title, content, tags, or author"

Typed but no results:

  • "No results for 'xyz'."
  • Suggest typo correction: "Did you mean 'abc'?" (Meilisearch / Algolia provide this)
  • Show 2-3 alternative searches the user might try
  • Show a "Browse all [type]" link

Permission-blocked results:

  • "We found N results, but you don't have access. Ask an admin for permissions."
  • Don''t reveal what those results were (privacy)

Filter-too-narrow:

  • "No results match all filters. Try removing one."
  • Show the filters that are active
  • One-click "clear all filters"

Error state:

  • "Search is temporarily unavailable. Please try again."
  • Log to error monitoring
  • Suggest a fallback (browse, contact support)

Don't:

  • Show a blank space when results are empty
  • Show "0 results" without context or alternatives
  • Hide the search input on empty results

Output:

  1. The empty-input state
  2. The no-results state with alternatives
  3. The error fallback
  4. The filter-too-narrow handling

The single biggest user-impact change: **showing "Did you mean..." when there's a typo correction available.** Users typo all the time; surfacing the correction is more useful than "no results."

---

## 7. Index in Near-Real-Time

Search results that are 10 minutes stale feel broken. Index quickly.

Design the indexing pipeline.

The pattern:

Inline indexing (simple, immediate):

After every create / update / delete, sync to search:

```ts
async function createDocument(input) {
  const doc = await db.documents.create(input)
  await syncDocumentToSearch(doc.id)
  return doc
}
```

Pro: results appear immediately. Con: a search-backend failure can break writes (if not handled gracefully).

Async indexing (decoupled, slight lag):

Enqueue an indexing job per change:

```ts
async function createDocument(input) {
  const doc = await db.documents.create(input)
  await queue.add('sync_to_search', { documentId: doc.id })
  return doc
}
```

Pro: writes don't depend on the search backend. Con: a 1-30s lag before results appear; users may search and not find their just-created record.

Hybrid (recommended):

  • Inline indexing for the user's own writes (immediately visible to them)
  • Async for bulk imports / background updates
  • Best of both

Bulk re-indexing (when needed):

Cases when you re-index everything:

  • Schema change (new searchable field)
  • Index settings change (new ranking rule)
  • Index corruption recovery

Pattern:

  • Create a new index version (documents_v2)
  • Bulk-index in batches (1000 records / batch)
  • Switch the alias atomically once complete (Meilisearch supports this)
  • Drop the old index

Reconciliation (catches drift):

A weekly job:

  • Sample 1% of records
  • Verify each is in the search index AND matches the DB version
  • Alert on mismatches
  • Often catches: deleted records that lingered, soft-deleted records that weren''t removed, old versions
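The comparison step of that job can be a pure function: given a sample of DB rows and the corresponding index entries, classify the drift. Record shapes here are illustrative:

```typescript
interface DbRecord { id: string; updated_at: string; deleted_at: string | null }
interface IndexedRecord { id: string; updated_at: string }

function findDrift(dbSample: DbRecord[], indexed: Map<string, IndexedRecord>) {
  const missing: string[] = []         // live in DB, absent from index
  const stale: string[] = []           // indexed, but older than the DB row
  const shouldBeDeleted: string[] = [] // soft-deleted, still indexed
  for (const row of dbSample) {
    const doc = indexed.get(row.id)
    if (row.deleted_at) {
      if (doc) shouldBeDeleted.push(row.id)
    } else if (!doc) {
      missing.push(row.id)
    } else if (doc.updated_at < row.updated_at) {
      stale.push(row.id)
    }
  }
  return { missing, stale, shouldBeDeleted }
}
```

Alert when any bucket is non-empty; `shouldBeDeleted` is the one that erodes trust fastest.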

Critical rules:

  1. Don't fail writes on search-backend errors. Log; queue retry; keep the write succeeding.
  2. Index workspace_id every time. Tenant scoping depends on it.
  3. Handle deletes immediately. Stale "result" links to deleted records erode trust.
  4. Audit massive re-indexes so you know they ran.
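Rule 1 in code: wrap the sync call so a search-backend failure is logged and queued for retry instead of failing the request. `enqueueRetry` and `logError` are assumed hooks, not a specific library:

```typescript
// The DB write already committed before this runs; indexing failures
// are absorbed, logged, and retried in the background.
async function safeSync(
  sync: () => Promise<void>,
  enqueueRetry: () => Promise<void>,
  logError: (err: unknown) => void,
): Promise<void> {
  try {
    await sync()
  } catch (err) {
    logError(err)         // never rethrow -- the user's write succeeded
    await enqueueRetry()  // a background job re-syncs later
  }
}
```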

Don't:

  • Run full re-index synchronously during business hours (locks resources)
  • Skip the reconciliation job (drift is real)
  • Trust the indexer to recover from crashes without explicit retry

Output:

  1. The inline-vs-async strategy
  2. The sync function for create / update / delete
  3. The bulk re-index job
  4. The reconciliation job
  5. The error-handling and retry policy

The single most common search bug: **records visible in the UI but missing from search.** Caused by indexer failures during create flows. The reconciliation job catches these; without it, users assume your search is broken.

---

## 8. Track Search Analytics

Search is the highest-signal user behavior data you have. Track it.

Design search analytics.

Metrics to track:

  • search.query_count — total queries per period
  • search.zero_results_rate — % of queries returning nothing (high rate = product gap or indexing issue)
  • search.click_through_rate — % of queries followed by clicking a result
  • search.median_time_to_click — how long after results appeared did they click
  • search.refinement_rate — % of queries followed by a more-specific query (signal of relevance failure)
  • search.popular_queries — top 100 per period
  • search.no_results_queries — top queries that returned nothing (highest-leverage product feedback)

Per-query event (logged):

```json
{
  "user_id": "...",
  "workspace_id": "...",
  "query": "...",
  "result_count": 12,
  "time_ms": 87,
  "result_clicked_id": "...",
  "result_position_clicked": 3,
  "session_id": "..."
}
```

Per PostHog Setup, capture this as a `search_executed` event.
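Once events are logged, the headline rates fall out of a simple aggregation. A sketch over the event shape above — only two fields are needed for these two rates:

```typescript
interface SearchEvent {
  result_count: number
  result_clicked_id: string | null
}

function searchQualityMetrics(events: SearchEvent[]) {
  if (events.length === 0) return { zeroResultsRate: 0, clickThroughRate: 0 }
  const zero = events.filter(e => e.result_count === 0).length
  const clicked = events.filter(e => e.result_clicked_id !== null).length
  return {
    zeroResultsRate: zero / events.length,     // high => product gap or indexing bug
    clickThroughRate: clicked / events.length, // low => ranking problem
  }
}
```

In practice your analytics tool computes these for you; the sketch just pins down the definitions so dashboards agree.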

Customer-facing surfaces:

For workspace admins:

  • Recent searches dashboard (sample, not all)
  • Popular content (often paired with search)
  • Zero-results report — what users searched for and didn't find

Used to:

  • Identify content / feature gaps
  • Tune ranking
  • Spot search-intent shifts

Don't:

  • Log raw query strings in long-term storage if they could contain PII (sample / sanitize)
  • Forget to scope per workspace (cross-tenant leakage in dashboards)
  • Build custom analytics — pipe to PostHog / Amplitude / Mixpanel

Output:

  1. The event schema for search_executed
  2. The analytics dashboard (zero-results, popular, refinement rate)
  3. The customer-facing admin reports
  4. The privacy considerations

The single highest-leverage signal in product: **the top 10 zero-results queries.** They tell you exactly what users want that you don't have. Whether to add features, content, or fix indexing is the next decision; the queries surface the priority.

---

## 9. Tune Ranking Over Time

Out-of-the-box ranking is rarely right for your domain. Tune.

Tune ranking.

The default ranking stack (most search tools):

  1. Word match — does the result contain the words?
  2. Typo tolerance — fuzzy match if no exact match
  3. Proximity — words closer together rank higher
  4. Attribute weight — title > body > tags
  5. Recency / custom sort

Domain-specific tuning:

Common signals to add:

  • Usage popularity: more-viewed results rank higher
  • Recency boost: newer results rank higher (with decay function)
  • Author boost: results by current user rank higher (their own stuff)
  • Watched-by-user boost: results in spaces they watch
  • Quality signals: completion rate, ratings, etc.

Combine in a weighted score:

```
score = base_relevance * 1.0
      + log(view_count + 1) * 0.3
      + recency_decay(created_at) * 0.2
      + (author == current_user ? 0.5 : 0)
```

Most search backends let you store custom ranking attributes and reference them in ranking rules.
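The formula above can be sketched with an exponential recency decay. The weights and the 30-day half-life are starting points to A/B test, not recommendations:

```typescript
// Exponential decay: 1.0 for a brand-new record, 0.5 at the half-life.
function recencyDecay(createdAt: Date, now: Date, halfLifeDays = 30): number {
  const ageDays = (now.getTime() - createdAt.getTime()) / 86_400_000
  return Math.pow(0.5, ageDays / halfLifeDays)
}

function score(opts: {
  baseRelevance: number  // relevance score from the search backend
  viewCount: number
  createdAt: Date
  now: Date
  isAuthor: boolean      // result authored by the current user
}): number {
  return (
    opts.baseRelevance * 1.0 +
    Math.log(opts.viewCount + 1) * 0.3 +
    recencyDecay(opts.createdAt, opts.now) * 0.2 +
    (opts.isAuthor ? 0.5 : 0)
  )
}
```

The log on view count matters: it keeps a 100K-view record from drowning out everything else while still rewarding popularity.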

A/B testing ranking changes:

  • Pick a metric (CTR, time-to-click, refinement rate)
  • Roll out to 10% of users via feature flags (per Feature Flag Providers)
  • Compare metrics for 1-2 weeks
  • Promote or roll back

Personalization:

User-specific ranking (each user's results are different) is powerful but complex:

  • Personal popularity (results this user has clicked before)
  • Recent activity boost (results they''ve viewed in last 24h)
  • Collaborative filtering (results others-like-them clicked)

Often not worth the complexity for indie SaaS; mid-market+ may need it.

Don't:

  • Tune ranking without measurement (changes feel right but may regress metrics)
  • Promote "engagement" without checking task completion (clickbait optimization)
  • Overweight recency at the expense of relevance

Output:

  1. The current ranking rules
  2. The proposed signals to add
  3. The weighted formula
  4. The A/B testing plan
  5. The metric to optimize

The biggest ranking lesson: **tune the ranking; don't change the backend.** Most "our search is bad" complaints are ranking issues, not backend issues. Tune Postgres FTS or Meilisearch ranking rules before you migrate to Algolia.

---

## 10. Quarterly Review

Search rots. Quarterly review keeps it sharp.

The quarterly review.

Performance:

  • p50 / p95 / p99 query latency
  • Index size and growth rate
  • Indexer lag (DB vs index reconciliation findings)
  • Re-index frequency / duration

Quality metrics:

  • Zero-results rate trend
  • CTR trend
  • Refinement rate trend
  • Time-to-click trend

Top zero-results queries:

  • What did users want that we didn't deliver?
  • Indexing gap or product gap?
  • Action: add to product roadmap or fix indexing

Top searches:

  • Most-popular queries
  • Are top results good for those queries?
  • Spot ranking issues

Drift / privacy review:

  • Reconciliation job findings
  • Any cross-tenant leakage detected?
  • Any raw API keys in client code?

Output:

  • Performance snapshot
  • 3 ranking tweaks to ship
  • 1 product gap surfaced from zero-results
  • 1 backend migration trigger if approaching scale limits

---

## What "Done" Looks Like

A working search system in 2026 has:

- Postgres FTS or Meilisearch / Typesense as the backend (not Elasticsearch unless required)
- Tenant scoping enforced at the wrapper layer (lint-rule-enforced)
- Near-real-time indexing on writes (inline or async + reconciliation)
- Search-as-you-type UI with debouncing and skeleton states
- Useful empty states (typo correction, recent searches, alternatives)
- Highlighted matched terms in results
- Search analytics piped to your product-analytics tool
- Tuned ranking for your domain (with A/B testing where possible)
- Quarterly review baked into the team rhythm
- A documented migration trigger (when to move from Postgres → Meilisearch → Algolia)

The hidden cost in search isn't the backend — it's **the ranking quality**. A team that picks Algolia and doesn't tune ranking gets the same complaints as a team on Postgres FTS that doesn't tune. Backend choice matters at the limit; ranking matters every day. Invest in ranking tuning; the backend is just the database.

---

## See Also

- [Multi-Tenant Data Isolation](multi-tenancy-chat.md) — search must respect workspace boundaries
- [Roles & Permissions (RBAC)](roles-permissions-chat.md) — per-record permissions filter results
- [Audit Logs](audit-logs-chat.md) — sensitive searches logged
- [File Uploads](file-uploads-chat.md) — uploaded files often need to be searchable
- [CSV Import Flows](csv-import-chat.md) — imported data needs indexing
- [PostHog Setup](posthog-setup-chat.md) — search analytics piped here
- [Activation Funnel](activation-funnel-chat.md) — search is often an activation milestone
- [Search Providers](https://www.vibereference.com/backend-and-data/search-providers) — backend comparison: Postgres FTS / Meilisearch / Typesense / Algolia / Elasticsearch
- [Database Providers](https://www.vibereference.com/backend-and-data/database-providers) — Postgres FTS lives here
- [Feature Flag Providers](https://www.vibereference.com/devops-and-tools/feature-flag-providers) — A/B test ranking changes
- [Vector Databases](https://www.vibereference.com/backend-and-data/vector-databases) — semantic search (often hybrid with keyword)

[⬅️ Growth Overview](README.md)