
# Search Autocomplete & Typeahead: The 200ms-Or-Don't-Bother UX Surface

⬅️ Day 6: Grow Overview

If your SaaS has any kind of search — global product search, command palette, lookup pickers, autocomplete on dropdowns — the implementation determines whether users adopt or avoid the feature. Most indie SaaS ships search that's "okay" and never iterates: 500ms latency; missing keyboard navigation; broken with screen readers; no debounce so the API gets hammered. The fix isn't more powerful search infrastructure — it's a deliberate frontend UX layer that makes search feel instant. Done well, autocomplete becomes the feature people use 50 times a day. Done badly, it's a frustration users avoid.

A working autocomplete answers: how to debounce (don't query on every keystroke), how to handle latency (loading states without flicker), how to handle the keyboard (arrow keys + Enter, not just mouse), how to render results (highlight matches; truncate sensibly), how to handle no-results (helpful message, not silence), how to handle errors (network drop), how to make it accessible (ARIA combobox), and how to optimize for the 200ms perception threshold.

This guide is the implementation playbook for autocomplete UX. Companion to Search, API Pagination Patterns, HTTP Retry & Backoff, Form Validation UX, and Performance Optimization.

## Why Autocomplete Matters

Get the failure modes clear first.

Help me understand autocomplete failure modes.

The 8 categories:

**1. No debounce**
Every keystroke fires an API request. One user typing "search query" = 12 requests in 2 seconds. Multiply across all users = backend hammered.

**2. Slow first-character query**
First keystroke triggers query for "s" — returns 100K results; user sees noise.

**3. Latency without loading state**
Network slow; UI blank for 2s; user thinks broken.

**4. Race conditions**
User types "ap" then "apple"; "ap" response arrives AFTER "apple" — overwrites with stale.

**5. No keyboard navigation**
User can mouse to results; arrow keys do nothing. Power-users frustrated.

**6. Generic "no results"**
"No results" without suggestions; user gives up.

**7. Inaccessible (no ARIA)**
Screen reader users hear: "edit text" / "list of items"; can't tell what's happening.

**8. Mobile broken**
Soft keyboard covers results; popovers don't fit.
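Failure mode 8 usually comes down to sizing: the dropdown must fit the viewport that remains once the soft keyboard opens. A sketch using the browser's `visualViewport` API; the `search-results` element id is hypothetical:

```typescript
// Pure helper: vertical space left for the dropdown given the visible
// viewport, which shrinks when the soft keyboard opens. 8px breathing room.
function computeMaxDropdownHeight(
  vvHeight: number,
  vvOffsetTop: number,
  dropdownTop: number
): number {
  return Math.max(vvHeight + vvOffsetTop - dropdownTop - 8, 0);
}

// Wire-up sketch. The "search-results" element id is hypothetical.
function fitDropdownToViewport(): void {
  const dropdown = document.getElementById('search-results');
  const vv = window.visualViewport;
  if (!dropdown || !vv) return;
  const top = dropdown.getBoundingClientRect().top;
  dropdown.style.maxHeight = `${computeMaxDropdownHeight(vv.height, vv.offsetTop, top)}px`;
  dropdown.style.overflowY = 'auto';
}

if (typeof window !== 'undefined') {
  // Soft keyboard open/close fires visualViewport resize events.
  window.visualViewport?.addEventListener('resize', fitDropdownToViewport);
}
```

Capping `max-height` to the visible viewport (rather than `100vh`, which ignores the keyboard) keeps results scrollable instead of hidden.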

For my product:
- Where autocomplete appears
- Worst current UX
- Volume / latency budget

Output:
1. Top failure modes
2. Risk to engagement
3. Implementation priority

The biggest unforced error: shipping autocomplete that's not fast enough. The 200ms perception threshold is real: above 200ms total round-trip from keystroke to rendered results, users feel the lag; below it, search feels instant.

## The Latency Budget

Help me think about latency.

The 200ms budget:

User types character → results appear ~200ms later = "instant"

Breakdown of where time goes:

- Debounce (you control): 80-150ms
- Network round-trip (user network): 50-200ms (varies)
- Server-side query: 5-100ms
- Render time: 5-50ms

Total: 140-500ms typical

Hitting <200ms requires:
- Aggressive debouncing (100ms typical)
- Fast server (cached or in-memory)
- Local optimization (rendering)

Options when full backend query is slow:

**1. Frontend-cache previous queries**
Same query → same results; serve from cache instantly.

**2. Edge / CDN cache**
Common queries cached at edge.

**3. Pre-fetched data in browser**
For small data sets (<5K items), ship to browser; query locally.

**4. Two-tier search**
- Local cache shows immediately
- Backend confirms / extends in background
- Update results when backend response comes (with care to not flicker)

**5. Optimize search engine**
- Algolia / Typesense / Meilisearch designed for sub-50ms
- Postgres full-text-search with proper indexes can hit 20ms
- ElasticSearch / OpenSearch with proper shard config
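Option 1 is often a few lines. A sketch of a small bounded in-memory cache keyed by normalized query; all names are illustrative, not from a specific library:

```typescript
// Tiny bounded cache for search responses, keyed by normalized query.
const MAX_ENTRIES = 100;
const cache = new Map<string, unknown[]>();

function normalize(query: string): string {
  return query.trim().toLowerCase();
}

function getCached(query: string): unknown[] | undefined {
  return cache.get(normalize(query));
}

function setCached(query: string, results: unknown[]): void {
  const key = normalize(query);
  if (cache.has(key)) cache.delete(key); // re-insert to refresh recency
  cache.set(key, results);
  if (cache.size > MAX_ENTRIES) {
    // Map iterates in insertion order, so the first key is the oldest.
    cache.delete(cache.keys().next().value as string);
  }
}

async function cachedSearch(query: string): Promise<unknown[]> {
  const hit = getCached(query);
  if (hit) return hit; // instant: no network round-trip
  const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  const data = (await res.json()) as unknown[];
  setCached(query, data);
  return data;
}
```

Repeat queries (backspacing, re-typing) hit the cache and render with zero network latency; the size cap keeps a long session from growing memory unbounded.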

For my product:
- Current latency p50 / p95
- Target

Output:
1. Latency budget allocation
2. Fast-path strategy
3. Backend infrastructure

The single most-impactful infrastructure decision: Algolia / Typesense / Meilisearch for search. Sub-50ms server-side; predictable. DIY Postgres FTS works at small scale; Algolia-class scales to millions of records. See VibeReference: Search Providers.

## The Debounce Pattern

Help me debounce correctly.

The classic debounce:

```tsx
import { useState } from 'react';
import { useDebouncedCallback } from 'use-debounce';

function SearchInput() {
  const [results, setResults] = useState([]);
  const [query, setQuery] = useState('');

  const search = useDebouncedCallback(async (q: string) => {
    if (q.length < 2) {
      setResults([]);
      return;
    }
    const res = await fetch(`/api/search?q=${encodeURIComponent(q)}`);
    const data = await res.json();
    setResults(data);
  }, 100);

  return (
    <input
      value={query}
      onChange={(e) => {
        setQuery(e.target.value);
        search(e.target.value);
      }}
    />
  );
}
```

The debounce timing:

  • 50ms: aggressive; many requests; for ultra-fast backends
  • 100-150ms: balanced; recommended default
  • 200-300ms: conservative; saves backend; user feels lag
  • 500ms+: feels sluggish

100ms is the sweet spot for most apps.

Length-gated:

Don't query on 1-character input. Min 2-3 characters typical.

```typescript
if (q.length < 2) {
  setResults([]); // Or show recent / popular
  return;
}
```

Exception: command-palette style ("Cmd+K") — 1 char OK because results are commands not data.

Race condition handling:

User types "ap" then quickly "apple". "ap" request might arrive AFTER "apple". Without handling: stale "ap" results overwrite "apple" results.

```typescript
const requestIdRef = useRef(0);

const search = useDebouncedCallback(async (q: string) => {
  const myRequestId = ++requestIdRef.current;

  const res = await fetch(`/api/search?q=${encodeURIComponent(q)}`);
  const data = await res.json();

  // Only update if this is still the latest request
  if (myRequestId === requestIdRef.current) {
    setResults(data);
  }
}, 100);
```

Or use AbortController:

```typescript
const abortControllerRef = useRef<AbortController>();

const search = useDebouncedCallback(async (q: string) => {
  abortControllerRef.current?.abort();
  abortControllerRef.current = new AbortController();

  try {
    const res = await fetch(`/api/search?q=${encodeURIComponent(q)}`, {
      signal: abortControllerRef.current.signal,
    });
    const data = await res.json();
    setResults(data);
  } catch (e) {
    if ((e as Error).name === 'AbortError') return; // Old request
    throw e;
  }
}, 100);
```

For my code:

  • Library availability
  • Backend latency

Output:

  1. Debounce setup
  2. Race-condition guard
  3. Min-length rules

The bug that surfaces in production: **race conditions without guards**. Looks fine in dev (fast network); breaks in prod where 5% of users are on slow networks. Always guard.

## Loading States Without Flicker

Help me handle loading.

The principle: show something is happening, but don't FLICKER (loading → results → loading → results) on every keystroke.

The states:

- empty (no query)
- loading (query in flight; previous results may show)
- results (data available)
- empty-results (query but no matches)
- error (network / server issue)
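These five states map cleanly onto a TypeScript discriminated union, which keeps impossible combinations (say, results and an error at once) unrepresentable. A sketch with illustrative names:

```typescript
// One "status" field discriminates; TypeScript narrows the rest per branch.
type SearchState<T> =
  | { status: 'empty' }                   // no query typed yet
  | { status: 'loading'; previous: T[] }  // keep prior results visible
  | { status: 'results'; items: T[] }
  | { status: 'empty-results'; query: string }
  | { status: 'error'; message: string };

// Exhaustive switch: adding a sixth state becomes a compile error here.
function render<T>(state: SearchState<T>): string {
  switch (state.status) {
    case 'empty': return '';
    case 'loading': return `spinner over ${state.previous.length} stale items`;
    case 'results': return `${state.items.length} items`;
    case 'empty-results': return `No results for "${state.query}"`;
    case 'error': return `Error: ${state.message}`;
  }
}
```

Note the `loading` state carries the previous results explicitly, which is what makes the no-flicker rendering below natural.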

The trick: keep showing previous results while loading; show subtle indicator.

```tsx
function SearchDropdown({ results, isLoading, query }) {
  if (!query) return null; // Empty

  return (
    <div className="dropdown">
      {isLoading && (
        <div className="loading-indicator">
          <Spinner /> Searching...
        </div>
      )}

      {results.length > 0 && (
        <ul>
          {results.map(r => <ResultItem key={r.id} item={r} />)}
        </ul>
      )}

      {!isLoading && results.length === 0 && (
        <div className="empty">
          No results for "{query}".{' '}
          <a href={`/search?q=${encodeURIComponent(query)}`}>Try advanced search →</a>
        </div>
      )}
    </div>
  );
}
```

The flicker fix:

Naive: if (isLoading) return <Loading/>; return <Results/> → flickers. Better: keep previous results visible + small spinner overlay.

The "stale" indicator:

When results are shown but stale (newer query in flight), de-emphasize:

```css
.results.stale {
  opacity: 0.7;
}
```

User sees previous results muted; new ones replace smoothly.

Skeleton vs spinner:

  • First load: skeleton (blank shapes; signals structure)
  • Re-search: spinner (subtle; preserves last results)

For my UI: [audit]

Output:

  1. State machine
  2. UI per state
  3. Anti-flicker

The single most-impactful UX detail: **don't clear previous results during reload**. Users keep their context; the lag is invisible if you keep the previous view + subtle loading hint.

## Keyboard Navigation: The Accessibility-Critical Layer

Help me wire keyboard nav.

The required keys:

  • ↓ / ↑: navigate results
  • Enter: select highlighted result
  • Esc: close dropdown
  • Tab: close dropdown (let user tab away)
  • Cmd/Ctrl+K (or /): open search globally

Implementation:

```typescript
function useKeyboardNav(results: Item[], onSelect: (item: Item) => void) {
  const [highlighted, setHighlighted] = useState(0);

  useEffect(() => {
    function handleKey(e: KeyboardEvent) {
      switch (e.key) {
        case 'ArrowDown':
          e.preventDefault();
          setHighlighted(i => Math.min(i + 1, results.length - 1));
          break;
        case 'ArrowUp':
          e.preventDefault();
          setHighlighted(i => Math.max(i - 1, 0));
          break;
        case 'Enter':
          e.preventDefault();
          if (results[highlighted]) onSelect(results[highlighted]); // guard empty list
          break;
        case 'Escape':
          e.preventDefault();
          // Close dropdown
          break;
      }
    }

    window.addEventListener('keydown', handleKey);
    return () => window.removeEventListener('keydown', handleKey);
  }, [results, highlighted, onSelect]);

  return highlighted;
}
```
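The global Cmd/Ctrl+K opener is a separate, window-level listener. A sketch where `openSearch` is a hypothetical callback that focuses your search input:

```typescript
// Pure predicate so the matching logic is testable outside the DOM.
function isSearchShortcut(key: string, metaKey: boolean, ctrlKey: boolean): boolean {
  return (metaKey || ctrlKey) && key.toLowerCase() === 'k';
}

// `openSearch` is a hypothetical callback that focuses the search input.
// Returns an unsubscribe function, handy as a useEffect cleanup.
function registerSearchShortcut(openSearch: () => void): () => void {
  const handleKey = (e: KeyboardEvent) => {
    if (isSearchShortcut(e.key, e.metaKey, e.ctrlKey)) {
      e.preventDefault(); // suppress the browser's own Ctrl+K behavior
      openSearch();
    }
  };
  window.addEventListener('keydown', handleKey);
  return () => window.removeEventListener('keydown', handleKey);
}
```

Checking both `metaKey` and `ctrlKey` covers macOS and Windows/Linux with one handler.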

Reset highlighted on new results:

```typescript
useEffect(() => {
  setHighlighted(0); // Reset to first item on new results
}, [results]);
```

Visual highlight:

```tsx
<li className={index === highlighted ? 'highlighted' : ''}>
  {item.name}
</li>
```

```css
.highlighted {
  background: #e8f0fe;
  outline: 2px solid #4285f4;
}
```

Scroll-into-view:

Long lists: highlighted item should scroll into view.

```typescript
useEffect(() => {
  const element = document.querySelector(`[data-index="${highlighted}"]`);
  element?.scrollIntoView({ block: 'nearest' });
}, [highlighted]);
```

For my UI: [keyboard support]

Output:

  1. Key handler
  2. Reset logic
  3. Scroll-into-view
  4. Visual feedback

The accessibility win: **arrow-keys and enter just work**. Power users (and screen-reader users) navigate by keyboard. Mouse-only autocomplete is broken for them.

## Highlighting Matched Substrings

Help me highlight matches.

The pattern: bold the matched part of the result.

User searches "app" → "Apple iPhone" renders as "**App**le iPhone", with the matched prefix emphasized.

```tsx
function highlightMatch(text: string, query: string): JSX.Element {
  if (!query) return <>{text}</>;

  const regex = new RegExp(`(${escapeRegex(query)})`, 'gi');
  const parts = text.split(regex);

  return (
    <>
      {parts.map((part, i) =>
        // split() with a capturing group puts matches at odd indices.
        // (Re-testing each part with a /g/ regex is stateful and unreliable.)
        i % 2 === 1
          ? <mark key={i}>{part}</mark>
          : <span key={i}>{part}</span>
      )}
    </>
  );
}

function escapeRegex(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
```

CSS:

```css
mark {
  background: yellow;
  font-weight: bold;
  padding: 0 2px;
}
```

Multi-word queries:

Highlight each word:

```typescript
const words = query.split(/\s+/).filter(Boolean);
const regex = new RegExp(`(${words.map(escapeRegex).join('|')})`, 'gi');
// ... same split-and-map logic as above
```

Fuzzy matching:

For approximate matches, libraries like fuse.js return match positions:

```typescript
import Fuse from 'fuse.js';

const fuse = new Fuse(items, { includeMatches: true, threshold: 0.4 });
const results = fuse.search(query);

results.forEach(r => {
  // r.matches contains the positions of matched characters;
  // highlight those specific ranges rather than re-running a regex
});
```
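Turning those match positions into markup is the fiddly part. A sketch that converts text plus Fuse-style inclusive `[start, end]` index pairs into plain/highlighted segments (names are illustrative):

```typescript
type Segment = { text: string; highlighted: boolean };

// Fuse.js reports matches as inclusive [start, end] index pairs into the text.
// This walks the pairs (assumed sorted and non-overlapping) and emits segments.
function segmentsFromIndices(text: string, indices: [number, number][]): Segment[] {
  const segments: Segment[] = [];
  let cursor = 0;
  for (const [start, end] of indices) {
    if (start > cursor) {
      segments.push({ text: text.slice(cursor, start), highlighted: false });
    }
    segments.push({ text: text.slice(start, end + 1), highlighted: true });
    cursor = end + 1;
  }
  if (cursor < text.length) {
    segments.push({ text: text.slice(cursor), highlighted: false });
  }
  return segments;
}

// Render step: segments.map(s => s.highlighted ? <mark>{s.text}</mark> : s.text)
```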

For my UI: [matching strategy]

Output:

  1. Highlight function
  2. Multi-word handling
  3. Fuzzy support if needed

The polish detail: **highlighting matched substrings**. Users instantly see WHY something matched. Without: "did this match because of the title or description?" With: instantly clear.

## ARIA: Make Screen Readers Work

Help me make autocomplete accessible.

The WAI-ARIA combobox pattern (ARIA 1.2 puts role="combobox" on the input itself; older 1.1 markup wrapped it in a container):

```html
<input
  type="text"
  role="combobox"
  aria-expanded="true"
  aria-haspopup="listbox"
  aria-autocomplete="list"
  aria-controls="results"
  aria-activedescendant="result-0"
/>

<ul id="results" role="listbox">
  <li id="result-0" role="option" aria-selected="true">Apple</li>
  <li id="result-1" role="option">Apricot</li>
</ul>
```

Key attributes:

  • role="combobox" on the input (ARIA 1.2; ARIA 1.1 put it on a wrapper with aria-owns)
  • aria-expanded="true" when the dropdown is shown
  • aria-haspopup="listbox"
  • aria-autocomplete="list" (results are a list of suggestions, not inline completion)
  • aria-controls="results" linking the input to the listbox
  • role="listbox" on the dropdown
  • role="option" on each item
  • aria-selected="true" on the highlighted option
  • aria-activedescendant="result-0" to indicate which option is "active" (highlighted but not yet chosen) while DOM focus stays on the input

Live region announcements:

Tell screen readers about state changes:

```tsx
<div role="status" aria-live="polite" className="sr-only">
  {results.length} results found.
</div>
```

When new results arrive: screen reader announces "5 results found."

Visually hidden but readable:

```css
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```

Test with screen reader:

  • macOS: VoiceOver (Cmd+F5)
  • Windows: NVDA (free download)
  • Chrome / ChromeOS: ChromeVox (extension on desktop Chrome; built into ChromeOS)

For my code: [audit]

Output:

  1. ARIA additions
  2. Live region
  3. Test plan

The discipline: **use a tested combobox library if possible**. Headless UI, Radix UI, Reach UI all ship combobox primitives with ARIA correct. Don't reinvent — accessibility nuances are easy to miss.

## Library Choices for Autocomplete

Help me pick a library.

The 2026 landscape:

Headless UI primitives (build your own UI):

  • Radix UI Combobox — modern; well-supported; primitive
  • Headless UI Combobox (Tailwind Labs) — used with Tailwind
  • React Aria (Adobe) — accessibility-first; primitive
  • Downshift — long-standing; very flexible

Full-component libraries:

  • react-select — most-popular; many features; can be heavy
  • cmdk — command-palette focused; modern
  • MUI Autocomplete — Material UI ecosystem
  • Mantine Autocomplete — Mantine ecosystem

Specialty:

  • Algolia InstantSearch — when using Algolia backend; pre-built widgets
  • Typesense React — for Typesense
  • Meilisearch InstantMeiliSearch — for Meilisearch

Search backends (paired with frontend):

  • Algolia — fastest; commercial
  • Typesense — OSS; modern
  • Meilisearch — OSS; modern
  • Postgres FTS — DIY; works at small scale
  • Elasticsearch / OpenSearch — enterprise

For my stack: [pick]

Output:

  1. Frontend library
  2. Backend pairing
  3. Tradeoffs

The 2026 default for most: **Radix UI Combobox + Algolia/Typesense backend**. Headless primitive with Algolia / Typesense for search-as-you-type. Saves weeks; ships fast.

## Common Autocomplete Mistakes

Help me avoid mistakes.

The 10 mistakes:

1. **No debounce.** Server hammered; rate-limited.

2. **Querying on the first character.** Returns too much; users see noise.

3. **No race-condition guard.** Stale results overwrite fresh ones.

4. **Flickering loading state.** Annoying; feels broken.

5. **No keyboard navigation.** Mouse-only; power users hate it.

6. **No ARIA combobox.** Screen readers locked out.

7. **Generic "no results".** Users give up; offer a fallback.

8. **Exact-match-only highlighting (no fuzzy).** Misses partial / typo matches.

9. **Broken on mobile.** Soft keyboard covers results; popover overflows.

10. **No fallback when the backend fails.** Network error = blank dropdown.
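Mistake 10 is cheap to avoid: on a failed request, fall back to the last good results instead of a blank dropdown. A sketch with illustrative names; the endpoint is hypothetical:

```typescript
type SearchOutcome = { items: unknown[]; degraded: boolean };

// Keep the last successful results so a network blip never blanks the UI.
let lastGood: unknown[] = [];

async function searchWithFallback(query: string): Promise<SearchOutcome> {
  try {
    const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    lastGood = (await res.json()) as unknown[];
    return { items: lastGood, degraded: false };
  } catch {
    // degraded lets the UI show "showing previous results (retry?)"
    return { items: lastGood, degraded: true };
  }
}
```

The `degraded` flag drives the error state in the dropdown without destroying the user's context.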

For my code: [risks]

Output:

  1. Top 3 risks
  2. Mitigations
  3. Tests

The single biggest win for autocomplete UX: **<200ms perceived latency**. Hit that bar and the feature becomes invisible-good. Miss and it stays in the "ehh, search bar" category.

## What Done Looks Like

A working autocomplete delivers:
- 100ms debounce; min-length 2 chars
- Race-condition guards (request-id or AbortController)
- <200ms p50 perceived latency
- Loading without flicker (preserve previous results)
- Keyboard navigation (↑↓ Enter Esc)
- ARIA combobox correctly wired
- Match highlighting (bold the matched substring)
- Helpful "no results" message with fallback
- Mobile-friendly (popover fits; soft-keyboard handled)
- Error state for network failures
- Backend optimized (Algolia / Typesense / Meilisearch / indexed Postgres)

The proof you got it right: a power user doing keyboard-only search lands on the right result in 3 seconds; a screen-reader user gets announcements as results arrive; a slow-network user sees previous results during reload (not blank).

## See Also

- [Search](search-chat.md) — broader search infrastructure
- [API Pagination Patterns](api-pagination-patterns-chat.md) — paginating large result sets
- [HTTP Retry & Backoff](http-retry-backoff-chat.md) — retry logic
- [Form Validation UX](form-validation-ux-chat.md) — companion frontend UX
- [Performance Optimization](performance-optimization-chat.md) — latency optimization
- [Database Indexing Strategy](database-indexing-strategy-chat.md) — Postgres FTS indexes
- [VibeReference: Search Providers](https://vibereference.dev/backend-and-data/search-providers) — Algolia / Typesense / Meilisearch
- [VibeReference: Accessibility](https://vibereference.dev/product-and-design/accessibility) — broader a11y context
- [VibeReference: React](https://vibereference.dev/frontend/react) — React patterns
- [VibeReference: Shadcn](https://vibereference.dev/frontend/shadcn) — pre-built combobox via Radix