Internationalization (i18n) for Indie SaaS: Ship to More Markets Without Drowning in Translation Debt
i18n for Your New SaaS
Goal: Ship internationalization deliberately, in the right order, to the right languages — without poisoning the codebase with hardcoded strings, fragmenting your translation pipeline across 6 tools, or shipping LLM-translated copy that reads as obviously machine-generated to native speakers. Open new markets when the data justifies it, not before.
Process: Follow this chat pattern with your AI coding tool, such as Claude or v0.app. Pay attention to the notes in [brackets] and replace the bracketed text with your own content.
Timeframe: i18n architecture decision in 1 day. First language shipped in 1-2 weeks. Quality review and locale-specific cohort tracking by week 4. Subsequent languages added in 2-3 days each once the system is in place.
Why Most Founder i18n Attempts Go Sideways
Three failure modes hit indie founders the same way:
- The "we'll translate later" plan turns into a 6-month rewrite. The founder hardcodes English strings throughout the product on day 1. By month 12, "internationalization" is a multi-week refactor across 800 components. The translation work itself is small; the extraction and key-management work is what takes 6 weeks.
- The "AI will just translate it" plan ships obvious tells. Founder runs the entire product through GPT-4 or Claude, ships German on day 2. Native speakers spot the AI-isms within 5 minutes — over-formal register, awkward word order, brand-tone misses, missing local conventions ("Sehr geehrter Kunde" formality where casual is normal). Conversion in the new market is poor; the team blames the translation, not the strategy.
- Translating 12 languages on day 1. Founder reads "global from day 1" advice, ships German + French + Spanish + Italian + Portuguese + Japanese + Korean + Chinese + Arabic + Hindi + Polish + Dutch on launch. None gets meaningful traction; the translation maintenance burden is now permanent across 12 locales the team has no signal to invest in.
The version that works is structured: build the i18n infrastructure correctly from week 1 (so adding languages is cheap), pick languages based on real customer-data signal (not vanity), use AI for the first draft and human review for the brand-critical surfaces, and measure per-locale conversion to decide which languages earn ongoing investment.
This guide assumes you have already done Customer Discovery Interviews (you should know which markets your customers actually come from), have shipped a Brand Voice document (translation must respect it), and have PostHog Setup (you'll segment conversion by locale).
When You Should and Shouldn't Internationalize
Internationalize when:
- 15%+ of your traffic comes from a single non-English market (signal that the audience is there)
- Multiple support requests in a non-English language (or in broken English from clearly non-native speakers)
- A specific paying customer or partner asks for it
- You're targeting a market where local-language UX is table stakes (Japan, Germany, France, Korea, Brazil)
- Your category genuinely doesn't translate ("vibe coding" needs no translation; "tax management" needs every locale)
Skip i18n when:
- Your traffic comes from English-proficient audiences in non-English-speaking countries (Nordic countries, Netherlands, India tech) where English is the working language
- You're pre-PMF; activate the English market first
- Your primary buyer is technical (developers, infra teams) and works in English regardless of country
- You don't have native-speaker access for at least review (a wrong-tone translation is worse than English)
The biggest mistake: translating because you "should" without data. The second-biggest: not translating because it "feels premature" when the data clearly supports it.
1. Wire Up the i18n Infrastructure on Day 1 (Even If You Only Ship English)
The architectural decision matters more than the translation. Get this right while the codebase is small.
You're helping me design the i18n architecture for [your product] at [your-domain.com] built with [Next.js / SvelteKit / your framework].
The architecture decisions:
1. **Routing pattern** — three options:
- Subpath: yourdomain.com/de/, yourdomain.com/fr/, etc.
- Subdomain: de.yourdomain.com, fr.yourdomain.com
- Domain: yourdomain.de, yourdomain.fr (separate ccTLDs)
- For most indie SaaS in 2026: subpath. Same domain authority, simpler DNS, easier to test.
2. **Library / framework choice** for [your stack]:
- **Next.js (App Router)**: next-intl is the de-facto default in 2026. Solid TypeScript, supports server components, locale-aware routing.
- **Next.js (Pages Router)**: built-in i18n + react-i18next or react-intl
- **SvelteKit**: svelte-i18n or paraglide (newer, type-safe)
- **Astro**: native i18n routing + your translation library of choice
- **Remix**: remix-i18next
- **Mobile (React Native)**: i18next + react-i18next + @formatjs/intl
3. **Translation file format**: pick ONE
- **JSON** (most common, broad tooling support)
- **YAML** (more readable for translators, less common in JS ecosystem)
- **PO/POT** (gettext-style, mature but verbose)
- For most indie SaaS in 2026: JSON, namespaced by surface.
4. **Key naming convention**:
- Hierarchical: `pricing.tier.pro.title` rather than flat `pricingTierProTitle`
- Source language as the source of truth (English keys → translations to other languages, NOT German keys translated to English)
- Avoid embedding the English text in the key (so changing the English copy doesn't require renaming the key)
5. **Pluralization handling**:
- ICU MessageFormat (industry standard, supports complex plural rules across languages)
- Example: `{count, plural, one {# item} other {# items}}` — the same format handles languages with more plural categories (Russian, Polish, Arabic) via the CLDR plural rules
- Don't roll your own; use the library's plural-rules implementation.
6. **Date / number / currency formatting**:
- Use `Intl.DateTimeFormat` and `Intl.NumberFormat` (built into the browser/Node)
- Don't hardcode formats; pass the user's locale.
Output:
1. The recommended library + file format for my stack
2. The folder structure for translations: locales/[lang]/[namespace].json
3. The translation function signature: t('namespace.key', { interpolation })
4. The locale detection logic: URL prefix / user account setting / cookie / browser Accept-Language — in that order (with subpath routing, the explicit URL must win)
5. The fallback strategy: missing key in target language → fall back to source language → log to missing-translation telemetry
6. The build-time validation: every key in source language exists in every locale (or fail the build)
Generate the wiring code for [your framework].
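The detection order in step 4 can be sketched framework-agnostically. This is a minimal sketch assuming subpath routing (so an explicit URL prefix wins) and a three-locale example; the names (`resolveLocale`, `RequestContext`) and the supported-locale list are illustrative, not from any library:

```typescript
// Locale resolution chain: URL prefix → user setting → cookie → Accept-Language → default.
const SUPPORTED = ["en", "de", "fr"] as const;
type Locale = (typeof SUPPORTED)[number];
const DEFAULT_LOCALE: Locale = "en";

interface RequestContext {
  urlPath: string;           // e.g. "/de/pricing"
  userSetting?: string;      // locale saved on the account, if logged in
  cookieLocale?: string;     // e.g. from a NEXT_LOCALE-style cookie
  acceptLanguage?: string;   // raw Accept-Language header
}

function isSupported(l: string | undefined): l is Locale {
  return !!l && (SUPPORTED as readonly string[]).includes(l);
}

function resolveLocale(ctx: RequestContext): Locale {
  // 1. Explicit URL prefix always wins under subpath routing.
  const fromUrl = ctx.urlPath.split("/")[1];
  if (isSupported(fromUrl)) return fromUrl;
  // 2. The user's saved preference.
  if (isSupported(ctx.userSetting)) return ctx.userSetting;
  // 3. A previously stored cookie.
  if (isSupported(ctx.cookieLocale)) return ctx.cookieLocale;
  // 4. Accept-Language: take the first supported base language.
  for (const part of (ctx.acceptLanguage ?? "").split(",")) {
    const base = part.trim().split(";")[0].split("-")[0];
    if (isSupported(base)) return base;
  }
  return DEFAULT_LOCALE;
}
```

In a real app this runs in middleware (or the framework's i18n routing layer); the point is that the precedence is one explicit function, not scattered conditionals.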
Three principles I've watched founders re-learn:
- Build the infrastructure even if you ship only English. Adding internationalization to a 12-month-old product is a quarter-long project. Adding it to a 1-week-old product is a 1-day decision. The cost gap is enormous.
- Source language is the source of truth. Do not "translate from German" because German happens to be the founder's language. English keys, English source values, machine-translatable to others.
- Build-time validation is non-negotiable. Missing translations should fail the build, not silently fall back. Otherwise the team ships a product that's 80% German with random English strings sprinkled through.
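That build-time check is small enough to sketch. This assumes JSON catalogs namespaced as above; `flattenKeys` and `missingKeys` are illustrative names, and the inlined catalogs stand in for files loaded from `locales/<lang>/`:

```typescript
// Collect every dotted key in the source catalog and fail the build
// if any target locale is missing one.
type Catalog = { [key: string]: string | Catalog };

function flattenKeys(obj: Catalog, prefix = ""): string[] {
  return Object.entries(obj).flatMap(([k, v]) => {
    const key = prefix ? `${prefix}.${k}` : k;
    return typeof v === "string" ? [key] : flattenKeys(v, key);
  });
}

function missingKeys(source: Catalog, target: Catalog): string[] {
  const have = new Set(flattenKeys(target));
  return flattenKeys(source).filter((k) => !have.has(k));
}

// Illustrative catalogs — in CI these come from locales/en/*.json etc.
const en: Catalog = { pricing: { tier: { pro: { title: "Pro", cta: "Start trial" } } } };
const de: Catalog = { pricing: { tier: { pro: { title: "Pro" } } } };

// In CI: throw (fail the build) if this array is non-empty.
const gaps = missingKeys(en, de); // → ["pricing.tier.pro.cta"]
```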
2. Decide What's Translatable and What Isn't
Not everything in your product should be translated. Make the decision deliberately.
For [your product], help me categorize translatable vs non-translatable surfaces.
Always translate:
- Marketing site (homepage, pricing, about, blog post titles)
- Product UI labels (buttons, navigation, form labels, error messages)
- Onboarding emails and lifecycle email sequences
- Critical product copy (empty states, success messages, validation errors)
- Legal pages (privacy policy, terms — though local legal review is required)
- Help / docs / KB articles for the most-used 20 articles
Translate selectively:
- Long-form blog content (translate the top 10 articles by traffic; not the entire blog)
- Customer-facing emails (welcome, onboarding): translate. Transactional receipts: translate. Internal-team notifications: keep in English.
- Demo videos / product video — start with subtitles, add native voice-over only at scale
Don't translate (typically):
- Internal admin / support tools
- Brand name / product name
- Specific industry jargon that doesn't have a clean translation (often loanwords work)
- Code identifiers, API endpoint names, technical reference materials
- User-generated content
- Beta / experimental features (avoids re-translation costs as features stabilize)
- Status page (English is OK for technical incident comms, given the audience)
Output:
1. A category breakdown for my product's surfaces
2. The "translation priority tier" for each: P0 (translate before launch), P1 (within 30 days), P2 (selective), P3 (skip)
3. A specific translation budget: roughly how many words at P0, P1, P2 — to estimate cost and timeline
Two principles:
- Translating less, well, beats translating everything, badly. A localized homepage + onboarding email + top 10 docs in perfect German converts better than the entire site machine-translated.
- The marketing site is more important than the product UI. Product UI is repeated short strings (Save, Cancel, Settings); the marketing site is where the conversion happens. Spend translation budget there first.
3. Pick Languages Based on Data, Not Aspiration
Most founders pick languages by gut. Pick by data.
Help me pick which languages to ship first.
Run the data analysis on [your traffic + your customer base]:
Signals to look for:
1. **Existing traffic**: which countries / language audiences already visit the site?
- Pull Google Analytics or Vercel Analytics: top 10 countries by sessions
- Cross-reference with primary language of each country
- Look for: 15%+ of traffic from a single non-English market
2. **Existing customers**: which countries do paying customers come from?
- Stripe customer data — billing country
- Look for: any non-English country with 5+ customers (statistically meaningful signal at indie scale)
3. **Existing demand signals**:
- Support requests in non-English languages
- "When will this be available in [language]?" feedback
- Twitter / Reddit / community mentions in non-English
4. **Market dynamics**: which markets are local-language-mandatory?
- Japan: yes (English-only sites have <5% conversion in JP B2B)
- Germany: high preference for German (especially mid-market and enterprise)
- France: official preference for French; English tolerated for technical SaaS
- Brazil: Portuguese strongly preferred
- Korea: Korean strongly preferred
- Spain / LatAm: Spanish helps; English tolerated
- Nordic / Netherlands: English fine, no urgency
- India: English typical for tech buyers
Standard expansion order for B2B SaaS in 2026 (after English):
1. German (large market, high willingness to pay, mid-market reachable)
2. French (similar)
3. Japanese (large market, but high translation-quality bar — only do this if you can afford native review)
4. Spanish (US Hispanic + Spain + LatAm — broad audience)
5. Brazilian Portuguese (separate from European Portuguese; specifically Brazil)
6. Korean (high-quality bar like Japanese)
7. Chinese (Simplified for Mainland; complications around local hosting)
8. Italian, Polish, Dutch — more niche
Output:
1. The data-driven ranking of next 3 languages for my product
2. The traffic / customer / demand signal that supports each
3. The expected revenue contribution at year 1 if the localization works (rough estimate from the customer-base extrapolation)
4. The recommendation: start with 1 language, ship it well, measure conversion lift, then add the next
The single most useful constraint: ship one language at a time and measure conversion lift before adding the next. Founders who ship 6 languages on day 1 have no data on which ones are working. Sequential rollout produces clear cohort data that informs the next-language decision.
4. Use AI for First Draft, Humans for Brand-Critical Surfaces
The 2026 reality: AI handles 80% of translation work; humans handle the 20% that determines whether the brand reads as native or off.
Design the translation pipeline using AI + selective human review.
The flow:
**Step 1: AI draft translation**
- Use Claude / GPT-4 / DeepL Pro / Google Translate API
- Pass the source language file with brand voice context
- For each key, include:
- The source-language string
- Surrounding context (the surface this appears on, who reads it)
- Tone constraints (formal/casual register, brand voice rules)
- Term glossary (consistent translation of brand-specific terms)
- Output: machine translation with confidence indicators
**Step 2: Tier the strings by review priority**
P0 — must have native human review:
- Marketing homepage hero, value prop, CTA buttons
- Pricing page tier descriptions
- Onboarding email subjects and openers
- Email sequence copy
- Brand-tone-critical surfaces (about page, founder story)
P1 — selective human review:
- Long-form blog content
- Help docs
- Detailed product feature descriptions
P2 — AI-only acceptable:
- UI button labels (Save, Cancel, Submit)
- Error messages (formulaic, pattern-matched copy with little brand-tone risk — still translate carefully)
- Form field labels
- Empty states with no brand voice load
**Step 3: Human review for P0 + P1**
- Hire a native-speaker reviewer (not just any translator — someone fluent in your category vocabulary)
- Pay rate: $0.05-$0.15 per word in 2026 for review (vs $0.10-$0.30 for full translation)
- Use a translation management tool (Phrase, Lokalise, Crowdin) to manage the workflow
- Provide the brand voice doc, glossary, and a few examples of correct vs incorrect tone
**Step 4: Native speaker QA after deployment**
- Have a native speaker walk through the live product in their language
- Capture awkward strings as bug tickets
- Iterate over the first 30 days of deployment
Output:
1. The AI-prompt template for first-draft translation
2. The review tier assignment for each translatable surface
3. The translation management tool recommendation: Phrase / Lokalise / Crowdin / Tolgee / Localizely
4. The hiring spec for a native-speaker reviewer (specific category fluency, fluent in source language, available 5-10 hours per language launch)
5. The QA checklist for post-deployment review
Three rules that prevent the worst outcomes:
- Never ship pure-AI translation on brand-critical surfaces. The marketing homepage in machine-translated German tells your German prospects you don't take them seriously. Pay for the review; the conversion ROI is real.
- The reviewer needs category fluency, not just language fluency. A native German speaker without B2B SaaS background will translate "trial" as "Probe" (sample, like a science experiment) instead of "Testphase" (the conventional B2B SaaS term). Hire for the right vocabulary.
- Build a glossary on day 1. The 50-100 brand-specific terms that should always be translated the same way. Without it, every reviewer makes different choices, and consistency rots.
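One way to keep that glossary in version control is a typed data file the reviewers and a CI spot-check both read. A minimal sketch — the terms, notes, and translations here are illustrative examples, not a real glossary:

```typescript
// One canonical translation per brand term per locale, with usage notes.
interface GlossaryEntry {
  en: string;
  notes?: string;                        // usage guidance for reviewers
  translations: Record<string, string>;  // locale → canonical rendering
}

const glossary: Record<string, GlossaryEntry> = {
  trial: {
    en: "trial",
    notes: "The free evaluation period — not a legal trial, not a sample.",
    translations: { de: "Testphase", fr: "période d'essai" },
  },
  workspace: {
    en: "workspace",
    notes: "Example of a term kept as a loanword in German UI copy.",
    translations: { de: "Workspace", fr: "espace de travail" },
  },
};

// Reviewers (and automated checks) look up the canonical term:
function canonicalTerm(termKey: string, locale: string): string | undefined {
  return glossary[termKey]?.translations[locale];
}
```

Because the file lives in the repo, glossary changes go through the same review as code, which is what keeps every locale using the same canonical terms.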
5. Handle Locale-Specific UX Beyond Strings
Translation is the visible 60%; the invisible 40% is everything else.
For each locale I ship, audit the locale-specific UX:
**Date / time / number formatting:**
- 12/31/2026 (US) vs 31/12/2026 (most of Europe) vs 2026-12-31 (Japan, ISO)
- 1,234.56 (US) vs 1.234,56 (Germany) vs 1 234,56 (France)
- Always use Intl.DateTimeFormat / Intl.NumberFormat with the user's locale
**Currency:**
- $99 (US) → €99 (EU) — but is the German price the same dollar amount or local market-priced?
- For most indie SaaS in 2026: keep USD as the billing currency, optionally show local-currency display via API rate conversion (Stripe handles this for charging; the display is up to you)
- Mid-market and enterprise customers expect to see prices in their local currency from the start
**Phone numbers, addresses, postal codes:**
- Validate against locale-specific patterns
- Use libphonenumber for phone validation
- Postal codes vary wildly (numeric in US/DE; alphanumeric in UK/CA; not used at all in some places)
**Right-to-left (RTL) languages:**
- Arabic, Hebrew, Persian, Urdu
- Don't ship these without RTL CSS audit
- Logical CSS properties (margin-inline-start vs margin-left) make this much easier
- If you ship RTL, test the entire product, not just the strings — buttons, modals, dropdowns all flip
**Language-specific content quirks:**
- German: long compound words break narrow UI columns
- Japanese: characters are wider; narrow columns overflow
- Chinese: text-rendering challenges (fonts, line height)
- Spanish/Portuguese: ~30% longer than English on average — buttons may overflow
- Test with the longest-string locale, not just English
**Locale-specific functionality:**
- VAT collection in the EU (per [Payment Providers](https://www.vibereference.com/auth-and-payments/payment-providers))
- Cookie consent (GDPR for EU; LGPD for Brazil; PIPL for China)
- Specific legal disclaimers per market
- SMS provider differences (some providers don't reliably deliver to certain countries)
- Email-deliverability differences (Microsoft/Outlook is a much bigger share in some EU markets)
Output:
1. The locale-specific audit checklist for the next language launch
2. The list of components that need responsive UI testing for longer translated strings
3. The legal/compliance to-dos per market (consult a lawyer; don't DIY)
4. The dev tasks vs design tasks vs legal tasks
The single most overlooked locale UX surface: string length. German is ~30% longer than English; Russian similar; Japanese fits in less horizontal space but more vertical. A button that says "Get Started" in English and "Jetzt kostenlos loslegen" in German will overflow if you didn't design for it. Test with the longest-string locale early.
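The date/number/currency items in the audit above reduce to "pass the locale, let `Intl` do the work." A short sketch using only the built-in `Intl` APIs — the locales shown are examples:

```typescript
// No hand-rolled format strings: the built-in Intl APIs encode each
// locale's conventions.
const date = new Date(2026, 11, 31); // 31 Dec 2026 (month is 0-indexed)

const usDate = new Intl.DateTimeFormat("en-US").format(date); // "12/31/2026"
const deDate = new Intl.DateTimeFormat("de-DE").format(date); // "31.12.2026"

const usNum = new Intl.NumberFormat("en-US").format(1234.56); // "1,234.56"
const deNum = new Intl.NumberFormat("de-DE").format(1234.56); // "1.234,56"

// Currency display follows locale conventions too, e.g. "99,00 €"
// (the space before € may be a non-breaking space):
const dePrice = new Intl.NumberFormat("de-DE", {
  style: "currency",
  currency: "EUR",
}).format(99);
```

The same pattern covers `Intl.PluralRules` and `Intl.RelativeTimeFormat`; the only per-locale input your code should carry is the locale tag itself.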
6. Measure Per-Locale Conversion
You shipped a language. Did it work? Measure.
Build the per-locale conversion dashboard.
For each locale, track separately:
1. **Traffic**: visitors per locale
2. **Signup conversion rate**: visitor → signup, by locale
3. **Activation rate**: signup → activation, by locale
4. **Trial-to-paid conversion**: by locale
5. **Customer acquisition cost (CAC)**: paid + organic, by locale
6. **Customer lifetime value (LTV)**: by locale (takes 6+ months to be meaningful)
Cohort comparison:
- Compare each non-English locale's funnel to English
- Healthy benchmark: a localized market's signup conversion should land within 80% of English (sometimes higher if you've nailed local-market fit)
- If signup conversion is <50% of English: the localization is broken (wrong language register, wrong cultural framing, wrong pricing presentation)
- If activation rate is much lower in a locale: product UX issue specific to that locale (string overflow, wrong date format, missing local payment method)
Quarterly review per locale:
- Total revenue from the locale
- Number of customers
- Cost of maintaining the locale (translation maintenance, native-speaker review hours, support load if customers ask in the language)
- Verdict: invest more, hold steady, or sunset
The "sunset a locale" criteria (rare but important):
- Less than 3 paying customers after 6 months
- Translation maintenance cost exceeds revenue contribution
- Support tickets in that language overwhelm the team's capacity
- Sunset gracefully: 90-day notice to existing customers, redirect to the English-language site, refund any annual plan that overlaps the cutoff
Output:
1. The PostHog dashboard configuration with per-locale segments
2. The funnel comparison view (English vs each non-English)
3. The quarterly review template
4. The sunset criteria and process
The metric most teams skip: per-locale activation rate. Signup might convert fine, but if German signups never activate at the same rate, something in the product (not just the marketing) is failing in German. Signup is acquisition; activation is product fit.
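The cohort comparison above can be a single scoring function in the dashboard pipeline. A sketch using this section's 80%/50% benchmarks — the data shape and names (`LocaleFunnel`, `localeVerdict`) are illustrative:

```typescript
// Score each locale's signup conversion against the English baseline.
interface LocaleFunnel {
  locale: string;
  visitors: number;
  signups: number;
}

type Verdict = "healthy" | "needs-work" | "broken";

function localeVerdict(locale: LocaleFunnel, english: LocaleFunnel): Verdict {
  const rate = locale.signups / locale.visitors;
  const baseline = english.signups / english.visitors;
  const relative = rate / baseline;
  if (relative >= 0.8) return "healthy";    // within 80% of English
  if (relative >= 0.5) return "needs-work";
  return "broken";                           // <50%: localization is failing
}

// Illustrative numbers: English converts at 5%, German at 4.4% (88% of baseline).
const enFunnel: LocaleFunnel = { locale: "en", visitors: 10000, signups: 500 };
const deFunnel: LocaleFunnel = { locale: "de", visitors: 2000, signups: 88 };
```

The same function applied to activation (signup → activated) instead of signup catches the product-side failures this section warns about.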
7. Localize Customer Support and Communication
A translated marketing site is a great front door. A monolingual support team kills the relationship after a customer signs up.
Plan the localized support and lifecycle communication.
Translation surfaces:
1. **Onboarding emails** — translate the entire sequence (per [Onboarding Email Sequence](onboarding-email-sequence-chat.md))
2. **Lifecycle emails** — drip, re-engagement, win-back (per [Reduce Churn](reduce-churn-chat.md))
3. **Transactional emails** — receipts, password resets, security alerts
4. **In-product notifications** — banners, toast messages, system messages
5. **Help center / KB** — top 20 articles minimum
6. **Public-facing changelog** — the headlines and major release notes
Support staffing:
Three options:
- **English-only support, with translation tools**: AI translation in the support tool (Plain, Intercom, etc. all support this in 2026). Set expectation: replies may take longer; we use translation tools.
- **Native-speaker freelance support**: contract a native speaker for 5-10 hours/week of triage in the language. Cost: $1-3K/month per language at indie scale.
- **Hire a regional CS rep**: appropriate at $1M+ ARR per locale; not before.
For most indie SaaS in 2026 launching their first non-English locale: English-only with AI translation, transparent about the gap, with native-speaker spot-checks for high-tier customers.
Setup:
- Auto-detect customer language from email / browser / account setting
- Route incoming support to the right queue if you have native-speaker staff
- Set response-time expectations clearly per language
- Native-speaker reviewer audits 10% of replies in the new language for accuracy
Output:
1. The translation surfaces for lifecycle email
2. The support staffing recommendation for [your stage]
3. The customer-facing language-coverage page (set expectations: which surfaces are in their language, which aren't yet)
4. The response-time SLA per language
The single most damaging gap: a customer signs up after seeing the localized marketing site, then receives a welcome email in English. The promise of localization is broken on day 1. Translate the lifecycle emails, not just the marketing site.
8. Maintain Translations as Code Evolves
Translation is not a one-time project. The product changes; translations must keep pace.
Set up the ongoing translation maintenance workflow.
The recurring questions:
- A new feature ships in English. When does it become available in other languages?
- A copy change in English. Does it propagate to translations?
- A new locale is added. How do we batch-translate the existing strings?
The workflow:
1. **String extraction is automatic.** Any new key added to source files is detected by the build (via [Translation Management Platform] or via a CI check). The CI fails if keys are added without translations OR auto-creates "needs translation" tickets.
2. **Translations lag English by 1-2 weeks.** Set the expectation explicitly. Ship in English; translate within 2 weeks. The "untranslated" state shows source-language fallback with a small "[untranslated]" indicator (or no indicator — both are acceptable).
3. **Use a translation management platform (TMP)**:
- **Phrase / Lokalise / Crowdin**: established, ~$50-300/mo
- **Tolgee**: open-source / cloud option, $0-$80/mo
- **Localizely**: indie-friendly, $25/mo
- The TMP holds source + translations, lets reviewers edit, exports to your codebase via CLI / GitHub Action
4. **CI integration**:
- GitHub Action: on PR, check that all source-language keys exist in all target locales
- Auto-create translation tickets in the TMP for new keys
- Block deploy if P0 surfaces have untranslated content
5. **Quarterly review**:
- Audit the worst-performing locale's funnel
- Spot-check 20-30 translated strings for quality drift
- Update the glossary if new terms have emerged
- Re-review brand-critical surfaces if voice has evolved
6. **Glossary maintenance**: keep the brand-term dictionary in version control. Every reviewer signs off on it. Every locale uses the same canonical translation for brand-specific terms.
Output:
1. The TMP recommendation for [your scale]
2. The CI workflow YAML
3. The quarterly review template
4. The glossary template (English term → German / French / Spanish / etc., with usage notes)
The most frequent failure mode at year 2: the translation tooling decays. New features ship without translation; the team is "going to handle it later"; later never comes. The English-locked product slowly diverges from the localized versions until they're effectively different products. Build the discipline once; sustain it forever.
What Done Looks Like
By end of week 4 of internationalizing the first non-English locale:
- i18n infrastructure wired into the codebase (even if you only ship 1 language)
- First non-English language live for marketing site + product UI + onboarding emails
- Native-speaker reviewer engaged for P0 surfaces
- Per-locale conversion dashboard showing English vs new locale
- Translation management tool picked and configured
- Glossary v1 with 50-100 brand terms
Within 90 days:
- 1-3 non-English languages live
- Per-locale conversion data showing where each locale lands relative to English
- A clear answer to "is this language working" (yes / no / needs more time)
- One quarterly maintenance review completed
Within 12 months:
- 3-6 active non-English locales (only those that earned the investment)
- Per-locale revenue contribution measurable
- One locale sunset (you'll learn at least one is not working) — and it'll feel fine, because the data justified the decision
Common Pitfalls
- Hardcoded strings in components. The single most expensive technical debt at year 2. Wire i18n into the codebase from day 1 even if you only ship English.
- Translating before measuring traffic. The data tells you which markets to invest in. Translation without data is vanity localization.
- Pure-AI translation on brand-critical surfaces. The marketing homepage in machine-translated German signals "we don't take you seriously." Pay for the review.
- Shipping 6 languages on day 1. Sequential rollout produces actionable conversion data; parallel rollout produces noise.
- No glossary. Every reviewer makes different choices; consistency rots.
- Forgetting locale UX beyond strings. Date formats, number formats, currency, RTL, string length, local payment methods.
- English-only customer support after localized marketing. Promise broken on day 1.
- No per-locale conversion measurement. You can't tell if the localization is working.
- No string-extraction CI check. New features ship untranslated; the team forgets; English-locked features accumulate.
- Treating translation as a one-time project. It's permanent infrastructure with permanent maintenance.
Where i18n Plugs Into the Rest of the Stack
- Brand Voice — must be respected per locale; the glossary derives from it
- Customer Discovery Interviews — informs which markets have demand
- Onboarding Email Sequence — must localize alongside marketing
- Email Deliverability — locale-specific deliverability differences (Microsoft share larger in some EU markets)
- Customer Support — staffing decisions per language
- PostHog Setup — segments traffic and conversion by locale
- Pricing Page — currency and pricing-display per locale
- Payment Providers — VAT / GST / local-tax handling
- Data Trust — GDPR, LGPD, PIPL compliance pages per market
- Headless CMS — multi-locale content storage
- Reduce Churn — save sequences localized
What's Next
i18n is one of those investments that compounds quietly. The team that ships it correctly in week 1 (infrastructure) and one language at a time thereafter (sequential rollout) builds a global business at indie pace. The team that defers it pays the retrofit tax in year 2, ships sloppy machine-translated locales, and spends the next year cleaning up.
Build the discipline now. The infrastructure decision is cheap; the language decision is data-driven; the maintenance is permanent. The compounding payoff: every additional locale at $X cost produces revenue from a market the English-only product was completely missing.