VibeWeek

Data Residency & Region Pinning — Chat Prompts


If you're a B2B SaaS in 2026 selling internationally — especially into Europe, healthcare, financial services, or government — you'll hit data residency requests within your first 100 enterprise deals. The customer's question: "Where, geographically, is our data stored and processed?" The expected answers in 2026 are precise: "Customer data for your tenant lives in our EU region (eu-central-1, Frankfurt)." Or "...in our US region (us-east-1, N. Virginia)." Or both, with a per-tenant choice. Vague answers ("our data lives in AWS") fail enterprise security reviews and lose deals.

Data residency is more than where your DB lives. It's about every component: primary database, backups, search indexes, file storage, logs, telemetry, queue messages, third-party processors, and even where your support team accesses data from. Get one wrong (e.g., logs flowing to a US-based observability provider while the rest is in EU) and you fail the GDPR review.

The naive shape: "We're on AWS — that counts as residency, right?" — No. Residency means tenant data NEVER leaves a specific region. The complete shape: per-tenant region pinning, regional data planes, regional backups, region-aware logging, sub-processor policies, and operational discipline. This chat walks through the architecture and implementation.

What you're building

  • Per-tenant region selection (EU / US / APAC / GovCloud / etc.)
  • Regional data planes (separate Postgres / S3 / Redis per region)
  • Regional service deployment (your app servers per region)
  • Cross-region routing (request goes to correct region)
  • Regional backup + DR
  • Regional logs / observability
  • Sub-processor mapping (which third parties see which data)
  • Customer-facing region documentation
  • Audit trail of data location per tenant
  • Operational runbooks for region-specific incidents

1. Decide the scope of "residency" you're committing to

Help me decide what scope of data residency to ship.

Four increasingly strict levels:

LEVEL 1: REGIONAL HOSTING (the baseline / minimum)
- Your primary database lives in a specific region
- ALL tenants share the region (no per-tenant choice)
- "We're in eu-central-1" or "We're in us-east-1"
- Pros: simplest; one deployment
- Cons: doesn't match customer "I'm EU; my data must stay in EU" requests
- Sufficient for: many SMB B2B SaaS in 2026

LEVEL 2: MULTI-REGION TENANT-PINNED (what most enterprise customers actually want)
- You operate distinct regional deployments (US, EU, sometimes APAC)
- Each tenant is pinned to ONE region
- Tenant data NEVER crosses regions
- Sub-processors per region (e.g. Cloudflare in both, but Stripe→US for US tenants, Stripe→EU for EU tenants)
- Pros: matches enterprise expectations
- Cons: 3-5x operational cost; multi-region complexity
- Sufficient for: serious enterprise B2B in 2026

LEVEL 3: COUNTRY / SOVEREIGN-CLOUD-LEVEL (advanced; rare)
- Beyond region — specific country, sometimes specific data center
- AWS GovCloud, Azure for Government, dedicated tenancy
- Air-gapped deployments for some customers
- Pros: needed for defense, government, some regulated finance
- Cons: massive operational cost; per-customer ops; high pricing
- Sufficient for: gov/defense customers willing to pay for it

LEVEL 4: BRING-YOUR-OWN-CLOUD (your software in customer cloud)
- You ship a deployable artifact; customer runs in their cloud
- Different product entirely (single-tenant; on-prem-style)
- Common for top-of-market enterprise SaaS

DEFAULT FOR B2B SaaS GROWING INTO ENTERPRISE:
- Year 1-2: Level 1 (single region; usually US)
- Year 3+ (when EU enterprise deals surface): Level 2 (US + EU regions)
- Selectively: Level 3 for specific customers willing to pay 5-10x
- Rarely: Level 4 (huge product change)

What you ship FIRST:
- Level 1 → Level 2 migration when you have:
  a. 5+ EU enterprise prospects who require it, OR
  b. 1+ EU enterprise customer paying 30%+ premium for it
- Don't pre-build Level 2 without a buyer

For each decision, ask:
- Which regions? (US, EU, then sometimes APAC, then maybe specific countries)
- One tenant = one region forever, or migration allowed?
- Who picks region — customer or auto-assigned?

Output: a written scope statement that engineering, sales, and security all sign off on — a clear decision that prevents "we said residency" from meaning different things to different teams.

2. Design the multi-region architecture

For Level 2 (multi-region tenant-pinned), design the architecture.

Per-region deployment includes ALL of:

Layer 1: Compute
- App servers / API servers (in the region)
- Background workers / queues (in the region)
- Cron jobs (in the region)
- Each region is a complete app deployment

Layer 2: Data
- Primary database (Postgres) in region
- Read replicas in region (NOT cross-region)
- Object storage (S3 bucket / equivalent) in region
- Cache (Redis) in region
- Search index (OpenSearch / Algolia / Typesense) in region
- Vector DB (if applicable) in region
- Queue (SQS / similar) in region

Layer 3: Sub-processors / external services
- Per-region routing for: payment (Stripe US vs EU), email (Resend / SendGrid configured per-region), analytics (PostHog Cloud per region), AI providers (OpenAI/Anthropic — region-aware endpoints)
- Document each: which sub-processor + which region
- DPA / SCC adjustments per region

Layer 4: Edge / CDN
- Cloudflare / Vercel / CloudFront — global by nature
- Configure to route requests to correct regional backend
- Static assets can be globally cached (no PII in static assets)

Layer 5: Observability / Logs
- Logs from EU region → EU log store
- Logs from US region → US log store
- DO NOT centralize logs in a US Datadog org if EU residency is required

Layer 6: Backups / DR
- EU primary backup → EU (different AZ or sub-region)
- DO NOT cross-region replicate for DR
- DR plan: same-region failover only

Layer 7: Admin / Ops access
- Engineers access EU data via region-specific bastion / SSO
- Audit who-accessed-what-from-where
- US-based engineer accessing EU data is a question for your DPA
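The Layer 3 advice ("document each: which sub-processor + which region") can be kept machine-readable. A minimal sketch — the shape and vendor list are illustrative, not from this article; the PostHog hosts are their documented US/EU ingestion endpoints:

```typescript
// Sketch: per-region sub-processor inventory. Which external endpoint
// receives tenant data, and what categories of data it sees.
// Vendor entries and data categories here are illustrative examples.
type SubProcessor = {
  vendor: string;
  endpoint: string;
  dataCategories: string[];
};

const SUB_PROCESSORS: Record<string, SubProcessor[]> = {
  "us-east-1": [
    { vendor: "Stripe (US account)", endpoint: "api.stripe.com", dataCategories: ["billing"] },
    { vendor: "PostHog (US Cloud)", endpoint: "us.i.posthog.com", dataCategories: ["product analytics"] },
  ],
  "eu-central-1": [
    { vendor: "Stripe (EU account)", endpoint: "api.stripe.com", dataCategories: ["billing"] },
    { vendor: "PostHog (EU Cloud)", endpoint: "eu.i.posthog.com", dataCategories: ["product analytics"] },
  ],
};
```

Keeping this as data (rather than prose in a wiki) means the same inventory can feed the /trust page, the DPA sub-processor list, and the quarterly audit.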

Per-tenant routing layer:

create table tenant_routing (
  tenant_id      uuid primary key,
  region         text,         -- 'us-east-1', 'eu-central-1', 'ap-southeast-2', etc.
  data_plane_url text,         -- 'https://us.api.yourco.com'
  pinned_at      timestamptz,  -- when pinning was set
  pinned_by      uuid,         -- who decided
  status         text          -- 'active', 'migrating', 'archived'
);

Routing flow:
1. Request arrives at edge (e.g., yourco.com)
2. Auth handshake at edge → identifies tenant
3. Edge looks up tenant_routing → forwards request to the correct region
Alternative: the client uses a region-specific subdomain (us.yourco.com / eu.yourco.com). Region-pinned subdomains are most common for enterprise (transparent + auditable).

Two models for region selection:

A. Auto-assigned at signup
   - System guesses based on user's IP / browser locale
   - User can override during onboarding
   - Lock after first paid invoice

B. Customer-chooses at sales / contract time
   - Enterprise: contract specifies region
   - Self-serve: explicit region choice in signup form

For multi-region, model B is dominant for enterprise; A for self-serve.

Walk me through:
1. The full architecture diagram (text or mermaid)
2. The per-region deployment topology
3. The tenant routing schema
4. The edge routing logic
5. The sub-processor mapping per region
6. Where region-pinning is enforced (DB query level? Network level? App level?)

Output: an architectural plan engineers can execute.

3. Implement the routing layer

Now implement the per-tenant region routing.

Two architectural choices:

OPTION 1: Region-specific subdomains
- US tenants use https://us.yourco.com
- EU tenants use https://eu.yourco.com
- Pros: transparent; clear; URL itself proves residency
- Cons: tenants see different URLs (UX impact); custom-domain customers need region-specific CNAME
- Recommended for enterprise residency

OPTION 2: Single domain with edge-routing
- All tenants use https://yourco.com
- Edge layer (Cloudflare Worker / Vercel Middleware) reads tenant info → forwards to region
- Pros: clean UX; no URL change
- Cons: edge layer must be multi-region-aware; one extra hop
- Also valid; sometimes preferred for self-serve

For OPTION 1 (subdomains):

Implementation:
- Provision yourco.com (marketing site; landing page; signup flow)
- yourco.com signup → asks region preference → assigns subdomain
- yourco.com → region selector page (not the app itself)
- us.yourco.com → US data plane (full app)
- eu.yourco.com → EU data plane (full app)
- ap.yourco.com → APAC data plane (full app)

Auth + session handling:
- Each region has its own auth/session store
- User cannot be authenticated across regions
- Login at us.yourco.com only sees US tenants
- Switching tenants across regions = logout + login

Custom domain handling:
- Customer wants tenant.theircompany.com
- They CNAME tenant.theircompany.com → us-tenants.yourco.com (or eu-tenants.yourco.com)
- Each region exposes a "custom-domain" CNAME target
- TLS certificates issued per region
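The "session cookie scoped to region's subdomain" point is easy to get wrong. A minimal sketch, assuming a hypothetical `buildSessionCookie` helper (not from this article) — the key detail is that omitting the `Domain` attribute makes the cookie host-only, so a session issued at us.yourco.com never travels to eu.yourco.com:

```typescript
// Sketch: issue a session cookie pinned to one regional subdomain.
// Omitting `Domain=` scopes the cookie to the exact host that set it
// (e.g. us.yourco.com), so an EU request never carries a US session.
function buildSessionCookie(sessionId: string): string {
  return [
    `session=${sessionId}`,
    "Path=/",
    "HttpOnly",
    "Secure",
    "SameSite=Lax",
    // Deliberately no `Domain=` attribute: host-only cookie.
  ].join("; ");
}
```

Setting `Domain=yourco.com` instead would share the session across all regional subdomains — exactly the cross-region leak the auth model above forbids.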

For OPTION 2 (edge-routing):

Implementation:
- yourco.com is the only domain
- Edge worker (Cloudflare / Vercel Middleware) on every request:
  a. Identifies tenant (from session, subdomain, header, etc.)
  b. Looks up tenant_routing.region from a globally-replicated routing table (Cloudflare KV, DynamoDB Global Tables, etc.)
  c. Proxies request to correct regional backend
- The routing table itself does NOT contain customer data; only metadata (which region)

Cross-region routing pitfalls:
- The edge layer needs LOW-LATENCY tenant-region lookup (KV or in-memory cache)
- Routing decisions cached at edge for 60s (invalidate on region change)
- Login flow: where does the session live? Region-specific.
- Session cookie scoped to region's subdomain (not global)
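The Option 2 routing decision can be sketched as a pure function, which is also the easiest way to unit-test it before wiring it into a Cloudflare Worker or Vercel Middleware. The `RegionStore` interface and the data-plane hostnames below are illustrative stand-ins for Cloudflare KV (or another globally replicated table):

```typescript
// Sketch: resolve a tenant to its pinned regional origin.
// RegionStore stands in for an edge KV lookup (cached ~60s in practice,
// invalidated on region change, per the pitfalls above).
interface RegionStore {
  get(tenantId: string): Promise<string | null>; // tenant_id -> region
}

// Illustrative data-plane hostnames; the routing table holds only
// metadata (which region), never customer data.
const DATA_PLANES: Record<string, string> = {
  "us-east-1": "https://us.api.yourco.com",
  "eu-central-1": "https://eu.api.yourco.com",
};

async function resolveOrigin(
  store: RegionStore,
  tenantId: string | null,
): Promise<string | null> {
  if (!tenantId) return null; // unidentified tenant: reject at the edge
  const region = await store.get(tenantId);
  return (region && DATA_PLANES[region]) || null; // unknown region: not routable
}
```

Inside the actual worker, the final hop is a proxy: build a request against `origin + path + search` and `fetch` it, returning the regional backend's response unchanged.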

Implement:
1. The region routing table (replicated globally)
2. The routing layer (edge worker or subdomain DNS)
3. The auth flow per region
4. The custom-domain handling
5. The "switch region" UX (when a user has tenants in multiple regions)
6. Cross-region migration support (rare but happens)

Output: routing that respects residency.

4. Build the customer region-selection UX

Customers want clarity about region. The UX matters.

UI surfaces:

1. Sign-up flow (self-serve):
   - Region selector: US / EU / APAC dropdown
   - Default: detected from IP
   - Tooltip: "Your data will be stored in this region"
   - Lock: explicit ack that "this cannot be changed later" (or document migration cost)

2. Enterprise-sales flow:
   - Region is a contract negotiation point
   - Sales team has region-by-customer assignment workflow
   - CSM / onboarding: provisions tenant in correct region

3. Settings page (post-signup):
   - "Data Region" field shown prominently
   - "Your data is stored in: [Frankfurt, Germany]" with detail
   - "Data plane URL: eu.yourco.com" (visible)
   - "Sub-processors used:" with list
   - Link to your data-residency policy + DPA
   - For enterprise: "Request region migration" button (opens ticket)

4. Compliance documentation:
   - /trust or /security page
   - List of regions
   - Sub-processors per region with their location
   - DPA + SCCs per region
   - Audit reports per region

5. In-app tenant switcher (rare):
   - User in two tenants in different regions sees a region badge
   - Switching means re-auth (different domain/session)

Implement:
1. The region selector component
2. The detection-via-IP default
3. The region settings page
4. The trust/compliance public page
5. The "switch tenant cross-region" UX

Anti-patterns:
- Hiding region info ("you don't need to know")
- Auto-changing region without telling customer
- Vague "we comply with GDPR" instead of specific region
- Treating region as a marketing feature with no enforcement
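The detection-via-IP default from the list above can be sketched as a pure mapping from the visitor's country code (Cloudflare exposes this as the `cf-ipcountry` header; Vercel as `x-vercel-ip-country`). The country lists are abbreviated and the region names are illustrative:

```typescript
// Sketch: pre-select a default region from the visitor's country code.
// Abbreviated country sets for illustration; a real implementation
// would carry the full EU/EEA list.
const EU_COUNTRIES = new Set([
  "AT", "BE", "DE", "DK", "ES", "FI", "FR", "IE", "IT", "NL", "PL", "PT", "SE",
]);
const APAC_COUNTRIES = new Set(["AU", "NZ", "JP", "SG", "IN", "KR"]);

type Region = "us" | "eu" | "apac";

function defaultRegion(countryCode: string | null): Region {
  const cc = (countryCode ?? "").toUpperCase();
  if (EU_COUNTRIES.has(cc)) return "eu";
  if (APAC_COUNTRIES.has(cc)) return "apac";
  return "us"; // fallback — this is only a pre-selection; the user confirms
}
```

Note the hedged role of this function: it sets the dropdown's default, never the binding choice. The explicit acknowledgment ("this cannot be changed later") is what locks the region.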

Output: customer-facing region UX that builds trust.

5. Engineer for "data never leaves region" enforcement

The hardest part: GUARANTEEING data doesn't cross regions.

Common leakage paths to close:

A. Cross-region database queries
   - Don't allow direct cross-region DB connections
   - Network ACLs: each region's DB only accepts from same-region app servers
   - Verify in CI: tests that try to connect cross-region must fail

B. Cross-region S3 reads
   - Bucket policies: deny cross-region access
   - Use STS roles that are region-scoped
   - Audit S3 access logs for cross-region attempts

C. Logs / observability
   - Logs from EU app → EU log store; never US
   - Common bug: Datadog default endpoints route to US; configure EU endpoint
   - Sentry, LogRocket, etc.: each has EU + US options; configure per-region

D. AI / LLM API calls
   - OpenAI EU residency endpoints (eu.api.openai.com) for EU tenants
   - Anthropic offers EU + US separately; use the right one
   - Vercel AI Gateway: configure region routing
   - DO NOT send EU tenant data to US LLM endpoint by default

E. Email / Comms providers
   - Resend, SendGrid, Postmark: each has EU + US options
   - Send email from EU SMTP for EU tenant emails

F. Payment (Stripe)
   - Stripe US-account vs Stripe EU-account
   - Customer records pinned to region
   - Sub-processor disclosure on DPA

G. Analytics / event ingestion
   - PostHog, Mixpanel, Amplitude all have EU regions
   - Configure your client + server SDK to send to EU when EU tenant

H. Webhooks
   - Outbound webhooks: enforce that customer's webhook URL is in same region (or just send from regional egress)
   - Inbound webhooks: route to correct regional backend by tenant

I. Admin / Ops access
   - Engineers in US accessing EU data: log + audit
   - Some compliance regimes require: only EU-based engineers access EU data
   - This is operational; get policy clear early

J. Backups + DR
   - Backup destination same region (or sub-region)
   - DO NOT use cross-region S3 replication for backup convenience
   - DR plan: same-region failover; never cross-region failover for residency-strict tenants

K. Caches / Redis
   - Redis cluster per region
   - Don't share global Redis

L. Search index
   - Algolia: separate index in each region (different API key)
   - Typesense: separate cluster per region
   - OpenSearch: separate domain per region

M. Vector DBs
   - Pinecone, Weaviate: configure region-pinned index per tenant
   - Embeddings of EU customer data → EU vector DB only

Enforcement mechanism:

Code-level:
- Every regional service is a separate deployment
- No single binary serves multiple regions
- DB connection strings + S3 buckets + API endpoints scoped per region in env vars
- Don't allow cross-region calls in code (would error at network layer anyway)
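One cheap code-level tripwire for the "env vars scoped per region" rule: fail fast at boot if any configured endpoint embeds a different region's name. A sketch — the env var names and the substring check are illustrative, and this complements (never replaces) the network-level controls below:

```typescript
// Sketch: load region-scoped config and refuse to boot on a
// cross-region endpoint. A crude check, but it catches the common
// copy-paste mistake (a us-east-1 DB URL in the EU deployment).
interface RegionConfig {
  region: string; // e.g. 'eu-central-1'
  databaseUrl: string;
  s3Bucket: string;
  logEndpoint: string;
}

const KNOWN_REGIONS = ["us-east-1", "eu-central-1", "ap-southeast-2"];

function loadRegionConfig(env: Record<string, string | undefined>): RegionConfig {
  const region = env.APP_REGION;
  if (!region) throw new Error("APP_REGION must be set");
  const cfg: RegionConfig = {
    region,
    databaseUrl: env.DATABASE_URL ?? "",
    s3Bucket: env.S3_BUCKET ?? "",
    logEndpoint: env.LOG_ENDPOINT ?? "",
  };
  const otherRegions = KNOWN_REGIONS.filter((r) => r !== region);
  for (const value of [cfg.databaseUrl, cfg.s3Bucket, cfg.logEndpoint]) {
    for (const other of otherRegions) {
      if (value.includes(other)) {
        throw new Error(`config references ${other} but this deployment is ${region}`);
      }
    }
  }
  return cfg;
}
```

Run the same check in CI against each region's deploy manifest, and a mis-scoped endpoint never reaches production.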

Network-level:
- Per-region VPCs (AWS) / VNets (Azure) are isolated; on GCP, VPC networks are global, so isolate with separate networks or firewall-scoped regional subnets
- Cross-region peering blocked unless explicit + audited
- Egress firewall: each region's app can only reach same-region resources + listed external services

Audit:
- Quarterly review of: "what services do we use, what region do they store data, which tenants flow through them?"
- Annual: external auditor verifies (SOC 2 Type II, ISO 27001)

Walk me through:
1. The per-region deployment topology (no shared infrastructure across regions)
2. Network ACLs that enforce isolation
3. Sub-processor inventory + region mapping
4. CI tests that verify isolation
5. The internal-access-audit process

Output: real isolation, not just policy.

6. Handle tenant migration between regions (rare; but happens)

A customer asks: "We're moving HQ from US to EU; can our tenant move regions?"

This is hard. It's possible but operational. Document the policy and the cost.

Migration scenarios:
1. Customer voluntarily wants to migrate (HQ change; M&A; new compliance posture)
2. You're consolidating regions (deprecating one; rare)
3. Customer started in wrong region accidentally (early-stage; just-fix-it)

Migration process:

Step 1: Customer formally requests migration
- Ticket / contract amendment
- Often: legal sign-off from both parties

Step 2: Quote the cost
- It's NOT free for you
- Typical: 1-4 weeks of engineering effort + downtime window
- Charge the customer (or absorb for top-tier enterprise)

Step 3: Plan downtime
- Migration is typically a planned-maintenance window
- 2-24 hours of downtime depending on data volume
- Schedule: weekend or low-usage window for that customer

Step 4: Pre-migration
- Disable writes (read-only mode for tenant)
- Snapshot all relevant data: DB rows, S3 files, search indexes, vector data
- Bundle into a migration manifest

Step 5: Cross-region transfer
- Transfer manifest to target region
- Use AWS Data Transfer / GCP Transfer / your cloud's tooling
- Verify checksums

Step 6: Restore in target region
- Restore DB tables (with FK constraints respected)
- Restore S3 objects
- Rebuild search indexes
- Rebuild caches
- Verify data integrity

Step 7: Routing cutover
- Update tenant_routing.region
- Update DNS / edge routing
- Customer URLs may change (us.yourco.com → eu.yourco.com)

Step 8: Post-migration
- Validate all customer flows work
- Re-enable writes
- Monitor closely for 7 days
- Eventually delete data from old region (after retention period)

Hardest parts:
- Foreign keys to global resources (e.g., a shared "billing" service)
- Sub-processor data also needs migration (Stripe, etc. — mostly handled by their own regional setup)
- Embeddings / vector DBs: usually need full re-indexing
- Audit logs: do you migrate, or split (old region history stays; new region starts fresh)?
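The checksum step in Steps 4-5 can be sketched as a manifest built at export time and re-verified after the cross-region transfer. The manifest shape and helper names are illustrative; real tooling would stream large files rather than hold them in memory:

```typescript
// Sketch: record a SHA-256 per migration artifact at export time,
// then verify every artifact re-hashes to the same value after transfer.
import { createHash } from "node:crypto";

interface ManifestEntry {
  path: string; // logical artifact name, e.g. 'db/tenants.dump'
  sha256: string;
}

function checksum(data: Buffer | string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Step 4: bundle — snapshot artifacts and record their hashes.
function buildManifest(artifacts: Record<string, Buffer | string>): ManifestEntry[] {
  return Object.entries(artifacts).map(([path, data]) => ({
    path,
    sha256: checksum(data),
  }));
}

// Step 5: verify — returns the list of corrupted or missing artifacts
// (empty array means the transfer is clean).
function verifyManifest(
  manifest: ManifestEntry[],
  artifacts: Record<string, Buffer | string>,
): string[] {
  return manifest
    .filter((e) => checksum(artifacts[e.path] ?? "") !== e.sha256)
    .map((e) => e.path);
}
```

A non-empty result from `verifyManifest` means you stop the cutover and re-transfer — never proceed to Step 6 on a partial match.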

Implement:
1. The migration request workflow
2. The data-bundling tooling
3. The migration runbook
4. The cutover process
5. The post-migration validation suite

Anti-patterns:
- Doing this ad-hoc per customer (build it as a controlled process)
- Promising migration as a free feature (it's not free; charge)
- Skipping the validation step (data corruption goes unnoticed for weeks)
- Letting old data linger forever in source region (defeats residency)

Output: a migration capability that's documented + chargeable.

7. Document for compliance + sales

Sales deals depend on residency documentation that's clear and defensible. Build it.

Documents to publish:

1. /trust or /security page (public)
   - "We operate in regions: US (us-east-1), EU (eu-central-1), [APAC if applicable]"
   - "Tenant data stays in the assigned region. We do not replicate cross-region."
   - List of sub-processors per region
   - SOC 2 Type II report (per region if separate)
   - GDPR / DPA addendum
   - SCCs (Standard Contractual Clauses)
   - Privacy policy that addresses residency
   - Per-region data flow diagrams

2. Internal Sales Enablement
   - Battle card: "Why our residency story is enterprise-ready"
   - Common questions + answers
   - Comparison vs. competitors who claim residency without delivering
   - When to escalate: customer asks for sovereign cloud / on-prem

3. Per-customer DPA
   - Tenant-specific data processing agreement
   - Region commitment in writing
   - Sub-processor list relevant to their region
   - Adjustable based on enterprise negotiation

4. Compliance audit trail
   - Annual audit: who accessed what, when, where (per region)
   - SOC 2 / ISO 27001 reports per region
   - Penetration test results per region

Implement:
1. The /trust public page
2. The DPA template per region
3. The sub-processor list (with regions)
4. The audit-data pipeline
5. The annual-update calendar (data residency story refreshes yearly)

Output: documentation that closes enterprise deals.

8. Operational realities

Walk me through the edge cases:

1. New region launch
   - Lead time: 3-6 months from "we need APAC" to "APAC GA"
   - Steps: infra provisioning, security review, audit, sub-processor agreements, sales enablement, marketing
   - Cost: ~$50K-200K to launch a new region (engineering + audit + setup)

2. Region-specific outage
   - EU region down; US region fine
   - Status page must show region-specific status
   - Customer comms scoped to EU customers
   - DR plan: SAME-region failover (different AZ)

3. Sub-processor leak (e.g., third-party tool routes EU data through US)
   - Discovered during quarterly audit
   - Stop using that vendor for EU OR they offer EU region
   - Customer comms if material breach

4. Engineer in US debugging an EU customer issue
   - Policy: who can access EU data?
   - Some compliance: EU-only engineers; others: documented + audited
   - Ticket-driven access: time-boxed with audit

5. Customer requests a region you don't yet offer
   - Don't promise it; document the request; build pipeline of demand
   - When 5+ customers request same region, plan launch

6. Compliance change in a region
   - GDPR amendment, EU-US Data Privacy Framework changes, China cybersecurity law update
   - Watch for regulatory news per region
   - Adjust DPAs / sub-processor list as needed

7. GovCloud customer
   - Government customer requires AWS GovCloud / Azure Gov
   - Likely a separate deployment + separate operational model
   - Pricing premium 5-10x

8. M&A between regions
   - Customer A (in US region) acquires Customer B (in EU region)
   - Now they have data in two regions
   - Common solution: keep separate tenants OR plan migration

9. Self-serve user signs up in wrong region
   - Detect early-stage; offer free re-region while data volume is small
   - Lock in after 30 days or first paid invoice

10. New regulatory requirement
    - Country adds residency requirement (e.g. India 2023 personal data law)
    - You may need to launch new region or exit market
    - Plan: region-launch criteria includes regulatory landscape

11. Sub-processor consolidation
    - Vendor reorganizes regions (e.g. Resend launches new EU region after you launched)
    - Migrate to native EU sub-processor; document change

12. Audit log location for cross-region operations
    - Migration audit: where does the audit log of "this customer migrated" live?
    - Decision: in the source region's archive + new region's start

For each, walk me through code change + customer comms + audit impact.

Output: operational maturity that survives real customers.

9. Recap

What you've built:

  • Multi-region deployment (compute, data, sub-processors, observability)
  • Per-tenant region routing (subdomain or edge-based)
  • Customer region-selection UX
  • Enforcement: cross-region traffic blocked at network + code level
  • Tenant migration capability (chargeable, documented)
  • Compliance documentation (DPA, sub-processors, audit reports)
  • Per-region observability + status page
  • Operational runbooks per region
  • Sales enablement on residency story

What you're explicitly NOT shipping in v1:

  • Sovereign cloud / GovCloud — defer until specific customer pays for it
  • On-prem deployment — different product entirely
  • Bring-your-own-cloud (BYOC) — different architecture
  • Per-row residency (one tenant, two regions) — almost never right
  • Real-time global replication WITH residency — these conflict; pick a side

The biggest mistake teams make: claiming residency when they don't truly enforce it. Customer security review will catch this; deal lost; trust gone.

The second mistake: building multi-region before you have customer demand. Multi-region is 3-5x operational cost; don't pre-invest. Build when you have 5+ EU prospects requiring it.

The third mistake: forgetting sub-processors. Your DB is in EU, but Datadog and Stripe and OpenAI are routing US-side. The chain is only as residency-compliant as the weakest link.
