VibeWeek

# Multi-Tenant Data Isolation: Architecture That Survives Enterprise Procurement


Multi-Tenancy for Your New SaaS

Goal: Design and ship a multi-tenancy model that cleanly isolates customer data, scales from indie to enterprise without rewrites, and answers security-review questions in 30 seconds. Avoid the failure mode where a tenant-isolation bug leaks Customer A's data to Customer B — the kind of incident that ends companies.

Process: Follow this chat pattern with your AI coding tool such as Claude or v0.app. Pay attention to the notes in [brackets] and replace the bracketed text with your own content.

Timeframe: Architecture decision in 1 day. Tenant model wired into the database in 1-2 days. Application middleware enforcing isolation in week 1. Security testing + audit hooks in week 2. Ongoing tenant-isolation discipline baked into every PR review thereafter.


## Why Most Founder Multi-Tenancy Goes Sideways

Three failure modes hit founders the same way:

  • The "we'll add proper tenant isolation later" plan. Founder ships a single-tenant-shaped product, then bolts on a team_id column when the second customer signs up. By month 12, half the queries forget the team_id filter; the other half "remember" but with subtle bugs (a JOIN that doesn't carry the filter, a background job that runs unscoped). The cleanup is a multi-month engineering project — and it has to happen before any enterprise customer asks "show me how data is isolated."
  • Row-level security as performative theater. The team turns on Postgres RLS, configures a few policies, and declares the system "secure." But the application code uses a service-role connection that bypasses RLS, the JWT claims aren't validated correctly, and a SQL injection in any query exposes everyone's data anyway. RLS is real security only when paired with disciplined application-layer enforcement.
  • The "schema-per-tenant" rabbit hole. Founder reads a Hacker News post about multi-tenancy and decides to give every customer their own Postgres schema. By customer 50 they have 50 schemas, 50 sets of migrations to run on every change, and a heroic SQL script that handles deploys. Schema-per-tenant has narrow legitimate use cases; for 95% of indie SaaS in 2026, it's overengineering.

The version that works is structured: pick the simplest tenancy model that fits the product (usually shared-DB-shared-schema with a tenant_id column), enforce isolation at the application middleware layer with optional database-layer defense-in-depth, test isolation continuously, and document the model for security reviews.

This guide assumes you have already done Data Trust (multi-tenancy is part of the trust artifact set), have considered Public API (tenant isolation extends to API auth), and have shipped Audit Logs (the audit trail is per-tenant scoped).


## 1. Pick the Tenancy Model

Three models exist. Pick deliberately based on actual requirements, not aspirational ones.

You're helping me pick the multi-tenancy model for [your product] at [your-domain.com]. The product is [one-sentence description]. My customer profile is [B2C / B2B SMB / B2B mid-market / B2B enterprise]. Expected customers in year 1: [N].

The three models:

**A. Shared database, shared schema, tenant_id column** ("pool" model)
- Single database, single schema, every tenant-owned row has a `tenant_id` (or `team_id` / `account_id` / `workspace_id`)
- Application enforces "WHERE tenant_id = ?" on every query
- Optional: Postgres RLS as defense-in-depth
- Pros: Simplest operations (one DB to manage, one set of migrations, one backup process), best resource efficiency, lowest cost, most flexibility for cross-tenant analytics
- Cons: Per-tenant performance isolation is harder, "noisy neighbor" risk, isolation depends on application discipline

**B. Shared database, separate schemas per tenant** ("bridge" model)
- One Postgres database, but each tenant has their own schema (or set of tables)
- Migrations apply per schema
- Pros: Stronger isolation than pool, relatively simple operations
- Cons: Per-tenant migrations are real work (especially with 100+ tenants), schema management complexity, weaker cross-tenant analytics

**C. Separate database per tenant** ("silo" model)
- Each tenant has their own physical database (often their own RDS instance, or their own Postgres in their own VPC)
- Pros: Strongest isolation, tenants can have different DB configs, sales-led enterprise customers can demand "their own database"
- Cons: Operational complexity multiplies (every tenant adds DB management), expensive at small scale, migrations are per-DB, cross-tenant analytics requires a separate warehouse

For most indie SaaS in 2026, the answer is A (pool model) for the entire customer base, with optional silo for the largest enterprise contracts ("dedicated tier"). Schema-per-tenant (B) is rarely the right choice — it has the operational cost of silo without the full isolation benefit.

Output:
1. The recommended model for my situation
2. The decision tree for when to upgrade specific customers to silo (typically: contract value, regulatory requirement, or stated isolation need)
3. The architectural drawing showing how the model maps to my [database choice — likely Postgres via Supabase / Neon / etc.]
4. The trade-offs I'm explicitly accepting

Sanity check: if my product is consumer (B2C) where each user is effectively a "tenant" of one, the model is degenerate — every user is a tenant by design. Don't overcomplicate.

Three principles I've watched founders re-learn:

  • Default to pool model. Unless you have a specific regulatory or contract requirement for silo isolation, pool model is correct for 95% of indie SaaS. Adding hybrid silo for specific enterprise customers later is straightforward; starting with silo for everyone is operational hell.
  • Don't pick schema-per-tenant for "future flexibility". It's the worst of both worlds in most cases.
  • The tenant_id name matters less than consistency. Pick account_id or team_id or workspace_id based on the user-facing concept; use it identically everywhere. Mixing names (some tables team_id, others tenant_id) creates query-bug risk.
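The decision tree in the prompt's output list boils down to a very small procedure. A minimal sketch, assuming the only inputs that matter are regulatory and contractual isolation requirements (the input shape is illustrative, not a real API):

```typescript
// Sketch of the model-selection heuristic described above.
// The Requirements shape is an illustrative assumption.
type TenancyModel = "pool" | "bridge" | "silo";

interface Requirements {
  regulatoryIsolationRequired: boolean; // e.g. regulation mandates a dedicated DB
  contractDemandsDedicatedDb: boolean;  // enterprise "own database" clause
}

function pickTenancyModel(req: Requirements): TenancyModel {
  // Silo only when a specific customer contractually or legally requires it;
  // everyone else stays on the pool model. Bridge is deliberately never
  // recommended here -- it combines silo's migration cost with pool's
  // weaker isolation story.
  if (req.regulatoryIsolationRequired || req.contractDemandsDedicatedDb) {
    return "silo";
  }
  return "pool";
}
```

Note the asymmetry: individual customers upgrade to silo; the default never changes.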

## 2. Wire Up the Tenant Identifier and Schema

Before any application code, design the data model with the tenant column as a first-class citizen.

Help me design the database schema for the pool tenancy model.

Schema rules:

1. **Every tenant-owned table includes the tenant column** (let's call it `account_id` for the rest of this design):
   - `account_id` is NOT NULL on every tenant-owned table
   - It's the FIRST or SECOND column (after `id`) for visibility
   - Foreign key to the `accounts` table

2. **Tables that are NOT tenant-owned**:
   - Global reference data (countries, currencies, system feature flags)
   - The `accounts` table itself (top of the tree)
   - The `users` table (users CAN belong to multiple accounts, so user is not directly tenant-owned)
   - The `account_memberships` table (the join table connecting users to accounts)

3. **Indexing**:
   - Every tenant-owned table has an index leading with `account_id`
   - For high-cardinality tables (events, audit logs), use `(account_id, created_at desc)` or similar composite indexes
   - Skip the index on tables with <10K rows; add when needed

4. **Foreign keys**:
   - Cross-tenant foreign keys are FORBIDDEN by design
   - A `comments` row pointing to a `posts` row from a different account is the kind of bug that leaks data
   - Add a CHECK constraint or a database trigger that asserts `posts.account_id = comments.account_id` on insert/update if the relationship crosses tables that both have account_id

5. **The `accounts` table itself**:
   - id (UUID or prefixed opaque ID like "acct_01HXX..." — prefer the latter for readability)
   - slug (URL-safe identifier the user sees in URLs: yourdomain.com/[slug]/...)
   - name (display name)
   - plan (tier the account is on)
   - created_at, archived_at (soft-delete on archive)
   - billing-related fields linked to Stripe customer ID
   - feature flags / quotas if not stored separately

Output:
1. The full schema for accounts + account_memberships + a sample tenant-owned table (e.g., projects, posts, conversations)
2. The migration script for an existing single-tenant schema
3. The naming convention enforcement rule (linter or schema check)
4. The check-constraint examples that prevent cross-tenant foreign keys

Then handle the corner case: if some data is genuinely shared across all accounts (templates, public-facing content), keep it in dedicated tables with no `account_id` column at all. This makes "this is shared" intentional rather than implicit.

Two principles:

  • `account_id` as a first-class column, not an afterthought. When `account_id` is in the second slot of every tenant-owned table, queries naturally include it. When it's in the 14th slot, devs forget about it.
  • Cross-tenant foreign keys are the highest-risk bug pattern. A single FK that points to a different tenant's row is a data-leak bug. Enforce same-tenant FKs at the database level with constraints or triggers; don't rely solely on application-layer discipline.
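The naming-convention enforcement from the output list above is small enough to sketch directly. A CI check, assuming you can introspect table metadata into a simple structure (the `TableDef` shape and table names are illustrative assumptions):

```typescript
// Sketch of a CI schema check: every tenant-owned table must carry a
// NOT NULL account_id column, under one consistent name.
// The metadata shape here is an illustrative assumption.
interface ColumnDef { name: string; nullable: boolean }
interface TableDef { name: string; columns: ColumnDef[]; tenantOwned: boolean }

const TENANT_COLUMN = "account_id"; // pick one name and enforce it everywhere

function checkTenantColumns(tables: TableDef[]): string[] {
  const violations: string[] = [];
  for (const t of tables) {
    if (!t.tenantOwned) continue; // global reference data is exempt by design
    const col = t.columns.find((c) => c.name === TENANT_COLUMN);
    if (!col) violations.push(`${t.name}: missing ${TENANT_COLUMN}`);
    else if (col.nullable) violations.push(`${t.name}: ${TENANT_COLUMN} must be NOT NULL`);
  }
  return violations;
}
```

Run it against the introspected schema on every PR and fail the build if the returned list is non-empty.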

## 3. Enforce Tenant Scoping in the Application Layer

The most consequential code in your application is where you scope queries by tenant. Get this right at the framework level.

Help me implement tenant-scoped database access for [your stack — Next.js / SvelteKit / Hono / your framework] using [Postgres via Supabase / Neon / your DB].

Pattern: every database operation must run in a "tenant context" that automatically includes the account_id filter.

Three layers:

**Layer 1: Authentication middleware extracts the current account**
- Resolve the user from the session / JWT
- Resolve the account from URL slug / API token / session-stored active workspace
- Validate the user has membership in the account (via account_memberships)
- Reject with 404 (not 403 — to avoid revealing account existence) if not a member
- Store both user_id and account_id on the request context

**Layer 2: Database client wrapper with tenant binding**
```ts
// Pseudocode pattern
function getTenantDb(accountId: string) {
  return db
    .with({ accountId })
    .withMiddleware((query) => addAccountIdFilter(query, accountId))
}

// Usage in route handler
const tenantDb = getTenantDb(req.context.accountId)
const projects = await tenantDb.projects.findMany()  // automatically filters by account_id
```
  • Use Drizzle / Prisma / Kysely middleware to inject the WHERE clause
  • Or use Supabase RLS with a Postgres role that has a tenant context set via SET LOCAL

**Layer 3: Defense-in-depth via Postgres RLS (optional but recommended for production)**

  • Enable RLS on every tenant-owned table
  • Define a policy: USING (account_id = current_setting('app.current_account_id')::uuid)
  • Set the variable at request time: SET LOCAL app.current_account_id = '<the-account-id>'
  • Even if application-layer code fails to filter, RLS rejects the query

Output:

  1. The middleware code that extracts account_id from the request
  2. The database client wrapper that auto-applies tenant filtering
  3. The Postgres RLS policy templates
  4. The "service role" pattern for admin operations that legitimately span tenants (clearly marked, audit-logged, separate code paths)
  5. Tests that verify: a query without tenant context fails, a query with the wrong tenant context returns no results, an admin query with the service role works

Critical: the service-role connection (which can read all tenants) must be used only by clearly-marked admin paths. Customer-facing requests must NEVER hit the service-role connection. This is the single most important architectural rule in multi-tenant systems.


Three rules I've watched founders re-learn:

- **Tenant scoping at the framework level, not the route level.** If every route handler has to remember to filter by tenant, half of them won't. The middleware applies the filter automatically.
- **Defense-in-depth with RLS is worth the setup cost.** Even if your application is perfect, a SQL injection somewhere bypasses application-layer filtering. RLS catches it. Belt and suspenders.
- **Service role is dangerous; isolate it.** A service-role connection that bypasses tenant filtering is a footgun. Confine it to clearly-marked admin code paths, audit-log every use, and review every PR that touches it.
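The Layer 3 RLS policies follow one template per tenant-owned table, so they can be generated rather than hand-written. A sketch, assuming each table has a uuid `account_id` column and the request layer runs `SET LOCAL app.current_account_id = '<uuid>'` as described above (table names are examples):

```typescript
// Sketch: generate the per-table RLS policy DDL described above.
// Assumes a uuid account_id column and the app.current_account_id
// session setting from the request-time SET LOCAL.
function rlsPolicyDdl(table: string): string {
  return [
    `ALTER TABLE ${table} ENABLE ROW LEVEL SECURITY;`,
    // FORCE applies RLS to the table owner too, closing a common gap;
    // only roles with BYPASSRLS (your marked service role) skip it.
    `ALTER TABLE ${table} FORCE ROW LEVEL SECURITY;`,
    `CREATE POLICY ${table}_tenant_isolation ON ${table}`,
    `  USING (account_id = current_setting('app.current_account_id')::uuid);`,
  ].join("\n");
}
```

Generating the DDL from the list of tenant-owned tables keeps the policies in lockstep with the schema check from step 2: a new tenant-owned table cannot ship without a matching policy.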

---

## 4. Handle Cross-Tenant Operations Carefully

Some operations legitimately need to span tenants — admin tools, analytics, support actions. Design these carefully.

Help me design the cross-tenant operation patterns.

Three categories of cross-tenant operations:

1. Admin / support actions (you logging into a customer's account to debug, support-admin viewing data, etc.)

  • ALWAYS audit-logged with actor_type=support_admin and the target_account_id
  • Customer-visible in their audit log per Audit Logs
  • Performed via a separate code path that uses the service role
  • Authorization: only specific admin roles can trigger; revoke access by default

2. Analytics queries (dashboards aggregating across all customers, business metrics)

  • Run against a read replica or the data warehouse, NOT the live application database with tenant filtering off
  • Or: use the service role only in dedicated analytics jobs that NEVER touch user-facing endpoints
  • Output is aggregated (counts, sums) — never per-row data with PII

3. Background jobs operating across tenants (sending daily summary emails, processing all accounts' usage)

  • Iterate explicitly per-account (loop over accounts, set context, run job for that account)
  • Avoid "SELECT * across all tenants" patterns that bypass filtering
  • Each iteration sets the tenant context the same way as a request would

Output:

  1. The admin-action pattern with audit-log integration
  2. The analytics-query pattern (likely read replica + warehouse)
  3. The background-job iteration pattern
  4. The PR-review checklist: every PR that uses the service role gets explicit reviewer flag
  5. The CI check: any code path using the service-role connection must be tagged or annotated, otherwise the build fails

The most consequential rule: **the customer-facing request path must never use the service role.** Even if it makes a query "easier", it removes the tenant-isolation guarantee. Reviewers should reject any customer-facing PR that bypasses the tenant-scoped client.
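The background-job pattern from category 3 — explicit per-account iteration with tenant context set on each pass — can be sketched as follows. The `listAccountIds` and `withTenantContext` helpers are assumptions standing in for your own data-access and context code, not a specific library API:

```typescript
// Sketch of the per-account background-job loop described above.
// listAccountIds / withTenantContext / job are illustrative stand-ins.
type Job = (accountId: string) => Promise<void>;

async function runForAllAccounts(
  listAccountIds: () => Promise<string[]>,
  withTenantContext: <T>(accountId: string, fn: () => Promise<T>) => Promise<T>,
  job: Job,
): Promise<{ ok: string[]; failed: string[] }> {
  const ok: string[] = [];
  const failed: string[] = [];
  for (const accountId of await listAccountIds()) {
    try {
      // Each iteration sets tenant context exactly as a request would,
      // so the job's queries go through the tenant-scoped client.
      await withTenantContext(accountId, () => job(accountId));
      ok.push(accountId);
    } catch {
      // A failure for account A must not abort or contaminate account B.
      failed.push(accountId);
    }
  }
  return { ok, failed };
}
```

The explicit loop is the point: there is no single unscoped query spanning tenants, and a retry of one account's failure stays inside that account's context.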

---

## 5. Test Tenant Isolation Continuously

A multi-tenant bug is a data-leak bug. Test for it as a first-class concern, not as an afterthought.

Design the tenant-isolation test suite.

Required test categories:

1. Unit: middleware correctness

  • The auth middleware correctly extracts account_id from session, JWT, API token
  • A request with no auth context fails closed
  • A user attempting to access an account they're not a member of gets 404
  • Cross-account access via URL manipulation fails (e.g., trying to access /accounts/B/data while logged into A)

2. Integration: per-route tenant isolation

  • For every state-reading route, create two test accounts with different data, query route as account A, assert no data from account B is returned
  • For every state-writing route, write data as A, attempt to read it as B, assert 404

3. Integration: cross-tenant FK protection

  • Attempt to create a record in account A that references a record in account B (e.g., a comment in account A pointing to a post in account B)
  • Assert the database rejects (via FK + check constraint or trigger)

4. Integration: RLS policy verification

  • Connect with a non-service Postgres role
  • Set tenant context to account A
  • Verify SELECT returns only account A data
  • Verify INSERT fails for account B's account_id
  • Verify SELECT * with no tenant context returns zero rows (RLS denies access by default)

5. Integration: service role audit

  • Every code path that uses the service role is annotated/tagged
  • Tests assert: routes accessible to customers do NOT use the service role
  • Tests assert: admin/analytics paths that DO use the service role are accompanied by audit logging

6. Penetration / red-team tests

  • Simulate SQL injection: even successful injections should not leak cross-tenant data
  • Simulate JWT tampering: forged or modified JWTs should fail to authenticate
  • Simulate API token replay: a token belonging to account A should never authorize access to account B

7. CI gates

  • Build fails if a tenant-owned table is added without account_id
  • Build fails if a customer-facing route imports the service role
  • Build fails if a query is added without going through the tenant-scoped client
  • Linter rule: no raw SQL strings; all queries via the framework client

Output:

  1. The full test plan as a markdown checklist
  2. The CI gate scripts
  3. The annotation / tagging convention for the service role
  4. The red-team test scenarios as runnable test code
  5. The schedule: run isolation tests on every PR; pen-test scenarios weekly

The single most important automated test: **the per-route cross-account check.** For every state-reading endpoint, the test creates two accounts, calls the endpoint as A, and asserts no B data is returned. Without this, isolation bugs ship silently.
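That per-route check generalizes into one reusable assertion. A sketch, where `createAccountWithData` and `callRouteAs` are placeholders for your test fixtures (they are assumptions, not a real harness API):

```typescript
// Sketch of the two-account isolation check described above.
// Fixture helpers are illustrative assumptions about your test harness.
async function assertNoCrossAccountLeak(
  createAccountWithData: (marker: string) => Promise<string>, // returns accountId
  callRouteAs: (accountId: string) => Promise<string>,        // returns response body
): Promise<void> {
  const a = await createAccountWithData("MARKER_A");
  const b = await createAccountWithData("MARKER_B");
  const bodyA = await callRouteAs(a);
  // Account A's response must contain its own data and none of B's.
  if (!bodyA.includes("MARKER_A")) throw new Error("route lost A's own data");
  if (bodyA.includes("MARKER_B")) throw new Error("cross-tenant leak: B data visible to A");
  const bodyB = await callRouteAs(b);
  if (bodyB.includes("MARKER_A")) throw new Error("cross-tenant leak: A data visible to B");
}
```

Run the assertion once per state-reading endpoint, seeded with distinct per-account markers; an endpoint that leaks fails loudly instead of silently.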

---

## 6. Tenant-Aware Logging, Metrics, Background Jobs

Multi-tenancy isn't just data; it's every dimension of your operations.

Help me apply tenant-awareness across the broader operational surface.

Logging:

  • Every application log line includes account_id when available
  • Sensitive fields (PII, secrets) redacted regardless
  • Log aggregation tool (Sentry, BetterStack, Axiom) treats account_id as a queryable dimension
  • Don't log entire request bodies (PII risk); log specific structured fields

Metrics:

  • Per-tenant metrics for: request counts, p99 latency, error rate, database query time, queue depth
  • Tagged by account_id (or hashed for cardinality if too many tenants)
  • Alerting on per-tenant anomalies: a single tenant's error rate spiking is a signal worth surfacing

Background jobs:

  • Per Background Jobs Providers: scheduled / triggered jobs include account_id in their context
  • Iteration over all accounts uses the service-role pattern from step 4 — explicit loop, not unbounded query
  • Job retries don't leak across tenants (a failed job for account A doesn't accidentally retry for account B)

Cache keys:

  • Every cache entry namespaced by account_id: cache:account_{id}:resource_{id}
  • Critical: cross-tenant cache poisoning is a real attack vector if cache keys aren't tenant-scoped
  • Cache invalidation on tenant deletion (don't leave stale data accessible)

Webhook deliveries:

  • Per Public API: outbound webhook events scoped to the originating tenant
  • Inbound webhooks (e.g., Stripe) routed to the correct tenant via metadata or customer-ID lookup

Email sending:

  • Per Email Deliverability: emails clearly identify which account they're about
  • Don't accidentally CC across tenants
  • Templated emails resolve account-specific data per recipient

Output:

  1. The structured-logging schema with account_id as a required field
  2. The metrics dashboard config tagged by account_id
  3. The background-job tenant-context pattern in code
  4. The cache key naming convention
  5. The cross-tenant audit checklist: weekly review of metrics for "is anything cross-leaking"

The most overlooked surface: **cache.** A misnamed cache key (e.g., `user_dashboard:user_id_X` instead of `account_Y:user_dashboard:user_id_X`) means user X's dashboard data could be served from another account's cache. Namespace everything.
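The cache-key convention is easy to centralize so it cannot be forgotten at individual call sites. A sketch matching the `cache:account_{id}:resource_{id}` pattern above:

```typescript
// Sketch: one function builds every cache key, so tenant namespacing
// cannot be skipped at individual call sites.
function cacheKey(accountId: string, resource: string, resourceId: string): string {
  if (!accountId) throw new Error("cache key requires an account id"); // fail closed
  return `cache:account_${accountId}:${resource}_${resourceId}`;
}

// Invalidation on tenant deletion: sweep by prefix so no stale entry survives.
function tenantCachePrefix(accountId: string): string {
  return `cache:account_${accountId}:`;
}
```

A lint rule banning direct cache-client `set`/`get` calls outside this helper makes the convention enforceable rather than aspirational.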

---

## 7. Handle Tenant Lifecycle: Provisioning, Deletion, Export

The boring parts of multi-tenancy that founders skip until they hit problems.

Design the tenant-lifecycle operations.

Provisioning (account creation):

  • Atomic operation: account row created + initial owner user added to account_memberships + Stripe customer + initial workspace data (if any)
  • If any step fails, the whole creation rolls back (otherwise you have orphan accounts in inconsistent states)
  • Welcome flow per Onboarding Email Sequence starts after successful provisioning

Membership management:

  • Inviting users to an account: invitation tokens with TTL, must be accepted, membership row created on acceptance
  • Removing users: soft-delete the membership; their data attribution remains; their access stops immediately
  • Role changes: audit-logged; affected user notified

Account suspension (e.g., payment failure, ToS violation):

  • Suspended accounts: users can log in but cannot create/modify data; read-only mode
  • Notification to all account members about suspension reason
  • Resume: reverse the read-only flag

Account deletion (hard or soft):

  • Soft delete (default): account.archived_at set; data retained for grace period (typically 30-90 days); access blocked
  • Hard delete: actual data removal across all tables (account-owned rows + account row + memberships)
  • Cascade discipline: every tenant-owned table must be deletable via cascade or explicit deletion in the right order
  • Test the deletion path: it's never tested in early development, then surprise-fails when the first GDPR request lands

Data export (per Data Trust):

  • Customer requests export → background job generates a ZIP of all account-owned data
  • Format: CSV / JSON per table, plus a manifest describing what's included
  • Excluded from export: other tenants' data (obvious), system fields (internal IDs, soft-delete flags), system tables
  • Delivered via signed URL with TTL (e.g., 24 hours)

Tenant migration / merging (rare but real):

  • Sometimes one account becomes another (acquisition, consolidation, customer change of legal entity)
  • Plan for this: write a migration tool that can copy data from account A to account B, then delete A
  • Test on staging; never run untested

Output:

  1. The provisioning transaction code with atomicity guarantees
  2. The deletion cascade order for my schema
  3. The export job code
  4. The suspension state-machine
  5. The migration utility framework

The most-skipped step: **testing the deletion path.** Most teams write soft-delete, never hard-delete, and discover at year 2 that hard-delete doesn't actually work because of foreign-key cascade issues that nobody resolved. Test it from day 1.
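The deletion cascade order is a dependency problem: tables holding foreign keys into other tenant-owned tables must be emptied before the tables they point at. A sketch that derives the order from an FK map (the map shape and table names are illustrative; it throws on FK cycles, which need manual breaking):

```typescript
// Sketch: compute a hard-delete order from a FK dependency map.
// Keys are tables; values are the tables they reference.
// Children (referencing tables) are deleted before their targets.
function deletionOrder(fks: Record<string, string[]>): string[] {
  const order: string[] = [];
  const visiting = new Set<string>();
  const done = new Set<string>();
  function visit(table: string): void {
    if (done.has(table)) return;
    if (visiting.has(table)) throw new Error(`FK cycle at ${table}; break it manually`);
    visiting.add(table);
    // Every table that references `table` must be deleted before it.
    for (const [child, refs] of Object.entries(fks)) {
      if (refs.includes(table)) visit(child);
    }
    visiting.delete(table);
    done.add(table);
    order.push(table);
  }
  for (const table of Object.keys(fks)) visit(table);
  return order;
}
```

For an example schema where comments reference posts, and posts and memberships reference accounts, the function yields comments before posts before accounts — exactly the order a hard-delete job must follow.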

---

## 8. Document for Buyers and Auditors

Enterprise buyers expect documentation; auditors expect evidence. Provide both.

Generate the multi-tenancy documentation for security reviews. Lives at /trust/architecture (linked from Data Trust).

Sections:

1. Tenancy model

  • Pool model: shared database, shared schema, with logical isolation per account
  • Defense-in-depth: application-layer enforcement + Postgres RLS
  • Optional silo tier: dedicated database for enterprise contracts (mention only if you actually offer this)

2. Data isolation guarantees

  • Every customer-facing query is automatically scoped by the active account
  • The application uses a tenant-scoped database client that injects account_id filters
  • Postgres RLS provides defense-in-depth at the database level
  • Cross-account foreign keys are forbidden by schema design and enforced via constraints

3. Tested isolation

  • Per-route automated tests that verify no cross-account data leakage
  • Continuous CI gates that prevent service-role usage in customer-facing code
  • Quarterly red-team / pen-test exercises specifically targeting tenant isolation

4. Admin / support access

  • Customer support staff can view a customer account only via a clearly-marked admin path
  • Every admin action is recorded in the customer's audit log per Audit Logs
  • Customers can review every support-admin action that affected their account

5. Compliance mappings

  • SOC 2 CC6.1, CC6.6 (logical access controls)
  • ISO 27001 A.9.4 (access control)
  • HIPAA tenant-isolation patterns if applicable
  • GDPR tenant-data-export requirements

6. Incident response specific to tenant-isolation events

  • Per Incident Response and Status Page: if a tenant-isolation bug is ever discovered, our process is...
  • Notification timeline (within 72h of confirmed isolation breach)
  • Remediation timeline expectations
  • Customer-specific impact disclosure

Output the documentation in the same voice as the rest of /trust.


The single highest-leverage section: **compliance mappings.** Enterprise buyers map controls to evidence. If your multi-tenancy doc says "Application-layer enforcement + Postgres RLS satisfies SOC 2 CC6.6", the auditor ticks the box and moves on. Without it, they ask, you respond, time passes.

---

## What Done Looks Like

By end of week 2 of implementing multi-tenancy properly:
1. **Tenancy model decided** with documented rationale
2. **Schema includes account_id on every tenant-owned table** with proper indexes
3. **Application middleware enforcing tenant scoping** at the framework level
4. **Postgres RLS** as defense-in-depth on production tables
5. **Service-role isolation**: clearly marked, audit-logged, CI-gated
6. **Per-route isolation tests** running on every PR
7. **Tenant-lifecycle operations** built (provisioning, suspension, deletion, export)
8. **Documentation** linked from /trust

Within 90 days:
- First enterprise security review passes citing the tenant-isolation architecture
- Zero cross-tenant data-leak incidents in production logs
- A pen-test report (internal or external) confirming no isolation bypass found
- Compliance mappings drafted for the certifications you'll seek (SOC 2, ISO 27001)

Within 12 months:
- 1+ formal security review / SOC 2 audit completed citing the architecture
- Multi-tenancy is invisible day-to-day because the framework enforces it correctly
- Engineering velocity is uncompromised: new features inherit tenant scoping automatically

---

## Common Pitfalls

- **"We'll add tenant isolation later."** The retrofit is 10x more expensive than building it correctly from week 1.
- **Schema-per-tenant for "future flexibility".** Almost always wrong; operational cost of silo without the isolation benefit.
- **RLS without application-layer enforcement.** Half-measure; relying on RLS alone is dangerous if any application path bypasses it.
- **Application-layer enforcement without RLS.** Half-measure the other direction; it relies entirely on application code being perfect.
- **Service role used on customer-facing requests.** The single most consequential anti-pattern. CI-gate it.
- **Cross-account foreign keys.** A specific bug pattern that's invisible until it leaks data. Enforce same-account FKs at the schema level.
- **Forgetting to scope cache keys.** Cross-tenant cache poisoning is real and silent.
- **Forgetting to scope background jobs.** A "send all daily emails" job that doesn't iterate per-account can leak data via templated content.
- **Not testing deletion.** Untested hard-delete surprise-fails when the first GDPR request lands.
- **Generic logging with no account_id tag.** Makes cross-tenant audit impossible.
- **Trusting users to send the right account_id.** Always derive from server-side session/JWT, never trust client-supplied IDs.

---

## Where Multi-Tenancy Plugs Into the Rest of the Stack

- [Data Trust](data-trust-chat.md) — multi-tenancy is one of the trust artifacts
- [Audit Logs](audit-logs-chat.md) — every event scoped per tenant; cross-tenant admin actions fully audited
- [Public API](public-api-chat.md) — API auth model maps API keys to a single account
- [Customer Support](customer-support-chat.md) — support-admin actions audited and customer-visible
- [Status Page](status-page-chat.md) — tenant-isolation bugs are highest-severity incidents
- [Incident Response](incident-response-chat.md) — cross-tenant data leaks have specific notification protocols
- [Database Providers](https://www.vibereference.com/backend-and-data/database-providers) — the database layer enabling RLS (Postgres-flavored is best for this)
- [Auth Providers](https://www.vibereference.com/auth-and-payments/auth-providers) — auth provides the user identity that resolves to tenant membership
- [PostHog Setup](posthog-setup-chat.md) — analytics scoped per tenant
- [Background Jobs Providers](https://www.vibereference.com/backend-and-data/background-jobs-providers) — jobs run with tenant context

---

## What's Next

Multi-tenancy is one of those topics that feels theoretical until your first enterprise prospect asks "show me your tenant isolation diagram" and you have to either provide one or lose the deal. The team that builds the model correctly in week 1 ships every subsequent feature inheriting tenant scoping for free. The team that defers it pays compounding interest — every new feature is a potential isolation bug, every refactor is a risk surface, and every enterprise prospect becomes a forensic-engineering project.

Build the discipline now. The schema decision, the middleware enforcement, the RLS policies, the CI gates — none of these are big projects in week 1. They're 6-month projects in month 18. Pay the small upfront cost; reap the recurring procurement-shortcut benefit for the life of the product.

---

[⬅️ Growth Overview](README.md)