# Database Indexing Strategy: B-trees, Composites, Partials, and the Query Patterns That Actually Get Faster
If you're running a SaaS in 2026, the indexes you create now will dictate whether queries run in 2ms or 2 seconds at 10x the data. Most founders skip indexing until the first slow-query alert, then add an index on every column "just in case," and discover three months later that writes are slow, disk usage is 2x what it should be, and the query planner still picks the wrong index. Indexing is one of the highest-leverage operational decisions in your stack — invisible when right, catastrophic when wrong.
A working indexing strategy answers: which queries are hot, which columns need indexes, what shape (B-tree / GIN / partial / composite), and how to validate that the index is actually being used. Done well, the database stays calm under load and the team stops talking about query performance. Done badly, you're tuning indexes weekly while users complain about slow dashboards.
This guide is the implementation playbook for indexing that scales — the analysis to find what to index, the index types and when to use each, the gotchas (table locks during index builds, write amplification), and the review cadence that keeps indexes aligned with actual query patterns.
## Indexes Don't Make Queries Fast — Index-Aligned Queries Do
Most indexing problems stem from a misunderstanding. Get the model right first.
Help me understand what indexes actually do.
The mental model:
**An index is a sorted secondary copy of selected data.**
- A B-tree index on `users(email)` keeps `email` values sorted, with pointers to the rows
- A query like `WHERE email = 'x@y.com'` can do binary search (O(log n)) instead of full scan (O(n))
- A query like `WHERE LOWER(email) = 'x@y.com'` CANNOT use the index (different expression)
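A toy sketch of that model (illustrative Python, not a real B-tree; the data is made up): the "index" is a sorted copy of one column plus row pointers, a lookup binary-searches it, and a probe using a different expression misses entirely.

```python
# Illustrative sketch: the "index" is a sorted copy of one column + row pointers.
from bisect import bisect_left

rows = [{"id": i, "email": f"user{i}@example.com"} for i in range(10_000)]

# "Index" on users(email): sorted (email, row_position) pairs
email_index = sorted((r["email"], pos) for pos, r in enumerate(rows))
keys = [k for k, _ in email_index]

def indexed_lookup(email):
    """O(log n): binary-search the sorted index, then follow the pointer."""
    i = bisect_left(keys, email)
    if i < len(keys) and keys[i] == email:
        return rows[email_index[i][1]]
    return None

def seq_scan(email):
    """O(n): check every row."""
    return next((r for r in rows if r["email"] == email), None)

assert indexed_lookup("user4321@example.com") == seq_scan("user4321@example.com")

# Expression mismatch: the index is sorted by the raw email value, so an
# uppercase probe misses even though a case-insensitive match exists.
assert indexed_lookup("USER4321@EXAMPLE.COM") is None
```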
**Indexes have costs**:
- **Storage**: typically 5-30% of the table size per index
- **Write amplification**: every INSERT / UPDATE / DELETE updates every relevant index
- **Lock contention**: index updates can block during heavy writes
- **Memory**: hot indexes live in shared_buffers; less memory for data
**The "index dilemma"**:
- More indexes → faster reads (potentially)
- More indexes → slower writes (definitely)
- More indexes → more storage (definitely)
- More indexes → harder maintenance (vacuum, rebuild, etc.)
You want the MINIMUM set of indexes that makes hot queries fast.
**The "index isn't magic" fact**:
An index doesn't help if:
- Query doesn't match the index expression exactly
- Selectivity is too low (returning >5-10% of rows; full scan is faster)
- The index isn't in cache (reading from disk can be slower than a seq scan)
- Statistics are stale (planner doesn't know what to pick)
**The five index types worth knowing**:
| Type | Use For | Example |
|---|---|---|
| B-tree (default) | Equality + range | `id = X`, `created_at > Y` |
| Hash | Equality only (rarely used in PG) | `WHERE col = 'value'` |
| GIN | Full-text search, JSONB, arrays | `tags @> ARRAY['ai']` |
| BRIN | Very large tables, time-series | `created_at` on append-only data |
| GiST | Geospatial, full-text | PostGIS queries |
For Postgres, B-tree handles 80% of cases. GIN for JSONB / FTS / arrays. BRIN for time-series at scale.
For my system:
- Current index count
- Current slow-query top 10
- Current write throughput
Output:
1. The slow-query list
2. The index audit (what exists today)
3. The "likely needed indexes" hypothesis list
The biggest unforced error: adding indexes "just in case" before measuring. A 50-column table with 47 indexes has terrible write performance and the planner still might pick a bad plan. Indexes are a response to known query patterns, not an insurance policy. Measure first; index second.
## Find What to Index: The Slow-Query Audit
Don't guess what needs indexing. Measure.
Help me run the slow-query audit.
The Postgres-native audit:
**Step 1: Enable query logging**

```sql
-- in postgresql.conf or via ALTER SYSTEM
log_min_duration_statement = 100  -- log queries taking >100ms
```

(`log_statement_stats` and the per-module `log_*_stats` options exist too, but they're mutually exclusive with each other and extremely verbose; skip them unless you're debugging the planner.)

Plus the pg_stat_statements extension (essential; requires `shared_preload_libraries = 'pg_stat_statements'` and a restart):

```sql
CREATE EXTENSION pg_stat_statements;
```
**Step 2: Pull the top-N slowest / most-frequent queries**

```sql
SELECT
  query,
  calls,
  total_exec_time,
  mean_exec_time,
  rows
FROM pg_stat_statements
WHERE mean_exec_time > 50  -- average > 50ms
ORDER BY total_exec_time DESC
LIMIT 20;
```

This gives you the queries causing the most cumulative time.
**Step 3: For each top query, run EXPLAIN ANALYZE** (substitute real values for `$1` / `$2`)

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM users
WHERE tenant_id = $1 AND created_at > $2
ORDER BY created_at DESC
LIMIT 10;
```

Look for:

- `Seq Scan` on large tables = needs an index
- `Sort` operation = consider an index that orders correctly
- `Hash Join` on large tables = consider indexes on join columns
- High `Buffers: shared read` = data not in cache; an index could fit it
**Step 4: Categorize**
| Pattern | Signal | Fix |
|---|---|---|
| Seq scan, low selectivity | Most rows match; full scan is OK | Don't index |
| Seq scan, high selectivity | Filter selects few rows | Add index |
| Sort + LIMIT | Sorted ORDER BY | Composite index matching ORDER BY |
| Multi-condition WHERE | Several columns filtered together | Composite index |
| JOIN on large tables | FK lookup | Index on FK column |
| EXISTS subquery | Selectivity check | Index on subquery columns |
**Step 5: Top-10 list**
Output the top-10 queries-to-index with:
- Query pattern
- Current latency (ms)
- Suggested index
- Expected latency improvement
Tools beyond pg_stat_statements:
- pganalyze ($) — managed Postgres performance + index recommendations
- pgHero (free) — visual UI on top of pg_stat_statements
- Datadog Database Monitoring — APM-style query analysis
- Supabase / Neon / RDS Performance Insights — cloud-managed alternatives
For my database:
- Top-20 slow queries
- The pg_stat_statements output
- The categorization
Output:
- The slow-query top 10
- The fix categorization
- The proposed indexes
The biggest audit mistake: **acting on synthetic benchmarks instead of production traffic.** A query that's slow in a 100-row dev DB might be fast in a 10M-row prod DB (or vice versa). Measure on real traffic. `pg_stat_statements` is built for this.
## Composite Indexes: The Order Matters
Most performance wins come from composite indexes. Get the column order right.
Help me design composite indexes.
The principle: **column order in a composite index matters.**

A composite index `(a, b, c)` works like a sorted directory:

- First sort by `a`
- Within the same `a`, sort by `b`
- Within the same `(a, b)`, sort by `c`

This means it can serve queries:

- `WHERE a = ?`
- `WHERE a = ? AND b = ?`
- `WHERE a = ? AND b = ? AND c = ?`
- `WHERE a = ? AND b > ?`

But NOT:

- `WHERE b = ?` (alone)
- `WHERE c = ?` (alone)
- `WHERE a > ? AND b = ?` (can't use `b` efficiently after a range on `a`)

The "leftmost prefix" rule:

The composite index can serve any query that uses a leftmost prefix of its columns.

`(tenant_id, user_id, created_at)` serves:

- `WHERE tenant_id = ?` ✓
- `WHERE tenant_id = ? AND user_id = ?` ✓
- `WHERE tenant_id = ? AND user_id = ? AND created_at > ?` ✓
- `WHERE user_id = ?` ✗ (skipped tenant_id)
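The leftmost-prefix rule can be sketched as a small check (illustrative Python; `usable_prefix` is a made-up helper, not a Postgres API):

```python
# Sketch of the leftmost-prefix rule: an index serves a query only up to the
# point where its columns stop matching the query's filters.
def usable_prefix(index_cols, eq_cols, range_col=None):
    """Return how many leading index columns the query can use."""
    used = 0
    for col in index_cols:
        if col in eq_cols:
            used += 1
        elif col == range_col:
            return used + 1  # a range can use one more column, then stops
        else:
            break
    return used

idx = ["tenant_id", "user_id", "created_at"]
assert usable_prefix(idx, {"tenant_id"}) == 1
assert usable_prefix(idx, {"tenant_id", "user_id"}) == 2
assert usable_prefix(idx, {"tenant_id", "user_id"}, range_col="created_at") == 3
assert usable_prefix(idx, {"user_id"}) == 0  # skipped tenant_id: index unusable
```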
The column ordering algorithm:

For a query like:

```sql
SELECT * FROM activities
WHERE tenant_id = $1 AND status = 'active' AND created_at > $2
ORDER BY created_at DESC
LIMIT 50;
```

Order columns by:

1. **Equality columns first** (most selective equality first)
   - `tenant_id` (very selective; ~1% of rows per tenant)
   - `status` (less selective; many rows per status)
2. **Range / inequality columns next**
   - `created_at >` is a range
3. **ORDER BY columns last** (in matching order)
   - `created_at DESC`

Result: `(tenant_id, status, created_at DESC)` or `(tenant_id, status, created_at)` (the planner can satisfy DESC by scanning an ASC index backward)
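The three steps above can be sketched as a function (illustrative Python; the selectivity numbers are assumed estimates, not measured values):

```python
# Sketch of the column-ordering heuristic: equality columns first (most
# selective first), then range columns, then remaining ORDER BY columns.
def order_index_columns(equality, range_cols, order_by):
    """equality: {column: estimated fraction of rows matched}."""
    eq_sorted = [c for c, _ in sorted(equality.items(), key=lambda kv: kv[1])]
    tail = range_cols + [c for c in order_by if c not in range_cols]
    return eq_sorted + tail

cols = order_index_columns(
    equality={"tenant_id": 0.01, "status": 0.30},  # tenant_id is more selective
    range_cols=["created_at"],
    order_by=["created_at"],
)
assert cols == ["tenant_id", "status", "created_at"]
```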
The "covering index" pattern:

Add columns to the index that you SELECT, so the query never touches the table:

```sql
-- Without covering: index lookup → table fetch
CREATE INDEX idx_activities_tenant_created ON activities(tenant_id, created_at DESC);

-- With covering: index returns all needed columns
CREATE INDEX idx_activities_tenant_created_covering ON activities(tenant_id, created_at DESC)
INCLUDE (id, name, status);
```

Use INCLUDE for "Index-Only Scan" performance. Don't over-include (the index becomes huge).
Common composite patterns:

| Query Pattern | Index |
|---|---|
| `WHERE tenant_id = ? ORDER BY created_at DESC LIMIT N` | `(tenant_id, created_at DESC)` |
| `WHERE user_id = ? AND status = ?` | `(user_id, status)` |
| `WHERE tenant_id = ? AND email = ?` | `(tenant_id, email)` (with UNIQUE constraint) |
| `WHERE org_id = ? AND created_at BETWEEN ? AND ?` | `(org_id, created_at)` |
| `WHERE customer_id = ? AND status IN (...)` | `(customer_id, status)` |
Anti-patterns:
- Index on every column individually (Postgres can combine, but composite is usually faster)
- Composite index where leftmost column is rarely filtered
- Composite index too wide (adds storage; rarely full-prefix-matched)
For my queries:
- The top-5 multi-condition queries
- The proposed composite indexes
- The column-order justification per index
Output:
- The composite index proposals
- The leftmost-prefix verification per index
- The covering-include decisions
The biggest composite-index mistake: **wrong column order.** An index on `(status, tenant_id, created_at)` doesn't help queries that filter only by `tenant_id`, because `status` leads the index and most queries don't filter by status alone. The fix: lead with the most-frequently-filtered, most-selective column. Tenant ID is almost always first in multi-tenant SaaS.
## Specialty Indexes: When B-tree Isn't Enough
For specific patterns, B-tree is wrong. Know when to switch.
Help me identify when to use specialty indexes.
GIN for JSONB / arrays / full-text:

```sql
-- JSONB containment queries
CREATE INDEX idx_events_payload ON events USING GIN (payload);
SELECT * FROM events WHERE payload @> '{"type": "click"}';

-- Array containment
CREATE INDEX idx_users_tags ON users USING GIN (tags);
SELECT * FROM users WHERE tags @> ARRAY['premium'];

-- Full-text search
CREATE INDEX idx_articles_fts ON articles USING GIN (to_tsvector('english', body));
SELECT * FROM articles WHERE to_tsvector('english', body) @@ to_tsquery('postgres');
```

GIN is slower to write (3-10x), faster to read for these patterns.
BRIN for very large append-only tables:

```sql
CREATE INDEX idx_events_created_at_brin ON events USING BRIN (created_at);
```

- Tiny (1-10MB for 1B rows vs 30GB for B-tree)
- Works on ranges; less precise than B-tree
- Use when:
  - Data is append-only and naturally sorted
  - Table is huge (100M+ rows)
  - Range queries dominate
GiST for geospatial:

```sql
CREATE INDEX idx_locations_geom ON locations USING GIST (geom);
SELECT * FROM locations WHERE ST_DWithin(geom, $1, 1000);
```

PostGIS uses GiST for geospatial. Standard for geo queries.

Hash (rarely useful in PG, but exists):

```sql
CREATE INDEX idx_users_email_hash ON users USING HASH (email);
```

Only equality; no range. B-tree usually wins. Skip unless you have a specific reason.
Partial indexes (the underused power tool):

Index only specific rows:

```sql
-- Only active subscriptions (skip the cancelled millions)
CREATE INDEX idx_subscriptions_active ON subscriptions(tenant_id, plan_id)
WHERE status = 'active';

-- Only unread notifications
CREATE INDEX idx_notifications_unread ON notifications(user_id, created_at DESC)
WHERE read_at IS NULL;
```

When only a small slice of rows matches a constant filter, a partial index dramatically reduces size.

Use when:

- Most rows have a status that's rarely queried (`cancelled`, `archived`, `deleted`)
- You always filter by that status anyway
- Index size matters
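A quick simulation of the size effect (illustrative Python; the 1%-active ratio and table contents are assumptions, not real data):

```python
# A partial index only stores rows matching its WHERE clause, so its entry
# count scales with the matching fraction.
subscriptions = [
    {"id": i, "status": "active" if i % 100 == 0 else "cancelled"}
    for i in range(100_000)
]

full_index = [(r["id"], pos) for pos, r in enumerate(subscriptions)]
partial_index = [
    (r["id"], pos) for pos, r in enumerate(subscriptions) if r["status"] == "active"
]

# ~1% of rows are active, so the partial index holds ~1% of the entries
assert len(partial_index) * 100 == len(full_index)
```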
Expression indexes:

For queries with expressions:

```sql
-- Case-insensitive email lookup
CREATE INDEX idx_users_email_lower ON users (LOWER(email));
SELECT * FROM users WHERE LOWER(email) = 'x@y.com';  -- uses index

-- Date truncation (works when created_at is timestamp without time zone;
-- on timestamptz, DATE_TRUNC('day', ...) isn't immutable and can't be indexed directly)
CREATE INDEX idx_events_day ON events (DATE_TRUNC('day', created_at));
SELECT * FROM events WHERE DATE_TRUNC('day', created_at) = '2026-04-30';
```

The query expression must EXACTLY match the index expression.
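Why the exact match matters can be sketched with a dict standing in for the expression index (illustrative Python; the data is made up):

```python
# The "expression index" is keyed by the *result* of LOWER(email), so only a
# probe that applies the same expression can hit it.
users = [{"id": 1, "email": "Bob@Example.com"}, {"id": 2, "email": "amy@example.com"}]

lower_email_idx = {u["email"].lower(): u for u in users}

# WHERE LOWER(email) = 'bob@example.com' -> same expression: index hit
assert lower_email_idx["bob@example.com"]["id"] == 1

# WHERE email = 'Bob@Example.com' -> different expression: the index never
# stored the raw value as a key, so it cannot answer this query
assert "Bob@Example.com" not in lower_email_idx
```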
Unique indexes:

```sql
-- Standard unique index
CREATE UNIQUE INDEX idx_users_email ON users(email);

-- Tenant-scoped unique
CREATE UNIQUE INDEX idx_users_tenant_email ON users(tenant_id, email);

-- Conditional unique (deleted rows can repeat)
CREATE UNIQUE INDEX idx_users_email_active ON users(email)
WHERE deleted_at IS NULL;
```
For my system:
- JSONB queries that need GIN
- Append-only tables that fit BRIN
- Filters that fit partial indexes
- Expressions that need expression indexes
Output:
- The specialty-index opportunities
- The build order (most-impactful first)
- The size-vs-speed tradeoff per index
The biggest specialty-index mistake: **using B-tree on JSONB columns and wondering why containment queries are slow.** `WHERE payload @> '...'` can't use a B-tree on `payload`. Switch to GIN. Same for full-text — don't rely on LIKE / ILIKE with leading wildcards on big tables; use `tsvector` + GIN.
## The Tenant-ID-First Rule for Multi-Tenant SaaS
For multi-tenant SaaS, the indexing rule is non-negotiable. Per [multi-tenancy-chat](multi-tenancy-chat.md).
Help me apply tenant-isolation in indexes.
The rule: for multi-tenant tables, tenant_id is ALWAYS the first column in composite indexes.
Why:
- Every query must filter by tenant_id (security)
- tenant_id is highly selective (each tenant sees ~1% or less of rows)
- The index can scan only the tenant's slice efficiently
Examples:

```sql
-- Right
CREATE INDEX idx_orders_tenant_status ON orders(tenant_id, status, created_at DESC);

-- Wrong (tenant_id not first; query has to scan more)
CREATE INDEX idx_orders_status_tenant ON orders(status, tenant_id, created_at DESC);
```
The "no-tenant-id-no-index-use" rule:

Reject queries in code review that don't include tenant_id in the WHERE clause:

```ts
// Wrong — full table scan; security risk
const orders = await db.orders.findMany({ where: { status: 'open' } });

// Right
const orders = await db.orders.findMany({
  where: { tenant_id: ctx.tenantId, status: 'open' },
});
```
This is both an index-performance issue AND a security concern (cross-tenant data leak).
ORM / query-builder middleware:

Build it into the data layer:

```ts
// Sketch of a Drizzle-style helper; adapt to your query builder's API
function tenantScoped(query, tenantId) {
  return query.where(eq(table.tenant_id, tenantId));
}
```

Force tenant-scoping; never bypass.
The "shared tables" exception:

Some tables aren't per-tenant:

- `users` (global identity)
- `regions` (lookup)
- `feature_flags` (global)

These don't need tenant_id; index by their natural key.
For my system:
- Audit: which tables have tenant_id?
- Audit: which indexes start with tenant_id?
- Fix: any tenant-scoped tables with leading non-tenant column?
Output:
- The tenant-table audit
- The fix list (indexes to drop / rebuild)
- The data-layer enforcement plan
The biggest tenant-indexing mistake: **forgetting tenant_id in the index leading position.** A `users(email)` unique index in a multi-tenant SaaS forces all tenants to share an email namespace — Acme can't have a user `bob@example.com` if Beta already does. Fix: `(tenant_id, email)` UNIQUE. Fixes both performance and tenant isolation.
## Build Indexes Without Locking the Table
Adding an index in production must not lock writes. Get this right.
Help me ship indexes safely.
The Postgres-specific concern:

A default `CREATE INDEX` locks the table against writes for the duration of the build.

For a 100M-row table, this can block writes for 30-60 minutes. Bad for production.

The fix: `CREATE INDEX CONCURRENTLY`:

```sql
CREATE INDEX CONCURRENTLY idx_orders_tenant_status
ON orders(tenant_id, status, created_at DESC);
```

- Doesn't lock writes
- Slower (roughly 2-3x slower than a blocking build)
- Can fail mid-build, leaving an invalid index (must drop it and retry)
- Cannot run inside a transaction
The migration pattern:

```sql
-- Migration file (Drizzle / Prisma migration)
-- DON'T wrap in BEGIN/COMMIT
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_tenant_status
ON orders(tenant_id, status, created_at DESC);
```
Verifying the build:

```sql
-- pg_index.indisvalid is false if a CONCURRENTLY build failed
SELECT c.relname, i.indisvalid
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE c.relname = 'idx_orders_tenant_status';
```

If `indisvalid` is false: the build failed; drop the invalid index and rebuild.
Dropping safely:

```sql
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_tenant_status_old;
```

Same rules: no transaction; doesn't lock the table.
Migration tools:

- Drizzle — supports `CONCURRENTLY` via raw SQL
- Prisma — has options; check docs for current state
- Atlas / Sqitch / Liquibase — explicit support
- Manual — for production-critical indexes, sometimes manual SQL is safer than letting the migration tool do it
Disk-space sanity check:

Before building a large index:

```sql
SELECT pg_size_pretty(pg_relation_size('orders'));
```

The index will likely be 5-30% of that size. Make sure you have disk headroom.
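A back-of-the-envelope headroom check using that 5-30% rule of thumb (illustrative Python; the 80 GB table size is an assumption):

```python
# Estimate the index size range from the table size, using integer math.
def index_size_estimate(table_bytes, low_pct=5, high_pct=30):
    """Return (low, high) index-size estimates in bytes."""
    return table_bytes * low_pct // 100, table_bytes * high_pct // 100

table_bytes = 80 * 1024**3  # an 80 GB table
lo, hi = index_size_estimate(table_bytes)
assert (lo, hi) == (4 * 1024**3, 24 * 1024**3)  # expect roughly a 4-24 GB index
```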
The "test in staging first" rule:

For any index on a >10M-row table:

- Test in staging with production-like data
- Time the build
- Schedule the production build during a low-traffic window
- Have a rollback plan
For my deployments:
- The CONCURRENTLY conventions
- The migration tool's support
- The staging-first plan
Output:
- The migration patterns
- The verification queries
- The rollback plan
The biggest deployment mistake: **plain `CREATE INDEX` in a production migration on a large table.** Site goes down for 30 minutes; SLA breached; on-call scrambles. Always `CONCURRENTLY` for production indexes on tables over 100K rows. Test in staging; verify size impact; deploy during low-traffic.
## Monitor Index Usage and Drop the Dead Ones
Indexes accumulate. Audit regularly; drop the unused.
Help me find and drop unused indexes.
The diagnostic queries:
1. Indexes never used:

```sql
SELECT
  s.schemaname,
  s.relname       AS table_name,
  s.indexrelname  AS index_name,
  s.idx_scan,
  s.idx_tup_read,
  pg_size_pretty(pg_relation_size(s.indexrelid)) AS size
FROM pg_stat_user_indexes s
JOIN pg_index i USING (indexrelid)
WHERE s.idx_scan = 0
  AND NOT i.indisunique
  AND NOT i.indisprimary
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

This shows indexes with ZERO scans since stats reset. Strong drop candidates.
But: don't drop unique / primary indexes (they enforce constraints). Don't drop indexes used during failover / occasional reports.
2. Duplicate indexes:

```sql
SELECT
  pg_size_pretty(SUM(pg_relation_size(idx))::BIGINT) AS size,
  array_agg(idx::regclass) AS duplicate_indexes
FROM (
  SELECT
    indexrelid AS idx,
    (indrelid::TEXT || E'\n' || indclass::TEXT || E'\n' || indkey::TEXT || E'\n' ||
     COALESCE(indexprs::TEXT, '') || E'\n' || COALESCE(indpred::TEXT, '')) AS key
  FROM pg_index
) sub
GROUP BY key
HAVING COUNT(*) > 1
ORDER BY SUM(pg_relation_size(idx)) DESC;
```

(The COALESCE terms keep expression and partial indexes from false-matching.) Duplicates are pure waste. Drop one of each pair.
3. Bloated indexes:

Indexes can become bloated from heavy update churn:

```sql
-- Use the pgstattuple extension
CREATE EXTENSION pgstattuple;
SELECT * FROM pgstatindex('idx_orders_tenant_status');
```

If `avg_leaf_density` is low (under ~50%), rebuild with `REINDEX INDEX CONCURRENTLY idx_orders_tenant_status;` (Postgres 12+).
4. Index hit ratio:

```sql
SELECT
  sum(idx_blks_hit) / nullif(sum(idx_blks_hit + idx_blks_read), 0) AS index_hit_ratio
FROM pg_statio_user_indexes;
```

Goal: > 99%. Below 90% = indexes don't fit in shared_buffers (need more memory or fewer indexes).
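The ratio that query computes, as plain arithmetic (illustrative Python; the counter values are made up):

```python
# hits / (hits + reads), using counters like those in pg_statio_user_indexes.
def index_hit_ratio(blks_hit, blks_read):
    """Fraction of index block fetches served from shared_buffers."""
    total = blks_hit + blks_read
    return blks_hit / total if total else None

assert index_hit_ratio(990, 10) == 0.99  # right at the warning line
assert index_hit_ratio(0, 0) is None     # no traffic yet
```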
The pragmatic drop process:

1. Identify a candidate (zero scans, large)
2. Verify by checking pg_stat_statements for queries that COULD use it
3. Mark it for drop in a deferred migration
4. Wait 1-2 weeks (in case stats were just reset)
5. Drop with `DROP INDEX CONCURRENTLY`
Pre-drop safety check:

One trick is to "soft-disable" the index first so the planner ignores it while writes still maintain it. Note this is a direct system-catalog update (unsupported and superuser-only); test it in staging before touching production:

```sql
-- Make the planner ignore the index (writes still maintain it)
UPDATE pg_index SET indisvalid = false WHERE indexrelid = 'idx_to_test'::regclass;

-- Watch query performance for 24-48 hours
-- If fine, drop the index
DROP INDEX CONCURRENTLY idx_to_test;

-- If problems, re-enable
UPDATE pg_index SET indisvalid = true WHERE indexrelid = 'idx_to_test'::regclass;
```
Quarterly cadence:
- Run the unused-index query
- Identify drops
- Schedule drops during low-traffic
- Document why each was dropped
For my database:
- Current unused-index list
- Current duplicate-index list
- Current index hit ratio
Output:
- The unused-index report
- The drop plan
- The index hit-ratio analysis
The biggest unused-index mistake: **never auditing.** Indexes accumulate over years; old features leave behind their indexes. A typical mature SaaS has 30-50% indexes that haven't been used in 6 months. Dropping them frees disk + speeds up writes + reduces noise. Quarterly index-cleanup is cheap and high-value.
## Avoid Common Pitfalls
Recognizable failure patterns.
The indexing mistake checklist.
Mistake 1: Index everything "just in case"
- Slows writes; wastes disk
- Fix: index based on measured slow queries
Mistake 2: Wrong column order in composite
- Leftmost-prefix unused
- Fix: most-selective + most-frequently-filtered first
Mistake 3: B-tree on JSONB / arrays
- Containment queries can't use it
- Fix: GIN
Mistake 4: Plain CREATE INDEX in production
- Locks table for minutes / hours
- Fix: CREATE INDEX CONCURRENTLY
Mistake 5: Tenant_id not leading column
- Index isn't selective enough
- Fix: tenant_id always first in multi-tenant tables
Mistake 6: Missing index on FK columns
- JOINs are slow; cascade deletes lock
- Fix: index every FK column
Mistake 7: Index exists but the planner doesn't pick it
- Stale statistics
- Fix: ANALYZE table_name; check planner cost
Mistake 8: Function on indexed column in WHERE
- `WHERE LOWER(email) = ?` doesn't use a plain email index
- Fix: expression index OR change the query
Mistake 9: Over-indexed write-heavy tables
- Inserts are slow
- Fix: drop unused; consider partial indexes
Mistake 10: Never auditing
- Bloat + dead indexes accumulate
- Fix: quarterly review
The quality checklist:

- Top 10 slow queries have appropriate indexes
- All composite indexes lead with tenant_id (multi-tenant tables)
- All FK columns indexed
- JSONB / FTS / array columns use GIN
- Append-only large tables use BRIN
- Conditional indexes use partial
- Production indexes built CONCURRENTLY
- Quarterly unused-index audit
- Quarterly bloat audit
- Index hit ratio > 99%
For my database:
- Audit against checklist
- Top 3 fixes
Output:
- Audit results
- Top 3 fixes prioritized
- The "v2 indexing" plan
The single most-common mistake: **adding indexes reactively without measuring.** A slow query gets reported; a panicked engineer adds an index without checking if it's used; performance doesn't improve; a second index gets added; six indexes later, the query is still slow because the planner's ignoring all of them. Fix: ALWAYS `EXPLAIN ANALYZE` before AND after adding an index. Verify the index is used. Verify performance improved. If not, drop the index.
---
## What "Done" Looks Like
A working indexing strategy in 2026 has:
- Indexes derived from measured slow queries (`pg_stat_statements`)
- Composite indexes with tenant_id leading (multi-tenant tables)
- All FK columns indexed
- JSONB / FTS / array columns indexed with GIN
- Production indexes built `CONCURRENTLY`
- Quarterly audit for unused indexes
- Quarterly check on index bloat
- Index hit ratio monitored (>99% goal)
- Indexes documented (which queries each serves)
The hidden cost of weak indexing: **invisible slowness.** Queries that ran in 50ms at 10K rows now run in 5 seconds at 10M rows; users complain about "the dashboard is slow today" without realizing it's been getting slower for months. The cost shows up as customer support tickets, churn, and engineering time spent debugging "why is the database CPU so high?" Indexing isn't glamorous; it's the difference between a SaaS that scales and one that doesn't.
## See Also
- [Database Migrations](database-migrations-chat.md) — schema changes affect indexing
- [Multi-Tenancy](multi-tenancy-chat.md) — tenant-isolation principles
- [Caching Strategies](caching-strategies-chat.md) — caching reduces DB load
- [Performance Optimization](performance-optimization-chat.md) — broader perf context
- [Backups & Disaster Recovery](backups-disaster-recovery-chat.md) — index rebuild post-restore
- [Search](search-chat.md) — search relies on indexes
- [Real-Time Collaboration](real-time-collaboration-chat.md) — query patterns
- [Service Level Agreements](service-level-agreements-chat.md) — query performance affects SLA
- [Audit Logs](audit-logs-chat.md) — common query target
- [VibeReference: Database Providers](https://www.vibereference.com/backend-and-data/database-providers) — Postgres / MySQL / etc.
- [VibeReference: Postgres](https://www.vibereference.com/backend-and-data/postgres) — Postgres-specific
- [VibeReference: SQL](https://www.vibereference.com/backend-and-data/sql) — SQL fundamentals
- [VibeReference: BI Analytics Tools](https://www.vibereference.com/devops-and-tools/bi-analytics-tools) — BI queries need indexes too
- [VibeReference: Error Monitoring Providers](https://www.vibereference.com/devops-and-tools/error-monitoring-providers) — slow-query alerts
[⬅️ Day 6: Grow Overview](README.md)