# Logging Strategy & Structured Logs: What to Log, How to Format It, and How Not to Pay $40K/mo for Logs Nobody Reads
If you're running a SaaS in 2026, the logging decisions you make in year 1 determine whether you can debug production in year 3. Most founders default to console.log("got here") peppered through code, then panic-add Datadog after the first production incident, then watch the bill spike to $5K/mo because everything is being shipped to expensive ingestion. Logs are the line item where founders both under-invest (no signal when something breaks) AND over-invest (paying for noise that nobody reads).
A working logging strategy answers: what events are worth logging, what shape should they take, where do they go, and how do we keep the cost rational at scale. Done well, logs are the first artifact you reach for during incidents, the source of truth for "what actually happened," and the cheap insurance for nights you don't have to spend at the keyboard. Done badly, logs are an expensive blob that nobody searches because the signal is buried under three orders of magnitude of noise.
This guide is the implementation playbook for shipping a logging strategy that scales — what to log (and what not to), structured format with stable schemas, log levels that mean something, sampling at scale, and the cost discipline that prevents the $40K/mo log surprise.
## Logs vs. Metrics vs. Traces — Pick the Right Tool
Most "log everything" decisions stem from confusing logs with metrics. Get the categories straight.
Help me distinguish logs from metrics from traces.
The three observability primitives:
**1. Logs**
- Discrete events with rich context
- "User X did Y at time Z because reason W"
- High cardinality (every event is unique)
- Searchable / queryable text
- Best for: incident debugging, auditing, forensics
Cost: high (per-event ingestion + storage + query)
**2. Metrics**
- Aggregated numbers over time
- "API p95 latency over last 5 minutes"
- Low cardinality (predefined dimensions)
- Math-friendly (sum, avg, percentile)
- Best for: dashboards, alerts, capacity planning
Cost: low (pre-aggregated; tiny storage)
**3. Traces**
- Causal chains across services
- "Request A → DB query B → API call C → response"
- Per-request distributed
- Tied to spans / context
- Best for: root-cause analysis in distributed systems
Cost: medium (sampling necessary at scale)
**The "right tool" rule**:
If you're asking:
- "Did this specific request succeed?" → Log
- "What's the average latency this hour?" → Metric
- "Where in the call chain did this slow down?" → Trace
Don't use logs as metrics. "Count the number of log lines matching X" is a metric query that's 100x more expensive when you're paying per log byte.
**The OpenTelemetry framing**:
OpenTelemetry standardized these three. In 2026, modern observability stacks emit OTel-formatted logs / metrics / traces, and a single backend (Datadog / New Relic / Grafana Cloud / Honeycomb / Vercel Observability) ingests all three.
**The 80/20 reality**:
For most indie SaaS:
- 80% of debugging happens in logs
- 15% in metrics dashboards
- 5% in traces
Until you have multi-service architecture, logs + metrics is sufficient.
For my system:
- Current stack
- Logs / metrics / traces presence
- Where each is used today
Output:
1. The current observability inventory
2. The "where each fits" map
3. The gaps to fill
The biggest unforced error: using logs as metrics. Logging "request_received" on every API call to count requests-per-second produces gigabytes of logs and a $5K/mo bill. The fix: emit a metric (api_requests_total{endpoint=...}) — counted in metrics backend at near-zero cost. Reserve logs for events where you need the rich context, not for counting.
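To make that concrete, here is a minimal sketch of "count with a metric, log only the exceptions." It assumes a Prometheus-style counter from the prom-client package; the `logger` is a stand-in for whatever structured logger you use (see the structured-logging section below).

```typescript
import { Counter } from 'prom-client';

// Low-cardinality counter: counting belongs in metrics, not logs.
const apiRequestsTotal = new Counter({
  name: 'api_requests_total',
  help: 'Total API requests',
  labelNames: ['endpoint', 'status'],
});

export function recordRequest(endpoint: string, status: number) {
  // Near-zero cost; the metrics backend aggregates for free.
  apiRequestsTotal.inc({ endpoint, status: String(status) });

  // Reserve a log for the cases where the rich context matters.
  if (status >= 500) {
    logger.error('api.request.failed', { endpoint, status });
  }
}
```

The same request volume that would cost real money as log lines costs almost nothing as a labeled counter, and the dashboard query is a one-liner.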
## What to Log — The Inclusion / Exclusion List
Most "log everything" instincts fail at scale. Be deliberate.
Help me decide what to log.
The "always log" list:
**1. State transitions**
- User signed up
- Subscription created / cancelled / paused
- Plan upgraded / downgraded
- Permission changed
- Critical resource created / deleted
These are auditable + infrequent + high-context.
**2. External boundary events**
- Outgoing HTTP request (URL, status, duration)
- Webhook received (source, event type)
- Email sent (recipient, template, status)
- Payment processed (amount, status)
- AI / LLM call (model, tokens, latency, cost)
External boundaries are where things go wrong; you must have records.
**3. Errors**
- Caught exceptions with full stack trace
- Failed API calls (with context)
- Validation errors (only when meaningful)
- Database errors (with query and parameters redacted)
**4. Security events**
- Login (success + failure)
- Authorization denials
- Password reset / 2FA changes
- Suspicious activity (rate-limit hits, etc.)
Per [audit-logs-chat](audit-logs-chat.md): security logs often need different retention.
**5. Business-critical actions**
- Customer-facing email sends
- Refunds issued
- Permissions / role changes
- Data exports
These have audit / compliance value beyond debugging.
**The "never log" list**:
**1. PII / sensitive data**
- Plaintext passwords (never)
- Credit-card numbers
- SSNs / tax IDs
- Private message contents
- Health data (HIPAA)
If you have to log a sensitive field, mask it: `email=u***@example.com` (a minimal masking helper is sketched after this list).
**2. High-cardinality successful operations**
- "GET /api/users/123 → 200 OK" (every request; every user)
- Use metrics; not logs
**3. Internal heartbeats / pings**
- Health checks
- Cron-job "I'm alive" messages
- Most successful background jobs (only failures)
**4. Verbose debug-only chatter**
- "Entering function X"
- "Variable Y has value Z"
- These belong in DEBUG level (off in production)
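As flagged in the PII item above, here is a minimal masking sketch. The field names and rules are illustrative assumptions, not a standard; adapt them to your own data model and redaction policy.

```typescript
// Mask an email down to its first character plus domain: u***@example.com
function maskEmail(email: string): string {
  const [local, domain] = email.split('@');
  if (!local || !domain) return '***';
  return `${local[0]}***@${domain}`;
}

// Illustrative deny-list of keys that must never appear in clear text.
const SENSITIVE_KEYS = ['password', 'card_number', 'ssn'];

function redactFields(fields: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(fields)) {
    if (SENSITIVE_KEYS.includes(key)) {
      out[key] = '[REDACTED]';
    } else if (key === 'email' && typeof value === 'string') {
      out[key] = maskEmail(value);
    } else {
      out[key] = value;
    }
  }
  return out;
}
```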
**The "would I read this in 6 months?" test**:
For each log statement, ask:
- Is this useful in an incident?
- Is this auditable?
- Could someone find a bug from this?
If no to all three: don't log.
**The 100x cardinality rule**:
Cardinality is the number of unique values for a field.
- `user_id`: high cardinality (millions of unique values) — fine in logs; bad in metrics
- `endpoint`: low cardinality (50 endpoints) — fine in metrics
- `request_id`: highest cardinality (every request unique) — only in logs
When you log a high-cardinality field on every request, log volume explodes. Sample (see below) or aggregate.
For my codebase:
- Audit current log statements
- The "remove these" list
- The "add these" list
Output:
1. The keep / kill log audit
2. The redaction policy
3. The cardinality analysis
The biggest log-noise mistake: logging every successful request. A "successfully fetched user 123" log on every authenticated request creates massive volume. Successes don't need logging; emit a metric. Logs are for events where the rich context is the whole point — anomalies, state changes, errors, audit-trail-critical actions.
## Structured Logs Are Non-Negotiable
Plain-text logs ("User 123 signed up at 10:23") waste future-you's time. Structure everything.
Help me adopt structured logging.
The format: JSON, always.
```typescript
// Bad
logger.info(`User ${userId} signed up via ${source}`);

// Good
logger.info('user.signup', {
  user_id: userId,
  source: source,
  tenant_id: tenantId,
  ip: request.ip,
  user_agent: request.headers['user-agent'],
});
```
The good version is:
- Searchable: `WHERE event = "user.signup" AND tenant_id = "abc"`
- Aggregatable: `GROUP BY source`
- Filterable: any field is a query dimension
- Future-proof: new fields don't break parsers
Structured-log conventions:
Every log entry should have:
| Field | Purpose | Example |
|---|---|---|
| `timestamp` | When | `2026-04-30T10:23:45.123Z` |
| `level` | Severity | `INFO`, `WARN`, `ERROR` |
| `event` | What happened | `user.signup` |
| `request_id` | Tie to request | UUID |
| `tenant_id` | Tie to tenant | UUID |
| `user_id` | Tie to user | UUID |
| `service` | Which service | `api`, `worker` |
| `version` | Code version | git sha |
| `message` | Human-readable | "User signed up via Google" |
| (event-specific fields) | Context | `source`, `email_domain` |
Naming conventions:
- Event names: `noun.verb` form (`user.signup`, `subscription.cancelled`, `payment.refunded`)
- Field names: `snake_case`
- Boolean fields: `is_` or `has_` prefix (`is_paid`, `has_2fa`)
- Time fields: explicit suffix (`created_at`, `expires_at`)
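One way to keep these conventions honest in a TypeScript codebase is to encode the schema in types. This is a sketch, not a requirement of any particular logger: the event names and fields mirror the examples above, and `logger.info(event, fields)` is the generic shape used throughout this guide rather than a specific library's signature.

```typescript
// Base fields every entry carries (mirrors the schema table above).
interface BaseLogFields {
  request_id: string;
  tenant_id?: string;
  user_id?: string;
  service: 'api' | 'worker';
  version: string; // git sha
}

// Event names follow the noun.verb convention.
type LogEvent = 'user.signup' | 'subscription.cancelled' | 'payment.refunded';

// Event-specific context fields, keyed by event name.
interface EventFields {
  'user.signup': { source: string; email_domain: string };
  'subscription.cancelled': { plan: string; reason?: string };
  'payment.refunded': { amount_cents: number; currency: string };
}

// Thin typed wrapper over a hypothetical logger.info(event, fields) call.
function logEvent<E extends LogEvent>(
  event: E,
  fields: EventFields[E] & Partial<BaseLogFields>
): void {
  logger.info(event, fields);
}

// Usage: the compiler rejects unknown event names and missing fields.
logEvent('user.signup', { source: 'google', email_domain: 'example.com', request_id: 'req_1' });
```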
The "nothing in the message string" rule:
DON''T:
logger.info(`Processed order ${orderId} for ${customerEmail} totaling $${amount}`);
This forces text-parsing later. Instead:
logger.info('order.processed', {
order_id: orderId,
customer_email: customerEmail,
amount_cents: amountCents,
currency: 'USD',
});
The message is a human-readable summary; the FIELDS are queryable.
Choosing a logger:
| Logger | Language | Notes |
|---|---|---|
| Pino | Node | Fast; JSON-native; recommended |
| Winston | Node | More features; heavier |
| Bunyan | Node | Older; JSON-native |
| Structlog | Python | Standard for structured logs |
| Logrus / Zap / Slog | Go | Slog is std-lib in Go 1.21+ |
| Tracing | Rust | OpenTelemetry-compatible |
| ConsoleLogger | Browser | Plus event-tracking |
For Node.js / TypeScript: Pino is the default for a reason — fast, structured, simple.
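A minimal Pino setup sketch follows. The `redact` option is Pino's built-in field censoring; the specific paths, base bindings, and environment variables here are assumptions to adapt.

```typescript
import pino from 'pino';

const logger = pino({
  level: process.env.LOG_LEVEL ?? 'info',
  // Base bindings attached to every line (values here are assumptions).
  base: { service: 'api', version: process.env.GIT_SHA },
  // Pino redacts these paths before the line is written.
  redact: {
    paths: ['password', '*.password', 'card_number', 'req.headers.authorization'],
    censor: '[REDACTED]',
  },
  timestamp: pino.stdTimeFunctions.isoTime,
});

// Pino's signature: structured fields first, human-readable message second.
logger.info({ event: 'user.signup', user_id: 'u_123', source: 'google' }, 'User signed up');

export default logger;
```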
Trace / request correlation:
Every log within a request should share a `request_id`:
```typescript
// Middleware sets request_id
app.use((req, res, next) => {
  req.requestId = req.headers['x-request-id'] || uuid();
  res.setHeader('x-request-id', req.requestId);
  next();
});

// Pino child logger with request-scoped bindings
const log = logger.child({
  request_id: req.requestId,
  tenant_id: req.tenantId,
  user_id: req.userId,
});

// All logs from this request now carry the IDs
log.info('order.created', { order_id: orderId });
```
This lets you reconstruct a full request's journey from a single ID.
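If passing the child logger through every call site gets awkward, Node's built-in AsyncLocalStorage can carry it implicitly. A sketch under the assumption of an Express-style `app` and a root Pino `logger` defined elsewhere; adapt to your framework.

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';
import { randomUUID } from 'node:crypto';
import type { Logger } from 'pino';

const logStorage = new AsyncLocalStorage<Logger>();

// Middleware: bind a request-scoped child logger for everything downstream.
app.use((req, res, next) => {
  const requestId = (req.headers['x-request-id'] as string) ?? randomUUID();
  res.setHeader('x-request-id', requestId);
  logStorage.run(logger.child({ request_id: requestId }), next);
});

// Anywhere deeper in the call stack, no logger parameter needed.
export function getLogger(): Logger {
  return logStorage.getStore() ?? logger; // fall back to the root logger
}

// Usage inside a service function:
getLogger().info({ event: 'order.created', order_id: 'ord_42' }, 'Order created');
```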
For my codebase:
- Current logging library
- Migration to structured plan
- Schema convention
Output:
- The structured-log schema
- The logger choice
- The request-correlation pattern
The biggest structured-logging mistake: **partial structuring.** Some logs structured; some plain text; queries can't span both. The fix: 100% structured from the cutover; old plain logs eventually expire. Don't leave a mixed corpus.
## Log Levels That Mean Something
Log levels exist for a reason. Use them.
Help me use log levels correctly.
The standard levels (low to high severity):
TRACE / DEBUG
- Verbose, internal-state-level
- Only enabled in development or for specific debugging
- Examples: "entered function X", "variable Y has value Z"
- Production: usually OFF
INFO
- Normal operations worth noting
- State changes, business events
- Examples: "user.signup", "subscription.created"
- Production: ON
WARN
- Something is wrong but not failing
- Recoverable; system continues
- Examples: "rate.limit.hit", "deprecated.endpoint.called"
- Production: ON; review weekly
ERROR
- Something failed; user impact possible
- Caught exceptions, failed external calls
- Examples: "payment.failed", "db.query.failed"
- Production: ON; alert on patterns
FATAL
- System can''t continue
- Process about to crash / restart
- Examples: "config.missing", "db.connection.permanently.lost"
- Production: ON; immediate alert
The level filtering rule:
In production: log INFO and above; DEBUG/TRACE off.
Override for specific debugging:
- Set DEBUG for one user: `if (user.email === 'debug-user@...') log.level = 'debug'`
- Or feature-flag-controlled (per feature-flag-providers)
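A small sketch of that targeted override, assuming a Pino-style logger whose level is settable per child and an environment variable (`DEBUG_USER_IDS`) that this example invents for illustration:

```typescript
// Production stays at INFO; specific users can be bumped to DEBUG.
const DEBUG_USER_IDS = new Set((process.env.DEBUG_USER_IDS ?? '').split(',').filter(Boolean));

function loggerForRequest(userId: string | undefined) {
  const log = logger.child({ user_id: userId });
  if (userId && DEBUG_USER_IDS.has(userId)) {
    log.level = 'debug'; // raise verbosity for this request only
  }
  return log;
}
```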
The "ERROR is not for warnings" rule:
If something happens that should NOT trigger a page / alert, it's WARN.
Common confusion:
- "User entered wrong password" — INFO or DEBUG (not WARN; expected behavior)
- "Rate limit hit" — WARN (anomalous but not actionable)
- "Database connection failed" — ERROR (actionable)
- "Payment processor returned 500" — ERROR
The "log + throw" anti-pattern:
DON''T:
try {
await db.query(...);
} catch (e) {
logger.error('db query failed', { error: e });
throw e;
}
This double-logs (caller will also log + throw). Result: 5x the log volume for one error.
DO:
try {
await db.query(...);
} catch (e) {
// Add context; rethrow
throw new DatabaseError('user.lookup', e);
}
Log only at the boundary that handles the error finally (HTTP middleware, top-level handler, etc.).
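A sketch of that "log once at the boundary" pattern for an Express-style app. The `logger` and its `(event, fields)` shape follow the generic convention in this guide and are assumptions, as is the way errors carry context up the stack.

```typescript
import type { NextFunction, Request, Response } from 'express';

// The one place that logs the error; inner layers only add context and rethrow.
export function errorBoundary(err: Error, req: Request, res: Response, _next: NextFunction) {
  logger.error('request.failed', {
    request_id: req.headers['x-request-id'],
    error_name: err.name,
    error_message: err.message,
    stack: err.stack,
  });
  res.status(500).json({ error: 'internal_error' });
}

// Registered last, after all routes: app.use(errorBoundary);
```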
The level-budget rule:
Roughly:
- 95% of log volume should be INFO
- 4% WARN
- <1% ERROR
- <0.1% FATAL
If you have 50% ERROR logs: most are not really errors. Re-categorize.
For my code:
- Current level distribution
- Mis-categorized logs
- Filter strategy
Output:
- The level audit
- The re-categorization plan
- The production filter
The biggest log-level mistake: **logging everything as ERROR because "errors are important."** The result: noise floor is so high that real errors disappear. Use levels meaningfully; reserve ERROR for actionable failures; alerts fire only on real ERRORs.
## Sampling at Scale
At a certain point, logging every event isn't feasible. Sample wisely.
Help me design log sampling.
The reality:
At 10K req/sec, logging every request fills 1TB/day. Cost: $30K-50K/mo on most platforms. Most logs go unread.
The sampling strategies:
1. Head-based sampling
Decide at the start whether to log:
```typescript
// Sample 10% of requests
if (Math.random() < 0.1) {
  log.info('api.request', { endpoint, ... });
}
```
Pros: simple. Cons: random; might miss important events.
2. Selective sampling (best practice)
Always log certain events; sample the rest:
```typescript
if (
  isError(response) ||
  isHighValue(user) ||
  isUnusualEndpoint(endpoint) ||
  Math.random() < 0.01 // 1% of normal traffic
) {
  log.info('api.request', { ... });
}
```
Always log:
- Errors / 5xx responses
- Auth failures
- High-value customers
- Unusual endpoints / patterns
Sample the rest at 1-10%.
3. Tail-based sampling (for traces)
Decide AFTER request completes, based on outcome:
- Slow requests (>p99): keep all
- Error requests: keep all
- Normal: 1% sample
This requires buffering; usually traces, not logs.
4. Adaptive sampling
Adjust sampling rate based on volume:
- Low volume: log everything
- Medium volume: sample 10%
- High volume: sample 1%
Tools (Datadog, Grafana Cloud) often do this automatically.
5. Per-tenant sampling
In multi-tenant SaaS, sample by tenant:
- High-paying tenants: log 100%
- Free tier: log 5%
- Specific debug-targeted tenant: log 100% temporarily
The "never sample errors" rule:
Errors are rare and high-value. NEVER sample errors. Always log all errors.
Sample:
- Successful operations
- Health checks
- Routine background jobs
Don't sample:
- Errors
- Security events
- Audit-required actions
- Payment events
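One way to express all of the above is a single sampling decision. This is a sketch: the event prefixes, tiers, and rates are illustrative, and `tenant`, `endpoint`, `elapsed`, and `logger` are whatever your request context provides.

```typescript
// Encode the "never sample" rules, then sample routine traffic per tenant tier.
const NEVER_SAMPLE_PREFIXES = ['auth.', 'payment.', 'audit.', 'security.'];

function shouldLog(event: string, level: string, tenantTier: 'free' | 'paid'): boolean {
  if (level === 'error' || level === 'fatal') return true; // never sample errors
  if (NEVER_SAMPLE_PREFIXES.some((prefix) => event.startsWith(prefix))) return true;
  const rate = tenantTier === 'paid' ? 0.1 : 0.01; // per-tenant sampling rates
  return Math.random() < rate;
}

// Usage:
if (shouldLog('api.request', 'info', tenant.tier)) {
  logger.info('api.request', { endpoint, duration_ms: elapsed });
}
```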
Cost projection:
For a typical SaaS:
- 1000 req/sec average
- 200 bytes per log entry
- ≈ 200 KB/sec ≈ 17 GB/day = $X/mo
Sampling at 10%:
- 1.7GB/day
- 90% cost reduction
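To put numbers behind that projection, a tiny estimator; every input is an assumption to adjust, and the per-GB price is a parameter because it varies widely by vendor.

```typescript
// Back-of-the-envelope log cost estimator.
function monthlyLogCostUSD(opts: {
  requestsPerSecond: number;
  bytesPerEntry: number;
  sampleRate: number; // fraction of routine traffic kept, 0..1
  pricePerGB: number; // vendor ingestion price in USD
}): number {
  const bytesPerDay = opts.requestsPerSecond * opts.bytesPerEntry * 86_400 * opts.sampleRate;
  const gbPerMonth = (bytesPerDay * 30) / 1e9;
  return gbPerMonth * opts.pricePerGB;
}

// The projection above: 1000 req/sec at 200 bytes/entry is roughly 17 GB/day unsampled.
monthlyLogCostUSD({ requestsPerSecond: 1000, bytesPerEntry: 200, sampleRate: 1, pricePerGB: 1.27 });
// Dropping sampleRate to 0.1 cuts routine volume (and the corresponding cost) by ~90%.
```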
For my system:
- Current log volume + cost
- Sampling strategy
- Always-log allowlist
Output:
- The sampling rules
- The cost projection
- The "never sample" list
The biggest sampling mistake: **applying sampling to errors.** "We sample 1% of all logs" → 99% of errors are dropped → on-call can't debug. Always preserve errors; sample only high-volume routine events.
## Where Logs Go — Backend Choices
The destination matters for cost, query speed, and retention. Pick deliberately.
Help me pick a logs backend.
The options:
1. Cloud-native (Vercel / AWS CloudWatch / GCP Cloud Logging)
   - Pros: bundled with platform; zero setup
   - Cons: query DX often poor; export hard; pricing climbs
   - For Vercel: built-in logs are good for development; consider exporting to a dedicated log backend at scale.
2. Datadog Logs
   - Pros: industry leader; rich querying
   - Cons: expensive ($1.27/GB ingestion + retention)
   - Pricing: scales fast; $1K-10K/mo typical for indie SaaS
3. New Relic Logs
   - Similar to Datadog; competitive.
4. Grafana Cloud Logs / Loki
   - Pros: cheaper than Datadog; OSS Loki option
   - Cons: less polished than Datadog
5. Honeycomb
   - Pros: best for tracing; logs as events
   - Cons: different mental model than traditional logs
6. Better Stack (formerly Logtail)
   - Pros: indie-friendly; reasonable pricing
   - Cons: smaller community
7. Axiom
   - Pros: indie-friendly; cheap; modern UX
   - Cons: newer
8. Self-hosted ELK (Elasticsearch + Logstash + Kibana) / OpenSearch
   - Pros: control; potentially cheaper at huge scale
   - Cons: significant DevOps overhead; you operate it
9. ClickHouse / Apache Iceberg
   - Pros: cheap at extreme scale; SQL queries
   - Cons: more setup; not log-specialized
10. S3 + Athena (cold archive + query)
    - Pros: nearly free for archive; queryable when needed
    - Cons: query latency; not for live debugging
The pragmatic stack patterns:
Indie SaaS, < $1M ARR:
- Vercel logs / CloudWatch (development + light prod)
- Or: Better Stack / Axiom for searchability
- Cost: $0-200/mo
Growth-stage:
- Datadog or Grafana Cloud
- Sample aggressively
- Cost: $500-3K/mo
Mid-market:
- Datadog / New Relic full stack
- Plus archive to S3
- Cost: $3-15K/mo
Cost-sensitive at scale:
- Hot logs: Axiom / Better Stack
- Cold archive: S3
- Specific service: ClickHouse for high-volume
- Cost: 30-50% of Datadog equivalent
Retention strategy:
- Hot (queryable, fast): 7-30 days
- Warm (queryable, slower): 30-90 days
- Cold (archive, expensive to query): 90 days - 7 years
Compliance often mandates 1-7 years for security / audit logs (per audit-logs-chat).
For my system:
- Current backend
- Volume + cost
- Retention requirements
Output:
- The backend choice
- The retention tiers
- The cost projection
The biggest backend mistake: **shipping everything to expensive ingestion at default settings.** Default Datadog config + 1TB/day = $30K/mo. The fix: sample aggressively before shipping; route different log types to different backends; archive cold data to S3. The bill is fixable with engineering effort.
## Querying Logs — Make Them Findable
Logs you can't query are useless. Set up for discovery.
Help me make logs queryable.
The capabilities a logs backend should have:
1. Field-based filtering
level=ERROR AND tenant_id=abc AND service=api
If your tool requires text-search regex, you're missing structure.
2. Time-range filtering
Default last hour; selectable to last week / month.
3. Aggregations
COUNT(*) BY event over a time range — see top events.
4. Live tail
Stream new logs in real-time during incidents.
5. Saved queries
Common queries (errors per endpoint; tenant-specific) saved as one-clicks.
6. Dashboards
Log-based dashboards for: error rate, by-endpoint volume, slow-query frequency.
7. Alerts
When log pattern matches threshold, page someone:
- "5+ ERROR logs from payments service in 1 minute"
- "auth.login.failed > 100/min for one tenant" (brute-force)
The "common queries" cheatsheet:
For incident debugging:
- All logs for request_id X
- All errors in last 30 min
- All logs for tenant Y in last hour
- Slow operations (duration > 5s) in last hour
For quarterly review:
- Top 10 events by volume
- ERROR rate trend (week-over-week)
- New error types that appeared this month
The "log audit" exercise:
Quarterly:
- Top 20 events by volume — are they all valuable?
- Top 10 ERROR events — are they actionable?
- Events with zero queries — drop?
Logs that are emitted but never queried are pure cost. Audit and trim.
For my system:
- Common queries used during incidents
- Saved queries / dashboards
- Alert configurations
Output:
- The query catalog
- The dashboard list
- The alert rules
The biggest queryability mistake: **logs that can't be filtered by tenant.** During an incident, customer X reports a bug; you need their logs; without a `tenant_id` field, you're grepping. Always include tenant_id and request_id; query by them constantly.
## Cost Discipline — Don't Wake Up to a $40K Bill
Logging cost is one of the most common surprise expenses. Build the discipline.
Help me control logging cost.
The cost levers:
1. Volume reduction
- Audit and remove low-value logs
- Sample high-volume routine logs
- Use metrics instead of logs for counters
2. Field reduction
- Don''t log entire request bodies (sample fields)
- Don''t log deep object trees (extract key fields)
- Compress / truncate large strings
3. Routing
- Errors → expensive backend (full retention)
- Routine info → cheaper backend
- Audit logs → dedicated retention-mandate backend
4. Sampling at source
Apply sampling BEFORE shipping (saves ingestion cost):
```typescript
// In code, not in backend filtering
if (shouldSample(event)) {
  logger.info(event, fields);
}
```
vs. shipping everything and filtering in backend (still pay ingestion).
5. Aggregation pre-shipping
For high-cardinality counters, aggregate locally first:
```typescript
// Instead of one log per request
metricsClient.increment('api.requests', { endpoint });

// Sample 1% to keep examples
if (Math.random() < 0.01) {
  logger.info('api.request', { ... });
}
```
6. Retention tiering
- Hot (7-30 days): expensive but fast
- Warm (30-90 days): cheaper, slower
- Cold (90+ days): archive (S3); queryable when needed
7. Tenant-based budgeting
- Track log volume per tenant
- Identify "log-noisy" tenants (often free-tier abuse)
- Throttle / sample noisy tenants harder
8. Quarterly cost review
Per quarter:
- Total log volume + cost
- Top events by volume
- Top events by cost
- Are they all worth it?
The "10x signal" rule:
Compare:
- Cost of logs ingested per month
- Value of incidents debugged using logs
If cost is more than 10x the value: too much logging. If cost is well under the value: you can probably afford more signal.
Most indie SaaS: Goldilocks zone is $200-2K/mo.
Common cost surprises:
- Forgot to sample after a feature launch (volume 10x overnight)
- New service emits verbose DEBUG in prod
- Customer-facing feature retries in a loop on error, logging on every retry
- Free-tier user runs script hammering API (massive request logs)
Cost monitoring:
Set alerts on:
- Daily log volume > X (early warning)
- Monthly bill > Y (executive alert)
Don't let cost surprise you. Monitor it.
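A sketch of per-tenant volume tracking so noisy tenants and volume spikes surface before the invoice does. The in-memory map, the daily budget value, and the `metricsClient` hook (mirroring the hypothetical client used earlier in this section) are all assumptions; in practice back this with Redis or your metrics pipeline.

```typescript
// In-memory daily byte counter per tenant; reset it on a daily schedule.
const dailyBytesByTenant = new Map<string, number>();
const DAILY_TENANT_BUDGET_BYTES = 500 * 1024 * 1024; // e.g. 500 MB/day per tenant

function trackLogVolume(tenantId: string, entry: object): void {
  const bytes = Buffer.byteLength(JSON.stringify(entry));
  const total = (dailyBytesByTenant.get(tenantId) ?? 0) + bytes;
  dailyBytesByTenant.set(tenantId, total);

  if (total > DAILY_TENANT_BUDGET_BYTES) {
    // Surface it as a metric / alert rather than yet another log line.
    metricsClient.increment('logs.tenant_over_budget', { tier: 'over' });
  }
}
```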
For my system:
- Current monthly cost
- Top expensive logs
- The reduction levers
Output:
- The cost audit
- The reduction plan
- The monitoring alerts
The biggest cost mistake: **logging set-and-forget.** A logging config that's right at $1M ARR is wrong at $10M ARR; unchanged, the bill grows linearly. Quarterly cost review; sampling adjustments; retention tier review — these prevent the $40K surprise.
## Avoid Common Pitfalls
Recognizable failure patterns.
The logging mistake checklist.
Mistake 1: Plain-text logs
- Can''t query; can''t aggregate
- Fix: structured JSON
Mistake 2: Logs as metrics
- Counting log lines for stats
- Fix: emit metrics
Mistake 3: Misused log levels
- Everything ERROR; alerts useless
- Fix: meaningful level discipline
Mistake 4: PII in logs
- Compliance violation; security risk
- Fix: redaction policy
Mistake 5: Log + throw double-logging
- 5x volume per error
- Fix: log only at boundary
Mistake 6: No request correlation
- Can't reconstruct a request's logs
- Fix: request_id propagation
Mistake 7: Sampling errors
- Lose critical signal
- Fix: never sample errors
Mistake 8: No retention strategy
- Logs kept forever; cost spirals
- Fix: tiered retention
Mistake 9: No cost monitoring
- Wake up to $40K bill
- Fix: daily volume alerts
Mistake 10: Logs nobody reads
- Cost without benefit
- Fix: quarterly audit; remove unused
The quality checklist:
- All logs structured (JSON)
- Standard fields: timestamp, level, event, request_id, tenant_id
- Log levels meaningful + filtered
- PII redaction policy
- No log+throw anti-pattern
- Sampling for high-volume routine
- Errors never sampled
- Request correlation across services
- Tiered retention (hot / warm / cold)
- Cost monitoring + alerts
- Quarterly audit + trim
For my system:
- Audit
- Top 3 fixes
Output:
- Audit results
- Top 3 fixes
- The "v2 logging" plan
The single most-common mistake: **"log everything, sort it out later."** This produces noise floors so high that real signals are buried. The cost is real (ingestion + storage). The benefit is illusory (logs nobody reads). Fix: log deliberately; structure everything; sample routine; never sample errors; review quarterly. Logs are insurance; pay only for the coverage you'll use.
---
## What "Done" Looks Like
A working logging strategy in 2026 has:
- All logs structured (JSON) with standard schema
- Meaningful log levels (TRACE / DEBUG / INFO / WARN / ERROR / FATAL)
- Standard fields including request_id, tenant_id, service
- PII redaction policy enforced
- Selective sampling (errors always; routine 1-10%)
- Cost monitoring + alerts
- Tiered retention (hot 30 days; warm 90; cold archive)
- Logs that drive incident response in <5 minutes
- Quarterly audit removing unused logs
- Single-source backend with good query DX
The hidden cost of weak logging: **debugging in the dark when it matters most.** A production incident at 2am, customers complaining, on-call engineer paged — and the logs are unstructured plain text or sampled at 1%, so the relevant request's logs aren't there. Minutes-to-mitigation stretch from 10 to 60. Customers churn. The cost of weak logging shows up as MTTR (mean time to recovery), and MTTR shows up as customer trust. Invest deliberately; review constantly; don't pay for noise.
## See Also
- [Audit Logs](audit-logs-chat.md) — security/compliance log subset
- [Incident Response](incident-response-chat.md) — logs are the first artifact
- [Service Level Agreements](service-level-agreements-chat.md) — uptime depends on logs
- [Performance Optimization](performance-optimization-chat.md) — slow logs surface
- [Rate Limiting & Abuse](rate-limiting-abuse-chat.md) — abuse signals in logs
- [Multi-Tenancy](multi-tenancy-chat.md) — tenant_id discipline
- [Caching Strategies](caching-strategies-chat.md) — cache hit/miss logging
- [Database Indexing Strategy](database-indexing-strategy-chat.md) — slow-query logging
- [VibeReference: Error Monitoring Providers](https://www.vibereference.com/devops-and-tools/error-monitoring-providers) — error backend
- [VibeReference: Observability Providers](https://www.vibereference.com/devops-and-tools/observability-providers) — broader observability
- [VibeReference: LLM Observability Providers](https://www.vibereference.com/ai-development/llm-observability-providers) — AI-specific
- [VibeReference: Vercel Functions](https://www.vibereference.com/cloud-and-hosting/vercel-functions) — Vercel logs
- [LaunchWeek: Trust Center & Security Page](https://www.launchweek.com/4-convert/trust-center-security-page) — log retention as compliance evidence
[⬅️ Day 6: Grow Overview](README.md)