# Cron Jobs & Scheduled Tasks: Recurring Work That Won't Wake You Up at 3am
If you're running a SaaS in 2026, you have recurring work — daily reports, hourly sync jobs, weekly cleanup, monthly billing reconciliation, every-5-minute health checks. Most founders default to a single node-cron library, hardcode 30 jobs into a process, and discover six months in that one job has been silently failing for 3 weeks (the database backup), one is overlapping with itself (sending duplicate emails), and one is locked into the timezone of the deploy server (running at 8am UTC instead of 8am customer-local).
A working cron / scheduled-tasks system answers: where do jobs run, how do we prevent overlaps, what happens when one fails, how do we monitor them, and how do we test them without waiting 24 hours. Done well, scheduled work runs invisibly and reliably; done badly, it's a hidden tax of pages, missed reports, and silent data drift that compounds for years.
This guide is the implementation playbook for cron jobs and scheduled tasks in 2026 — how to pick the right execution model (Vercel Cron / GitHub Actions / queue-based / dedicated cron service), patterns that prevent overlap and missed runs, observability so failures don't go unnoticed, and the testing discipline that doesn't require 24-hour wait cycles.
## What Counts as a Cron Job (and What Doesn't)
Before building, distinguish scheduled work from event-driven work.
Help me categorize the recurring work I have.
The four categories of "recurring" work:
**1. True scheduled tasks (cron jobs)**
- Run on a fixed time schedule
- Time-of-day matters
- Examples: nightly database backup, weekly digest email, end-of-month billing
- Tool: cron / Vercel Cron / scheduled CI
**2. Polling jobs (recurring intervals)**
- Run every X minutes/seconds; not tied to clock-time
- Examples: poll external API every 5 min; check for new files every minute
- Tool: cron OK; queue-driven often better
**3. Event-driven work (NOT cron)**
- Triggered by an event, not a clock
- Examples: send welcome email on signup; charge customer on subscription renewal date
- Tool: queue / webhook / event bus (per [outbound-webhooks-chat](outbound-webhooks-chat.md))
**4. Workflow / orchestration**
- Multi-step processes with delays / branching
- Examples: drip email sequence; onboarding workflow; complex data pipeline
- Tool: workflow engine (Temporal / Inngest / Vercel Workflow / Trigger.dev)
**Common mis-categorization mistakes**:
- "Send renewal reminder 7 days before subscription end" → event-driven, NOT cron (calculate and schedule one-shot)
- "Send a daily report at 9am customer-local" → cron, but per-customer (timezone-aware)
- "Process payment on monthly recurring date" → event-driven (Stripe handles it), NOT cron
**The "right tool" matrix**:
| Pattern | Best Tool |
|---|---|
| Daily at midnight UTC | Vercel Cron / cron / GitHub Actions |
| Every 5 minutes | Vercel Cron / queue-driven loop |
| One-shot in 7 days | Queue with delay / Workflow / Inngest |
| Multi-step over hours | Workflow engine |
| User-triggered scheduled | Workflow engine / event-driven |
| Per-tenant schedule | Cron with cursor / Workflow engine |
For my system:
- List of all recurring work
- Categorize each (cron / poll / event / workflow)
- The "wrong tool" instances
Output:
1. The recurring-work inventory
2. The categorization
3. The "should be event-driven" reclass
The biggest unforced error: using cron for work that should be event-driven. A nightly cron that scans every user to find renewals due tomorrow is wasteful, error-prone, and timezone-fragile. The fix: when subscription renews, schedule the reminder for "renewal_date - 7 days" once. Event-driven beats batch-scan cron for predictable user-facing schedules.
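The "schedule once" math is simple date arithmetic — a minimal sketch (the `queue.enqueue` call in the comment is a hypothetical delayed-job API, not a specific library):

```js
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// Delay until (renewalDate - 7 days); 0 if that moment has already passed.
function reminderDelayMs(renewalDate, now = new Date()) {
  return Math.max(0, renewalDate.getTime() - SEVEN_DAYS_MS - now.getTime());
}

// Hypothetical usage with a delay-capable queue:
// await queue.enqueue('renewal-reminder', { userId }, { delayMs: reminderDelayMs(renewalDate) });
```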
## Pick the Execution Model
In 2026, you have several options. Pick deliberately.
Help me pick the cron execution model.
The five options:
**Option A: Vercel Cron (bundled with Vercel)**
Define cron in `vercel.json`:
```json
{
  "crons": [
    { "path": "/api/cron/daily-report", "schedule": "0 9 * * *" }
  ]
}
```
Pros:
- Bundled with Vercel deployment
- Runs serverless (no idle cost)
- Tied to deploys (versioned with code)
- Simple
Cons:
- Vercel-only
- Limit on concurrent crons (per plan)
- Function execution time limits (300s default in 2026)
- Single region
Best for: Vercel apps; simple/medium workloads
**Option B: GitHub Actions scheduled workflows**

```yaml
on:
  schedule:
    - cron: '0 9 * * *'
```
Pros:
- Free for public repos; cheap for private
- Versioned with code
- Easy to trigger manually for testing
Cons:
- Schedule can drift / delay (5-15 min)
- Not for time-critical jobs
- Limited execution time
- Public-repo runs visible
Best for: non-time-critical jobs; reports; cleanup
**Option C: Dedicated cron service (cron-job.org / EasyCron)**
External service hits your URL on schedule.
Pros:
- Vendor-agnostic
- Manual schedule control
- Cheap
Cons:
- Schedules live outside your repo (not versioned with code)
- External dependency; another point of failure
Best for: simple HTTP-triggered work; anywhere
**Option D: Queue-driven recurring (Inngest / Temporal / Trigger.dev / Vercel Queues)**

```ts
// Inngest example
inngest.createFunction(
  { id: 'daily-report' },
  { cron: '0 9 * * *' },
  async ({ event, step }) => {
    // ...
  }
);
```
Pros:
- Built-in retry / observability
- Workflow capabilities (multi-step; delays; branching)
- Per-job execution history
- Vercel Queues GA in 2025
Cons:
- Vendor lock-in (some)
- Higher cost at scale
- Newer category
Best for: complex jobs; jobs that need retries; workflow patterns
**Option E: Self-hosted (Docker + cron / systemd timers)**
Traditional Linux cron in a container.
Pros:
- Full control
- Cheap at scale
- No vendor lock-in
Cons:
- DevOps overhead
- Single-host failure mode
- Logs / monitoring need separate setup
Best for: heavy batch jobs; cost-sensitive at scale; on-prem
The "right tool" matrix:
| Use Case | Best Tool |
|---|---|
| Daily report on Vercel app | Vercel Cron |
| Nightly cleanup | Vercel Cron / GitHub Actions |
| Time-critical (within seconds) | Queue-driven |
| Complex multi-step | Inngest / Temporal / Trigger.dev |
| Cross-cloud / vendor-neutral | GitHub Actions / dedicated service |
| Heavy batch (data warehouse style) | Self-hosted / scheduled CI |
| Per-customer schedule | Workflow engine with delays |
The "default for indie SaaS in 2026":
- On Vercel: Vercel Cron for simple jobs; Inngest / Vercel Queues for complex
- Other hosting: GitHub Actions Scheduled Workflows
- Heavy / specialized: Inngest or Temporal
Don't mix more than two of these without a reason. Each one is a separate system to monitor.
For my system:
- Current cron infrastructure
- The "right tool" assessment
- The migration plan if needed
Output:
- The execution-model choice
- The mapping of jobs to the chosen system
- The migration plan
The biggest execution-model mistake: **using a long-lived process (`pm2`-managed Node) with `node-cron` for everything.** When the process restarts, cron state is lost; jobs miss runs; restarts during long-running jobs leave inconsistent state. The fix: use serverless cron (Vercel Cron) or queue-driven systems where each invocation is independent and observable.
## Scheduling Patterns That Don't Bite You
Cron expressions are notorious for subtle bugs. Get the patterns right.
Help me write cron expressions correctly.
The cron format:
```text
* * * * *
│ │ │ │ │
│ │ │ │ └── day of week (0-6, Sun-Sat)
│ │ │ └──── month (1-12)
│ │ └────── day of month (1-31)
│ └──────── hour (0-23)
└────────── minute (0-59)
```
Common expressions:
| Schedule | Cron |
|---|---|
| Every minute | `* * * * *` |
| Every 5 minutes | `*/5 * * * *` |
| Every hour at :00 | `0 * * * *` |
| Daily at midnight UTC | `0 0 * * *` |
| Daily at 9am UTC | `0 9 * * *` |
| Weekly Monday 9am | `0 9 * * 1` |
| Monthly 1st at midnight | `0 0 1 * *` |
| Every 15 minutes | `*/15 * * * *` |
Common mistakes:
1. Day-of-month + day-of-week confusion
`0 0 1 * 1` does NOT mean "1st AND Monday" — it means "1st OR Monday" (the job fires if EITHER field matches).
For "first Monday of month": cron alone can't express it; use code logic.
2. Timezone confusion
Cron schedules run in the cron runner's timezone (typically UTC).
`0 9 * * *` = 9am UTC. In New York (UTC-5 in winter), that's 4am local.
For per-customer schedules: store each customer's timezone; calculate per customer.
3. Daylight-saving-time gotchas
A job scheduled "2:30am every day" runs zero times on the day clocks spring forward (no 2:30am). Or twice on the day clocks fall back.
UTC avoids this. Use UTC schedules; convert for display.
4. Calendar edge cases
`0 0 31 * *` (the 31st of every month) skips February, April, June, September, and November. Probably not what you want.
For "last day of month": use code logic (e.g. `if (day === lastDayOfMonth(...))`).
5. Overlapping schedules
Two jobs at `*/5` and `*/10` will collide every 10 minutes. Plan staggered start minutes.
The "schedule + jitter" pattern:
For high-volume periodic jobs, add random jitter to avoid thundering herd:
```ts
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function dailyJob() {
  // Wait 0-300 seconds randomly
  await sleep(Math.random() * 300_000);
  // Now do the work
}
```

Useful when many tenants run "daily" jobs and you don't want them all firing at exactly midnight UTC.
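A variant: if you want each tenant's offset to be stable across runs (same tenant, same minute every day) rather than freshly random, hash the tenant id into the window — a sketch:

```js
// Deterministic offset in [0, windowMs): same id always maps to the same offset.
function jitterOffsetMs(tenantId, windowMs) {
  let hash = 0;
  for (const ch of String(tenantId)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % windowMs;
}
```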
The timezone-per-customer pattern:
For customer-local schedules:
```ts
import { DateTime } from 'luxon'; // timezone-aware dates

// Cron runs every 15 minutes (UTC)
async function customerScheduledWorkCron() {
  const customers = await db.customers.findMany({ where: { hasSchedule: true } });
  for (const customer of customers) {
    const localNow = DateTime.now().setZone(customer.timezone);
    // Fire for customers whose local time is in the 9:00-9:14 window
    if (localNow.hour === 9 && localNow.minute < 15) {
      await sendDailyReport(customer);
    }
  }
}
```
Or precompute the next run time for each customer and use a workflow engine.
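If you'd rather avoid a date library, Node's built-in `Intl.DateTimeFormat` can extract the customer-local hour directly — a sketch:

```js
// Hour of day (0-23) at the given instant, in the given IANA timezone.
function localHour(date, timeZone) {
  const parts = new Intl.DateTimeFormat('en-US', {
    timeZone,
    hour: 'numeric',
    hourCycle: 'h23',
  }).formatToParts(date);
  return Number(parts.find((p) => p.type === 'hour').value);
}
```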
For my schedules:
- Audit cron expressions
- Timezone handling
- Edge-case bugs
Output:
- The cron-expression audit
- The timezone strategy
- The DST handling
The biggest schedule-correctness mistake: **assuming local time when cron runs UTC.** "Send the report at 9am" means UTC unless specified; customers see it at 4am their time and complain. Always: store schedules + timezones explicitly; calculate per-customer; communicate timezone in user-facing messaging.
## Prevent Overlap and Lost Runs
Two failure modes: jobs running concurrently with themselves, and jobs missing runs entirely. Both are common.
Help me prevent overlap and lost runs.
The overlap problem:
A job scheduled `*/5 * * * *` (every 5 minutes) starts taking 7 minutes because of a slow DB. Now two instances run concurrently, both updating the same data. Bad.
Solutions:
1. Distributed lock (Redis)
```ts
async function withLock(lockKey: string, fn: () => Promise<void>) {
  const lockValue = uuid(); // e.g. crypto.randomUUID()
  // NX: only set if the key is absent; EX 600: auto-expire after 10 minutes
  const acquired = await redis.set(lockKey, lockValue, 'NX', 'EX', 600);
  if (!acquired) {
    console.log('Job already running; skipping');
    return;
  }
  try {
    await fn();
  } finally {
    // Release the lock only if we still own it
    // (strictly, check-then-delete should be a Lua script to be atomic)
    const current = await redis.get(lockKey);
    if (current === lockValue) await redis.del(lockKey);
  }
}

// Usage
await withLock('cron:daily-report', async () => {
  // Work
});
```
2. Database-backed mutex
```sql
-- Try to acquire: insert-if-not-exists
INSERT INTO job_locks (name, acquired_at)
VALUES ('daily-report', now())
ON CONFLICT (name) DO NOTHING
RETURNING *;
```

If `RETURNING` comes back empty: the lock is held; skip.
At the end: `DELETE FROM job_locks WHERE name = '...'`.
3. Increase interval / reduce work
If the interval is too tight, increase it. Don't schedule work that takes 5 minutes to run every 1 minute.
4. Workflow-engine native (Inngest / Temporal)
These tools handle "only one instance running" natively (concurrency: 1).
The lost-runs problem:
A job is scheduled `0 */1 * * *` (every hour). The server is down at the scheduled time. The run is missed.
Solutions:
1. Catch-up on next run
Track last successful run; on each invocation, process all "missed" intervals:
```ts
async function hourlyJob() {
  const lastRun = await getLastSuccessfulRun('hourly-job');
  const intervalsToProcess = countIntervalsSince(lastRun, 'hourly');
  for (let i = 0; i < intervalsToProcess; i++) {
    await processInterval(...);
  }
  await markRunSuccessful('hourly-job', new Date());
}
```
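`countIntervalsSince` is simple arithmetic — a sketch taking the interval in milliseconds (the string-keyed call above is assumed to wrap something like this):

```js
// Whole intervals elapsed between lastRun and now (never negative).
function countIntervalsSince(lastRun, intervalMs, now = new Date()) {
  return Math.max(0, Math.floor((now.getTime() - lastRun.getTime()) / intervalMs));
}
```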
2. Use a workflow engine with at-least-once
Inngest / Temporal / Vercel Queues guarantee at-least-once execution; missed runs are retried.
3. Idempotency
Make jobs idempotent — running twice produces the same result as running once. Now duplicate runs aren't catastrophic.

```ts
// Idempotent
await db.upsert({ where: { date: today }, create: ..., update: ... });

// NOT idempotent
await db.create(...);
```
The "monitor for missed runs":
Set up alerts:
- "Daily report hasn''t run in 26 hours" — page on-call
- "Hourly job hasn''t run in 90 minutes" — page on-call
Use Better Stack / Cronitor / Healthchecks.io for cron monitoring.
For my jobs:
- Locking strategy per job
- Missed-run handling
- Idempotency audit
Output:
- The locking patterns
- The catch-up strategy
- The idempotency audit
The biggest overlap mistake: **no lock; job runs twice; data corrupts.** A cleanup job that deletes records based on age, run twice, deletes records you didn't want deleted. The fix: every cron whose runtime can exceed ~30 seconds gets a Redis lock or DB mutex.
## Idempotency: Make Jobs Safe to Re-Run
Idempotency turns "did the job complete?" from a critical question into a non-event.
Help me make jobs idempotent.
The principle: running the job N times produces the same result as running once.
Patterns:
1. Upsert instead of insert
```ts
// Bad: duplicates on retry
await db.dailyMetrics.create({ date, value });

// Good: idempotent
await db.dailyMetrics.upsert({
  where: { date },
  create: { date, value },
  update: { value },
});
```
2. Status-tracked work
```sql
-- Find work to do
SELECT * FROM tasks
WHERE status = 'pending' AND scheduled_at < now()
LIMIT 10
FOR UPDATE SKIP LOCKED;

-- Mark in-progress
UPDATE tasks SET status = 'in_progress', started_at = now() WHERE id = $1;

-- Do work
-- ...

-- Mark done
UPDATE tasks SET status = 'completed', completed_at = now() WHERE id = $1;
```
If job dies mid-way, in-progress tasks can be retried (after a timeout-recovery cron).
3. External-service idempotency keys
For external-API calls (Stripe, etc.), use idempotency keys:
```ts
await stripe.charges.create(
  { amount, currency, customer },
  { idempotencyKey: `daily-charge-${customer.id}-${date}` }
);
```
Stripe deduplicates: re-running the charge with same idempotency key returns the original result.
4. Time-windowed idempotency
For "send daily email" jobs:
```sql
-- Has the email already been sent today?
SELECT 1 FROM emails_sent
WHERE customer_id = $1 AND email_type = 'daily_report' AND sent_date = current_date;
-- If a row exists: skip
```
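The same guard in application code — a sketch where an in-memory set stands in for the `emails_sent` table (in production, use the table: process memory resets on every deploy):

```js
const sentToday = new Set(); // stand-in for the emails_sent table

// True exactly once per (customer, email type, calendar day).
function shouldSend(customerId, emailType, date) {
  const key = `${customerId}:${emailType}:${date.toISOString().slice(0, 10)}`;
  if (sentToday.has(key)) return false;
  sentToday.add(key);
  return true;
}
```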
The 5xx-retry rule:
Webhooks and cron jobs can fire multiple times. Plan for it.
If your job sends an email: track which emails were sent (don't double-send). If it charges money: use idempotency keys (don't double-charge). If it updates the DB: use upsert (don't double-create).
The "exactly-once is a myth" reality:
There is no "exactly-once" delivery in distributed systems. Plan for at-least-once + idempotency.
For my jobs:
- Idempotency audit
- Patterns to apply
- The "this could double-fire" risks
Output:
- The idempotency audit
- The fixes per job
- The pattern guide
The biggest idempotency mistake: **assuming jobs run exactly once.** A cron triggers twice; your code creates two records / sends two emails / charges twice. The fix: every cron that produces side effects must be idempotent. Every. Single. One.
## Observability: Know When Jobs Fail (Before Customers Do)
A failing cron is silent until someone notices. Make it loud.
Help me make cron jobs observable.
The five things to monitor:
1. Did it run?
- Last successful run timestamp
- Alert if no run in expected window
- Tools: Cronitor / Healthchecks.io / Better Stack
```ts
// At start of job
await fetch(`https://hc-ping.com/${HC_UUID}/start`);

// At end (success)
await fetch(`https://hc-ping.com/${HC_UUID}`);

// At end (failure)
await fetch(`https://hc-ping.com/${HC_UUID}/fail`);
```

If the success ping doesn't arrive within the expected window: alert.
2. Did it succeed?
- Track success/failure status
- Alert on failure
- Track failure rate over time
3. How long did it take?
- Track duration
- Alert on duration anomalies (job that normally takes 2 min taking 30 min = problem)
4. What did it do?
- Log work units processed
- Compare to expectations ("we processed 0 invoices today; that's wrong")
5. Are we falling behind?
- For queue-based jobs, queue depth
- For batch jobs, "what's the oldest unprocessed item?"
The cron-monitoring tools (2026):
| Tool | Cost | Best for |
|---|---|---|
| Healthchecks.io | Free / $20/mo | Indie; simple ping monitor |
| Cronitor | $7/mo+ | Mid-market |
| Better Stack | $24/mo+ | Includes log + uptime |
| Vercel Cron Dashboards | Bundled | Vercel-native |
| Inngest / Trigger.dev | Bundled | Workflow execution history |
| Datadog Cron | Bundled | Enterprise |
The structured-log-per-job pattern:
```ts
async function runJob(jobName, fn) {
  const start = Date.now();
  const runId = uuid();
  log.info('cron.start', { job: jobName, run_id: runId });
  try {
    const result = await fn();
    log.info('cron.success', {
      job: jobName, run_id: runId,
      duration_ms: Date.now() - start,
      records_processed: result.count,
    });
  } catch (e) {
    log.error('cron.failure', {
      job: jobName, run_id: runId,
      duration_ms: Date.now() - start,
      error: e.message,
    });
    throw e;
  }
}
```
Per [Logging Strategy & Structured Logs](logging-strategy-structured-logs-chat.md).
The dashboard:
Single dashboard showing:
- All scheduled jobs
- Last run timestamp + status
- Average duration
- Failure rate (last 24h / 7d / 30d)
- Alerts (jobs that are overdue)
If you can't answer "is my cron healthy?" in <10 seconds, build the dashboard.
For my jobs:
- Current monitoring
- Gaps (which jobs aren''t monitored?)
- The dashboard plan
Output:
- The monitoring setup per job
- The alert rules
- The dashboard mockup
The biggest observability mistake: **silent cron failure.** Your nightly database backup hasn't run in 3 weeks; nobody notices until a restore is needed and the most recent backup is from August. The fix: every cron has a monitor; a missed run pages someone. The cost is $20/mo; the alternative is hours or days of pain when something goes wrong.
## Test Without Waiting for the Schedule
Cron testing is brutal if you have to wait 24 hours for each run. Don't.
Help me test cron jobs locally.
The patterns:
1. Extract the work into a function
Don't put the job logic in the cron handler itself:

```ts
// Bad: tied to the cron route
app.get('/api/cron/daily-report', async (req, res) => {
  const data = await db....;
  await email.send(...);
  res.json({ ok: true });
});

// Good: separable
export async function generateDailyReport(opts = {}) {
  // Logic
}

// Cron route just calls it
app.get('/api/cron/daily-report', async (req, res) => {
  await generateDailyReport();
  res.json({ ok: true });
});
```
Now you can:
- Test the function in unit tests
- Trigger it manually via CLI
- Trigger via webhook for staging tests
2. Manual trigger endpoints
```ts
// Hit this endpoint during testing
app.post('/api/admin/run-job', auth, async (req, res) => {
  const { job, params } = req.body;
  await runJobByName(job, params);
  res.json({ ok: true });
});
```

In dev: hit it directly. In staging: hit it before enabling the schedule.
3. Date / time injection
Don't hardcode `new Date()`:

```ts
// Bad
const yesterday = new Date(Date.now() - 86400_000);

// Good
function generateReport(now = new Date()) {
  const yesterday = new Date(now.getTime() - 86400_000);
  // ...
}

// In tests
generateReport(new Date('2026-04-30'));
```
4. Dry-run mode
```ts
// `= {}` default so calling dailyJob() with no args works
async function dailyJob({ dryRun = false } = {}) {
  const work = await loadWork();
  if (dryRun) {
    console.log(`Would process ${work.length} items`);
    return work;
  }
  // Actual work
}
```
Run dry first; verify; then production.
5. Subset for testing
```ts
async function dailyJob({ tenantIds = null } = {}) {
  const tenants = tenantIds
    ? await db.tenants.findMany({ where: { id: { in: tenantIds } } })
    : await db.tenants.findMany();
  // ...
}
```
Test with one tenant before scheduling for all.
6. Local cron simulation
Use a tool to simulate cron locally:
# Run "daily" cron immediately
curl -X POST http://localhost:3000/api/cron/daily-report
Or a wrapper:
```ts
if (process.env.NODE_ENV === 'development') {
  // Skip auth; allow direct trigger
}
```
The CI / staging cycle:
- Unit test: pure function logic
- Integration test: function + DB
- Staging: full cron path on staging schedule
- Production: deploy with monitoring; watch first run
For my jobs:
- Testing approach per job
- Manual-trigger setup
- Staging schedule
Output:
- The test patterns
- The manual-trigger endpoints
- The CI plan
The biggest testing mistake: **shipping a cron untested in production.** The first run on production is the first time the code has actually executed in that environment; bug found at 3am. The fix: extract logic; manual-trigger in staging; verify before scheduling.
## Job Catalog and Documentation
Cron jobs accumulate. Without inventory, they become a mystery.
Help me catalog scheduled jobs.
The catalog format:
For each job:
| Field | Example |
|---|---|
| Name | daily-database-backup |
| Schedule | `0 2 * * *` (2am UTC) |
| Owner | Eng team |
| Purpose | Backup primary DB to S3 |
| Duration (typical) | 5 minutes |
| Failure impact | High (data-loss risk) |
| Alert threshold | Failure or > 30 min |
| Runbook | Link to docs |
| Last reviewed | 2026-04-30 |
The single source of truth:
A docs page (Notion / GitHub README / Confluence) listing every scheduled job:
- Cron schedule (canonical)
- What it does
- Who owns it
- How to test
- What to do if it fails
When a new engineer joins: they can answer "what jobs run?" in 10 minutes.
The "is this still needed?" review:
Quarterly:
- Walk through all jobs
- Question: still needed?
- If a job hasn't been "useful" in 6 months (no business outcome): kill it
Dead crons accumulate. Audit and prune.
The version-control rule:
All cron schedules should be in code (or vercel.json / GitHub Actions yaml). NEVER in a UI someone can change.
Why:
- Versioned with code
- Reviewable in PR
- Reproducible across environments
Don't:
- Have crons in some random server's crontab
- Have crons defined in a UI dashboard (not git-tracked)
- Have undocumented jobs ("I think Mike set that up")
For my system:
- Catalog every cron
- Owner per cron
- Quarterly review schedule
Output:
- The job-catalog template
- The single-source-of-truth doc
- The quarterly review cadence
The biggest catalog mistake: **no catalog.** Three years in, nobody knows all the jobs running; some are zombie (job for a feature deprecated long ago); some are silently failing because nobody owns them. The fix: catalog on day one; review quarterly; require new crons to be added to the catalog.
## Avoid Common Pitfalls
Recognizable failure patterns.
The cron mistake checklist.
Mistake 1: Long-lived process for cron
- node-cron in pm2; restarts lose state
- Fix: serverless cron / queue
Mistake 2: Wrong tool for use case
- Cron for renewal reminders (should be event-driven)
- Fix: pick right pattern
Mistake 3: Timezone confusion
- Customers see jobs at wrong local time
- Fix: store schedule + timezone; calculate per-customer
Mistake 4: No locking
- Job overlaps with itself
- Fix: Redis lock / DB mutex
Mistake 5: Not idempotent
- Re-run causes duplicates
- Fix: upsert / idempotency keys
Mistake 6: No monitoring
- Silent failure for weeks
- Fix: ping monitor + alerts
Mistake 7: Missed runs
- Server down; job missed; lost forever
- Fix: catch-up + workflow engine
Mistake 8: Untestable
- Logic embedded in cron handler
- Fix: extract pure functions
Mistake 9: Undocumented
- Nobody knows what crons run
- Fix: catalog
Mistake 10: Zombie crons
- Jobs running for features deprecated long ago
- Fix: quarterly review + kill
The quality checklist:
- Right execution model (Vercel Cron / Inngest / etc.)
- Schedule documented + versioned
- Timezone strategy clear
- Locking for jobs > 30s
- Idempotent
- Monitored (ping + alert)
- Logs structured
- Manually triggerable for testing
- Owner + runbook documented
- Quarterly review
For my jobs:
- Audit
- Top 3 fixes
Output:
- Audit
- Fixes
- The "v2 cron infrastructure" plan
The single most-common mistake: **treating cron as set-and-forget.** Schedule a job; assume it works; never look again. Six months later: silently failing for 6 months; data inconsistencies; customer support tickets. The fix: every cron has an owner, a monitor, a runbook, and a quarterly review. Treat scheduled work like production code, because it is.
---
## What "Done" Looks Like
A working cron / scheduled-task system in 2026 has:
- Right execution model (Vercel Cron / Inngest / GitHub Actions / etc.) per job
- Schedules in code / config (versioned; reviewable)
- Timezone-aware for customer-facing schedules
- Locking on overlapping-risk jobs
- Idempotency on side-effect jobs
- Ping-based monitoring + alerts
- Structured logs per run
- Manual-trigger endpoint for testing
- Catalog of all jobs with owners
- Quarterly review removing zombies
The hidden cost of weak cron infrastructure: **silent compounding failures.** The backup that hasn't run in 6 months; the cleanup that's been duplicating records; the report that times out partway. None of these page anyone; all of them eventually surface. Cron jobs are infrastructure; treat them with the same observability + ownership discipline as any other production code. Cheap insurance; pays back constantly.
## See Also
- [Outbound Webhooks](outbound-webhooks-chat.md) — event-driven alternative
- [Background Jobs Providers](https://www.vibereference.com/backend-and-data/background-jobs-providers) — queue infrastructure
- [Logging Strategy & Structured Logs](logging-strategy-structured-logs-chat.md) — cron logs
- [Service Level Agreements](service-level-agreements-chat.md) — uptime depends on jobs
- [Database Migrations](database-migrations-chat.md) — schema changes affect cron
- [Caching Strategies](caching-strategies-chat.md) — cache invalidation jobs
- [Backups & Disaster Recovery](backups-disaster-recovery-chat.md) — backup is a cron
- [Audit Logs](audit-logs-chat.md) — log job runs
- [Email Deliverability](email-deliverability-chat.md) — scheduled email batches
- [Dunning & Failed Payments](dunning-failed-payments-chat.md) — retry-cron use case
- [VibeReference: Vercel Functions](https://www.vibereference.com/cloud-and-hosting/vercel-functions) — Vercel Cron
- [VibeReference: Vercel Workflow](https://www.vibereference.com/cloud-and-hosting/vercel-workflow) — workflow engine
- [VibeReference: Vercel Queues](https://www.vibereference.com/cloud-and-hosting/vercel-queues) — queue-based scheduled
- [VibeReference: Workflow Automation Providers](https://www.vibereference.com/devops-and-tools/workflow-automation-providers) — Inngest / Temporal / etc.
[⬅️ Day 6: Grow Overview](README.md)