
Activity Feed & Timeline Implementation — Chat Prompts


If your product has multiple users in the same workspace doing things to shared resources — comments, edits, status changes, file uploads, mentions, assignments, payments, anything — you'll eventually need an activity feed (or timeline, or audit-log, or "what's new"). The naive shape: store every UPDATE in an events table; query it ordered by time on the workspace dashboard. Done in an hour. Then a year later, the table has 200M rows, pagination breaks, the dashboard takes 8 seconds to load, three different teams have built three different "feed" surfaces because the schema doesn't support filtering, and nobody trusts the data because there's no clear definition of what counts as an "event."

A real activity feed is harder than it looks. There are at least 6 distinct surfaces (workspace feed, per-resource timeline, per-user mentions, system-generated digests, audit-log for compliance, real-time live feed) — they share a backbone but diverge in display, filtering, and access control. Get the schema right early; retrofitting is painful.

This chat walks through building an activity feed system from scratch: the event schema, ingestion pattern, fan-out vs fan-in tradeoffs, display rendering with i18n-friendly templates, real-time updates, filtering / search, pagination at scale, retention, and how to support audit-log requirements as a side benefit.

What you're building

  • A canonical activity_events table (single source of truth)
  • Event ingestion API (fire-and-forget from anywhere in your code)
  • Display templates (one event → many surface representations)
  • Workspace feed UI (paginated, filterable, real-time)
  • Per-resource timeline (filtered to one entity)
  • Per-user mentions feed
  • Email + push digests (digest-worthy events only)
  • Audit log export (compliance / GDPR)
  • Retention policy (hot / cold / archive)
  • Pagination that works at scale (cursor, not offset)

1. Design the canonical event schema

Help me design the activity events schema for [Postgres / Supabase / Drizzle / Prisma].

Product context:
- I'm building [my product]
- Resource types I track: [docs / boards / tasks / users / files / payments / ...]
- Users do actions: created, updated, deleted, commented, assigned, mentioned, status_changed, shared, etc.
- I need: workspace feed, per-resource timeline, mentions feed, audit log export

I want a single canonical events table — NOT denormalized per-feed tables. Display variations come from rendering, not storage.

Schema proposal:

activity_events (
  id              uuid pk default uuid_generate_v7()  -- sortable by time
  workspace_id    uuid not null  -- multi-tenancy boundary; index hot
  actor_id        uuid not null  -- user who did the thing
  actor_type      text default 'user'  -- 'user', 'system', 'integration', 'api'
  verb            text not null  -- canonical action: 'created', 'updated', 'commented', 'assigned', 'shared'
  object_type     text not null  -- the primary resource: 'doc', 'task', 'comment'
  object_id       uuid not null  -- the primary resource id
  target_type     text  -- optional secondary entity (e.g. 'user' for assignment, 'doc' for comment-on-doc)
  target_id       uuid  -- optional secondary id
  context         jsonb  -- structured snapshot of relevant fields at the time
  visibility      text default 'workspace'  -- 'workspace', 'team', 'private', 'system'
  occurred_at     timestamptz not null default now()
  ingested_at     timestamptz not null default now()
  client_event_id text  -- for idempotency from clients
  request_id      text  -- correlate with API requests for debugging
)

Indexes:
- (workspace_id, occurred_at DESC)  -- workspace feed query
- (workspace_id, object_type, object_id, occurred_at DESC)  -- per-resource timeline
- (workspace_id, actor_id, occurred_at DESC)  -- per-user activity
- (client_event_id) UNIQUE WHERE client_event_id IS NOT NULL  -- partial unique index for idempotency
- GIN on context  -- for filtering by context fields

Design decisions to discuss:

1. Why uuid v7 (not v4)? — time-sortable, so PK inserts append instead of scattering and the id doubles as the pagination tiebreaker (note: `uuid_generate_v7()` needs an extension, or an alias for Postgres 18's built-in `uuidv7()`)
2. Why `verb` over a more typed enum? — flexibility; we'll add new verbs over time without migrations
3. Why `context` as JSONB not first-class columns? — most fields are event-specific; structured columns turn into a sparse schema
4. Why store actor + object + target separately? — the 80% case is "user did verb to object"; target is the 20% case (mentions, assignments)
5. Why `visibility` field? — controls who sees the event in feeds (don't show DM events on workspace feed)
6. Why client_event_id? — clients can retry safely without duplicate events
7. Why ingested_at separate from occurred_at? — backfills, batch imports, mobile offline → still want true event time

Things to AVOID:
- Don't denormalize the rendered text into the events table (i18n breaks; templates change)
- Don't fan-out at write time (one event = one row; fan-out at read time or via a feed service)
- Don't try to build "subscriptions" for arbitrary resource pairs in v1 (premature complexity)

Show me:
1. Migration SQL
2. Drizzle/Prisma model
3. TypeScript event type generator
4. RLS policies (Supabase)
5. The query plan for workspace feed pagination (EXPLAIN ANALYZE expected output)

Output: a schema you can ship and not regret in 18 months.
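
If you want to sanity-check the model the chat gives you, here's a minimal Drizzle sketch of the proposal. Assumptions: a `uuid_generate_v7()` function exists in the database (extension, or an alias for Postgres 18's built-in `uuidv7()`), and index modifiers like DESC ordering and partial predicates vary by Drizzle version, so fall back to raw SQL in the migration if yours complains.

import { sql } from 'drizzle-orm'
import { pgTable, uuid, text, jsonb, timestamp, index, uniqueIndex } from 'drizzle-orm/pg-core'

export const activityEvents = pgTable('activity_events', {
  id: uuid('id').primaryKey().default(sql`uuid_generate_v7()`),  // assumes the function exists
  workspaceId: uuid('workspace_id').notNull(),
  actorId: uuid('actor_id').notNull(),
  actorType: text('actor_type').notNull().default('user'),
  verb: text('verb').notNull(),
  objectType: text('object_type').notNull(),
  objectId: uuid('object_id').notNull(),
  targetType: text('target_type'),
  targetId: uuid('target_id'),
  context: jsonb('context'),
  visibility: text('visibility').notNull().default('workspace'),
  occurredAt: timestamp('occurred_at', { withTimezone: true }).notNull().defaultNow(),
  ingestedAt: timestamp('ingested_at', { withTimezone: true }).notNull().defaultNow(),
  clientEventId: text('client_event_id'),
  requestId: text('request_id'),
}, (t) => ({
  wsTime: index('ae_ws_time').on(t.workspaceId, t.occurredAt),                               // workspace feed
  wsObject: index('ae_ws_object').on(t.workspaceId, t.objectType, t.objectId, t.occurredAt), // per-resource timeline
  wsActor: index('ae_ws_actor').on(t.workspaceId, t.actorId, t.occurredAt),                  // per-user activity
  clientEvent: uniqueIndex('ae_client_event').on(t.clientEventId)
    .where(sql`client_event_id is not null`),                                                // idempotency
}))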

2. Implement event ingestion

Now write the event ingestion API.

Stack: [TypeScript + Postgres / Drizzle / your stack]

Goals:
- Fire-and-forget from anywhere in code: emitEvent(...)
- Cannot block user-facing requests if eventing is slow / down
- Idempotent (retry-safe)
- Cheap (target <2ms p99 to enqueue)

Approach: write events to a queue first, batch-flush to the events table.

Architecture:
- emitEvent() pushes to in-process buffer
- Background worker drains buffer → batch INSERT to events table
- If buffer overflows (e.g. 10K events, DB down), spill to a durable queue (a Postgres table, optionally with LISTEN/NOTIFY wakeups, OR Redis OR Upstash)
- On graceful shutdown, drain buffer

API surface:

emitEvent({
  workspaceId: string,
  actorId: string,
  actorType?: 'user' | 'system' | 'integration' | 'api',
  verb: string,
  objectType: string,
  objectId: string,
  targetType?: string,
  targetId?: string,
  context?: Record<string, JsonValue>,
  visibility?: 'workspace' | 'team' | 'private' | 'system',
  occurredAt?: Date,
  clientEventId?: string,
  requestId?: string,
})

Implementation:

class EventEmitter {  // app-level emitter; unrelated to Node's events.EventEmitter
  private buffer: ActivityEvent[] = []
  private flushTimer: NodeJS.Timeout | null = null

  emitEvent(input: EmitEventInput): void {
    // Validate the minimal contract; never throw into the caller's request path
    if (!input.workspaceId || !input.actorId || !input.verb || !input.objectType || !input.objectId) {
      console.warn('activity: dropped malformed event', input.verb)
      return
    }
    // Push to buffer with defaults filled in
    this.buffer.push({ actorType: 'user', visibility: 'workspace', occurredAt: new Date(), ...input })
    if (this.buffer.length >= 100) {
      void this.flush()  // full buffer: flush immediately; don't await
    } else if (!this.flushTimer) {
      this.flushTimer = setTimeout(() => void this.flush(), 250)  // otherwise flush in 250ms
    }
  }

  async flush(): Promise<void> {
    if (this.flushTimer) { clearTimeout(this.flushTimer); this.flushTimer = null }
    if (this.buffer.length === 0) return
    const batch = this.buffer.splice(0, this.buffer.length)
    try {
      await db.insert(activityEvents).values(batch)
        .onConflictDoNothing({ target: activityEvents.clientEventId })  // retry-safe via client_event_id
    } catch (err) {
      // Log + spill batch to the durable overflow queue
      console.error('activity: insert failed, spilling batch', err)
      await spillToQueue(batch)
      metrics.recordFailure()
    }
  }

  async drain(): Promise<void> {
    // Called on graceful shutdown
    while (this.buffer.length > 0) await this.flush()
  }
}

Queue spillover (when DB writes fail):
- Push batch to Redis list `activity_events_overflow`
- Background worker drains overflow → events table when DB recovers
- Alert if overflow grows past threshold
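
A sketch of that overflow worker, assuming ioredis plus the `db`, `activityEvents`, and `ActivityEvent` from above (key name and interval are illustrative):

import Redis from 'ioredis'

const redis = new Redis(process.env.REDIS_URL!)
const OVERFLOW_KEY = 'activity_events_overflow'

// Drain the Redis overflow list back into Postgres once writes succeed again.
export async function drainOverflow(): Promise<void> {
  for (;;) {
    const raw = await redis.rpop(OVERFLOW_KEY)   // batches were LPUSHed as JSON strings
    if (!raw) return                             // overflow empty; sleep until next tick
    const batch = JSON.parse(raw) as ActivityEvent[]
    try {
      await db.insert(activityEvents).values(batch)
        .onConflictDoNothing({ target: activityEvents.clientEventId })
    } catch (err) {
      await redis.lpush(OVERFLOW_KEY, raw)       // DB still down: put the batch back
      throw err
    }
  }
}

// Run on an interval; alert if LLEN(activity_events_overflow) grows past a threshold.
setInterval(() => drainOverflow().catch(console.error), 10_000)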

Verb naming convention:
- Past tense
- Lowercase snake_case
- Specific: 'comment_created' beats 'commented'
- One per action (don't share verbs across object types for filtering purposes)

Actor/object normalization:
- workspaceId is required; multi-tenant safety net
- For system events (no real actor), actorId = SYSTEM_ACTOR_UUID (constant)
- For integration events, actorId = integration's user-equivalent

Examples:

emitEvent({
  workspaceId,
  actorId: userId,
  verb: 'doc_created',
  objectType: 'doc',
  objectId: doc.id,
  context: { title: doc.title, parentFolderId: doc.parentId }
})

emitEvent({
  workspaceId,
  actorId: userId,
  verb: 'mention_created',
  objectType: 'comment',
  objectId: comment.id,
  targetType: 'user',
  targetId: mentionedUserId,
  context: { excerpt: comment.body.slice(0, 200) }
})

Show me the EventEmitter implementation, the spillover worker, the metrics integration, and the graceful-shutdown wiring.

Output: a non-blocking, retry-safe ingestion pipeline.

3. Build the display template system

Now build the display rendering system.

Problem: the same event powers many surfaces (workspace feed, email digest, push notification, audit log row). Each surface needs different rendering. We need ONE canonical place to define: "what does this verb look like, in this surface, for this locale."

Approach: template-per-verb-per-surface, with i18n.

Schema (in code, not DB):

type RenderSurface = 'feed' | 'email' | 'push' | 'audit'

type EventTemplate = {
  verb: string
  surfaces: {
    feed: TemplateFn
    email?: TemplateFn
    push?: TemplateFn
    audit: TemplateFn
  }
}

type TemplateFn = (event: ActivityEvent, locale: string, deps: TemplateDeps) => RenderedEvent

type RenderedEvent = {
  text: string         // primary line ("Alice commented on Q1 Roadmap")
  secondary?: string   // optional 2nd line / preview
  actorAvatar?: string
  objectLink?: string
  objectIcon?: string
  cta?: { label: string, url: string }
}

Example template:

const docCreatedTemplate: EventTemplate = {
  verb: 'doc_created',
  surfaces: {
    feed: (e, locale, deps) => ({
      text: t(locale, 'feed.doc_created', {
        actor: deps.userName(e.actorId),
        title: e.context.title,
      }),
      objectLink: `/docs/${e.objectId}`,
      objectIcon: 'file-text',
    }),
    email: (e, locale, deps) => ({
      text: t(locale, 'email.doc_created', {
        actor: deps.userName(e.actorId),
        title: e.context.title,
        workspace: deps.workspaceName(e.workspaceId),
      }),
    }),
    audit: (e) => ({
      text: `${e.actorId} created doc ${e.objectId}`,
    }),
  }
}

Template registry:

const templates = registerTemplates([
  docCreatedTemplate,
  docUpdatedTemplate,
  commentCreatedTemplate,
  mentionCreatedTemplate,
  // ...
])

function renderEvent(event: ActivityEvent, surface: RenderSurface, locale: string, deps: TemplateDeps) {
  const template = templates[event.verb]
  if (!template) return renderFallback(event, surface)
  const surfaceFn = template.surfaces[surface] ?? template.surfaces.audit
  return surfaceFn(event, locale, deps)
}

Internationalization:
- All template strings go to your i18n provider (e.g., i18next, react-intl, format.js)
- Pluralization rules: ICU MessageFormat (handles "1 person" vs "2 people" correctly)
- Avoid string concatenation: never `actor + " commented on " + title`; always template
- Provide locale fallback chain (es-MX → es → en)

Batch fetching avatars / names:
- TemplateDeps must batch-fetch all referenced users in one query
- Don't N+1 inside the template
- Use DataLoader or simple batch resolver

Edge cases to handle:
- Actor was deleted (show "Deleted user")
- Object was deleted (show event with grayed-out link)
- Workspace renamed since event (show current name? or historical? — pick one rule)
- Locale missing translation (fallback to en, log missing key)
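
For the batching, the simple batch-resolver variant is a pre-pass, since `TemplateFn` is synchronous. A sketch, with the `users` / `workspaces` tables and the Drizzle `sql` tag as assumptions (`db.execute`'s row shape varies by driver; rows are assumed to come back directly):

// Batch-resolve every name the templates will need, then hand them sync lookups.
async function makeTemplateDeps(events: ActivityEvent[]): Promise<TemplateDeps> {
  const userIds = [...new Set(events.flatMap((e) =>
    e.targetType === 'user' && e.targetId ? [e.actorId, e.targetId] : [e.actorId]))]
  const wsIds = [...new Set(events.map((e) => e.workspaceId))]

  const [users, workspaces] = await Promise.all([
    db.execute(sql`select id, name from users where id = any(${userIds})`),
    db.execute(sql`select id, name from workspaces where id = any(${wsIds})`),
  ])
  const userNames = new Map(users.map((u) => [u.id as string, u.name as string]))
  const wsNames = new Map(workspaces.map((w) => [w.id as string, w.name as string]))

  return {
    userName: (id) => userNames.get(id) ?? 'Deleted user',         // deleted-actor fallback
    workspaceName: (id) => wsNames.get(id) ?? 'Deleted workspace',
  }
}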

Show me:
1. The template registry implementation
2. A handful of real templates (doc_created, comment_created, mention_created, status_changed, assigned)
3. The DataLoader-based deps batching
4. The fallback rendering
5. The i18n key naming convention

Output: a maintainable rendering layer that handles every surface from one source of truth.

4. Build the workspace feed UI

Now build the workspace activity feed page.

Stack: Next.js + React Server Components + your DB/ORM

UI specs:
- Route: /workspace/[id]/activity
- Top: filter bar (verb category, actor, date range, object type)
- List: vertically stacked event rows with grouping (e.g. "Today" / "Yesterday" / "Last 7 days")
- Each row: actor avatar, rendered text, relative time, object link, optional preview
- Pagination: cursor-based "Load more" (NOT offset; offset breaks at scale)
- Real-time: new events appear at top with subtle animation
- Empty state: friendly illustration + CTA

Server Component (initial render):

async function ActivityFeedPage({ workspaceId, filters, cursor }) {
  const { events, nextCursor } = await fetchEvents({
    workspaceId,
    filters,
    cursor,
    limit: 50,
    visibility: ['workspace', 'team'],  // exclude private/system
  })
  
  const rendered = await renderEvents(events, 'feed', currentLocale, deps)
  
  return (
    <FeedLayout>
      <FilterBar filters={filters} />
      <EventList events={rendered} />
      {nextCursor && <LoadMoreButton cursor={nextCursor} />}
      <RealtimeSubscriber workspaceId={workspaceId} />
    </FeedLayout>
  )
}

Cursor-based pagination:
- Cursor = base64-encoded { occurredAt, id }
- Query: WHERE workspace_id = ? AND (occurred_at, id) < (?, ?) ORDER BY occurred_at DESC, id DESC LIMIT N
- Why both occurred_at AND id: ties on occurred_at (rare but real) need a tiebreaker
- Cursor stays stable even if new events arrive between page loads
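
A sketch of the encode/decode plus the keyset query (Drizzle `sql` tag assumed; fetching one extra row detects whether a next page exists):

type Cursor = { occurredAt: string; id: string }

const encodeCursor = (c: Cursor) => Buffer.from(JSON.stringify(c)).toString('base64url')
const decodeCursor = (s: string): Cursor => JSON.parse(Buffer.from(s, 'base64url').toString('utf8'))

async function fetchEventsPage(workspaceId: string, cursor?: string, limit = 50) {
  const c = cursor ? decodeCursor(cursor) : null
  const rows = await db.execute(sql`
    select * from activity_events
    where workspace_id = ${workspaceId}
      ${c ? sql`and (occurred_at, id) < (${c.occurredAt}, ${c.id})` : sql``}
    order by occurred_at desc, id desc
    limit ${limit + 1}`)  // one extra row = "has next page"
  const events = rows.slice(0, limit)
  const nextCursor = rows.length > limit
    ? encodeCursor({ occurredAt: String(events[limit - 1].occurred_at), id: String(events[limit - 1].id) })
    : null
  return { events, nextCursor }
}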

Filtering:
- Verb filter: WHERE verb IN (...)
- Actor filter: WHERE actor_id = ?
- Date range: WHERE occurred_at BETWEEN ? AND ?
- Object type: WHERE object_type = ?
- Combine with cursor predicate carefully

Grouping by relative date:
- Done client-side after fetch (server returns flat list)
- Group buckets: Today / Yesterday / This Week / This Month / Older
- Re-evaluated on each render (Today shifts at midnight in user's tz)

Real-time:
- Subscribe via [Supabase Realtime / WebSocket / SSE] to new events for workspace
- Use a "showing N new events" toast pattern (don't shift the page when user is reading)
- Only show toast for events visible to the user under current filters
- Click toast → reload top of feed
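
On Supabase, the subscriber is a few lines (supabase-js v2 API; channel name illustrative):

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(process.env.NEXT_PUBLIC_SUPABASE_URL!, process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!)

function subscribeToWorkspaceEvents(workspaceId: string, onNew: (e: ActivityEvent) => void) {
  const channel = supabase
    .channel(`activity:${workspaceId}`)
    .on('postgres_changes',
      { event: 'INSERT', schema: 'public', table: 'activity_events',
        filter: `workspace_id=eq.${workspaceId}` },
      (payload) => onNew(payload.new as ActivityEvent))
    .subscribe()
  return () => supabase.removeChannel(channel)  // call the cleanup on unmount
}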

Performance budget:
- TTFB: <200ms
- Initial 50 events render: <100ms client-side
- "Load more" round-trip: <300ms
- Real-time event delivery: <2s end-to-end

Edge cases:
- Filter so restrictive nothing shows: empty state with "Clear filters" button
- Workspace just created (no events): onboarding-style empty state with sample events grayed out
- User has NO permission to see most events (admin-only events filtered): only show what they can see
- Rapid event burst (100 events in 5s): throttle real-time updates client-side; "12 new events" badge

Implement:
1. The Server Component page
2. The cursor encode/decode
3. The query function with all filters
4. The realtime subscriber + toast UI
5. The grouping client component
6. The empty + loading + error states

Output: a fast, filterable, real-time activity feed.

5. Build the per-resource timeline

The same events power per-resource timelines (e.g. "history of this doc").

Differences from workspace feed:
- Filter to events where object_id = X OR target_id = X (events ABOUT this resource)
- No verb-category filter usually (show all)
- Different empty state ("No activity on this doc yet")
- Often embedded in a sidebar or modal, not a full page
- Compact rendering (no preview, smaller avatars)

Query:

SELECT * FROM activity_events
WHERE workspace_id = ?
  AND (
    (object_type = ? AND object_id = ?)
    OR (target_type = ? AND target_id = ?)
  )
ORDER BY occurred_at DESC, id DESC
LIMIT 100

Note: this query needs an index on (workspace_id, object_type, object_id, occurred_at DESC) AND on (workspace_id, target_type, target_id, occurred_at DESC).

For high-volume resources (a doc with 10K+ events), consider:
- Default to last 100 events; "Show all" link
- Group consecutive same-actor same-verb events ("Alice made 5 edits")
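
The consecutive grouping is one linear pass; a sketch:

type EventGroup = { key: string; events: ActivityEvent[] }

// Collapse runs of same-actor, same-verb events ("Alice made 5 edits").
function groupConsecutive(events: ActivityEvent[]): EventGroup[] {
  const groups: EventGroup[] = []
  for (const e of events) {
    const last = groups[groups.length - 1]
    if (last && last.events[0].actorId === e.actorId && last.events[0].verb === e.verb) {
      last.events.push(e)
    } else {
      groups.push({ key: e.id, events: [e] })
    }
  }
  return groups
}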

Compact rendering:
- Use `audit` surface template (more terse than `feed`)
- Show inline diff for status_changed events when available
- Collapse comment bodies; expand on click

Sharing model:
- Per-resource timeline visibility = current viewer's permission to the resource
- If user can see the resource, they can see all events about it (modulo private events)
- For audit-only events (e.g. internal admin actions), filter to admin-only

Build:
1. <ResourceTimeline resourceType="doc" resourceId={id} /> component
2. Compact event row component
3. Consecutive-event grouping logic
4. The 'show full feed' link that opens the workspace feed pre-filtered to this resource

Output: per-resource timelines that fit in a sidebar.

6. Build the mentions feed

A separate surface — events where the current user is the target.

Schema query:

SELECT * FROM activity_events
WHERE workspace_id = ?
  AND target_type = 'user'
  AND target_id = $currentUserId
  AND verb IN ('mention_created', 'assigned', 'review_requested', 'invite_sent')
ORDER BY occurred_at DESC

UI:
- Bell icon in top nav with unread count badge
- Dropdown panel with recent mentions
- "Mark as read" — separate read/unread state per (user, event_id)
- Full-page mentions inbox at /mentions

Read/unread tracking:

user_event_state (
  user_id    uuid,
  event_id   uuid,
  read_at    timestamptz,
  archived_at timestamptz,
  PRIMARY KEY (user_id, event_id)
)

- Unread = no row OR read_at is null
- Archived = archived_at not null (hidden from inbox; still in audit)
- "Mark all read" updates user_state for all currently-unread events for that user

Real-time:
- Subscribe per-user channel
- New mention → bell pulses + count increments
- Optional: native browser notification (if permission granted)
- Optional: native push to mobile app

Notification preferences:
- Per-verb on/off per channel (in-app, email, push)
- "Pause for 1 hour / 4 hours / today" mute toggle
- Email digest (next section)

Build:
1. The mentions dropdown component
2. The full-page mentions inbox
3. The user_event_state schema + helpers
4. The realtime subscriber + bell badge
5. The notification preferences panel

Output: a mentions inbox that doesn't lie to the user about counts.

7. Build email + push digests

Most events shouldn't email users. A small subset should — and even those should be aggregated into a digest.

Digest pipeline:

1. User-config: email digest preference (immediate / hourly / daily / weekly / off)
2. For each user, on schedule, query their unread mentions + watched-resource events since last digest
3. Render them into one email using the `email` surface template
4. Send; mark events as digested

Schema additions:

user_digest_state (
  user_id           uuid pk
  email_frequency   text  -- 'immediate', 'hourly', 'daily', 'weekly', 'off'
  push_frequency    text
  last_email_at     timestamptz
  last_push_at      timestamptz
  email_send_hour   int   -- preferred hour of day for daily/weekly digest (in user tz)
  email_timezone    text  -- e.g. 'America/Los_Angeles'
)

Background job (every 15 min):
- Find users due for a digest based on frequency + last_sent + their tz
- For each user, query events
- If event count >= 1, render and send via your email provider
- Update last_email_at / last_push_at
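
The fiddly part is "due, in the user's timezone". A sketch of the daily case, assuming the schema above (the 20-hour guard stops the 15-minute job from double-sending):

// Users due a daily digest: it's their preferred local hour, and nothing went out recently.
async function usersDueDailyDigest(): Promise<string[]> {
  const rows = await db.execute(sql`
    select user_id from user_digest_state
    where email_frequency = 'daily'
      and extract(hour from now() at time zone email_timezone) = email_send_hour
      and (last_email_at is null or last_email_at < now() - interval '20 hours')`)
  return rows.map((r) => r.user_id as string)
}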

Anti-spam rules:
- Never digest a user who has only system-generated events (e.g. only "you logged in")
- Cap digest to 50 events; "and 23 more" if more
- Group events by object: "5 comments on Q1 Roadmap" beats 5 separate lines
- Skip the digest if the user is currently active in the app (last_seen_at within the past 5 minutes)

Subject line crafting:
- Daily: "5 updates in [Workspace]"
- Weekly: "Your week in [Workspace]: 3 docs, 12 comments, 2 mentions"
- Personalized when possible: "Alice mentioned you + 4 other updates"

Push notifications:
- ONLY for verbs the user has explicitly opted-in to (mentions, assignments default; everything else off)
- Throttle: max 5 push notifications per user per hour
- Quiet hours respect (user's tz; default 10pm-7am)
- Group notifications on iOS/Android when more than 3 in quick succession

Build:
1. The digest scheduler (cron or queue-based)
2. The digest renderer (uses the `email` surface templates)
3. The push notification dispatcher
4. The unsubscribe + preferences UI
5. The "currently active" detection

Output: digests that bring users back without burning their inbox.

8. Build the audit-log export

Compliance asks: "Can you export every event in our workspace for the last year?"

The same events table powers this — we just need an export endpoint.

API surface:

POST /api/workspaces/:id/audit/export
Body: { from: ISO date, to: ISO date, format: 'csv' | 'json' | 'ndjson' }
Auth: workspace admin only
Response: 202 Accepted with job ID; download link emailed when ready

For small ranges (<10K events): synchronous response.
For large ranges: background job, S3 upload, signed URL emailed to admin.

Audit-row rendering:
- Use the `audit` surface template (terse, machine-readable)
- Include all PII fields (actor names, IPs if logged, etc.)
- ISO 8601 timestamps
- One row per event

CSV columns:
- id, occurred_at, actor_id, actor_email, actor_name, verb, object_type, object_id, object_label, target_type, target_id, target_label, ip_address (if logged), user_agent (if logged), context_json
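
NDJSON is the easiest format to stream; a sketch that pages with the same keyset cursor as the feed, so memory stays flat whatever the range (pipe the generator into your S3 multipart upload for the async path):

// Stream the export one keyset page at a time.
async function* exportNdjson(workspaceId: string, from: Date, to: Date) {
  let cursor: { occurredAt: string; id: string } | null = null
  for (;;) {
    const rows = await db.execute(sql`
      select * from activity_events
      where workspace_id = ${workspaceId}
        and occurred_at between ${from} and ${to}
        ${cursor ? sql`and (occurred_at, id) < (${cursor.occurredAt}, ${cursor.id})` : sql``}
      order by occurred_at desc, id desc
      limit 1000`)
    if (rows.length === 0) return
    for (const row of rows) yield JSON.stringify(row) + '\n'
    const last = rows[rows.length - 1]
    cursor = { occurredAt: String(last.occurred_at), id: String(last.id) }
  }
}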

Compliance considerations:
- Audit log is IMMUTABLE: never let users edit/delete events directly (they can soft-delete the underlying resource; the event remains)
- Retention: minimum required by your compliance regime (SOC 2 = 1 year; HIPAA = 6 years; PCI = 1 year; GDPR = "as long as needed" but typically 2-7 years)
- Some events (security-relevant: login, role change, permission change, billing change) should NEVER be deleted — even if user requests deletion (legitimate-interest exemption under GDPR)
- Export should NOT be deletable by the requestor after creation (audit trail of audits)

Build:
1. The export endpoint (sync small / async large)
2. The CSV/JSON/NDJSON serializers
3. The S3 upload + signed URL flow
4. The "audit log retention policy" page (admin-facing)
5. The job-status polling endpoint

Output: compliance-ready audit log export, no extra schema needed.

9. Plan retention + storage tiers

At scale, the events table grows fast — 1M events/day is plausible for a busy SaaS.

Storage strategy:

Tier 1 — Hot (last 90 days):
- In primary Postgres
- Indexed for all feed queries
- Real-time read path

Tier 2 — Warm (90 days - 2 years):
- Either: same Postgres but partitioned (monthly partitions)
- Or: archived to S3/Parquet; queryable via Trino/Athena/DuckDB on demand
- Read path: fall back to warm tier for date ranges past hot cutoff

Tier 3 — Cold archive (>2 years):
- S3 Glacier / equivalent
- Compliance retention only
- Access via export job (slow, hours)

Postgres partitioning:

CREATE TABLE activity_events (
  ... fields ...
) PARTITION BY RANGE (occurred_at);

CREATE TABLE activity_events_2026_05 PARTITION OF activity_events
  FOR VALUES FROM ('2026-05-01') TO ('2026-06-01');

-- monthly partitions, dropped after retention period

Pros: fast-drop old partitions, smaller indexes per partition.
Cons: complex; requires automation; Postgres-specific; any unique index (the PK, the client_event_id index) must include the partition key, occurred_at.

Alternative: time-series DB (TimescaleDB if Postgres-compat, or ClickHouse for serious scale).

Retention rules:
- Free tier: 30 days
- Paid tier: 1 year (or whatever pricing tier sets)
- Enterprise tier: configurable (3-10 years for compliance)
- Security events (login, role, permission, billing): always 7 years regardless of tier
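
A sketch of the enforcement job for the partition-drop path. One caveat: per-plan retention inside shared monthly partitions still needs row deletes or read-time filtering; dropping partitions only enforces the global floor, and the always-7-years security events are assumed to live in their own table:

// Drop monthly partitions whose entire range is past the global retention window.
// Assumes activity_events_YYYY_MM naming and that the warm-tier S3 archive already ran.
async function enforceRetention(retentionMonths: number): Promise<void> {
  const cutoff = new Date()
  cutoff.setMonth(cutoff.getMonth() - retentionMonths)
  const rows = await db.execute(sql`
    select tablename from pg_tables where tablename like 'activity_events_20%'`)
  for (const { tablename } of rows) {
    const m = String(tablename).match(/_(\d{4})_(\d{2})$/)
    if (!m) continue
    const partitionEnd = new Date(Number(m[1]), Number(m[2]), 1)  // first day of the next month
    if (partitionEnd <= cutoff) {
      await db.execute(sql.raw(`drop table if exists ${tablename}`))  // O(1) vs a giant DELETE
    }
  }
}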

Build:
1. The partitioning strategy + automation (pg_partman or custom)
2. The "warm tier" S3 archive process
3. The query router that falls back hot → warm → cold
4. The retention enforcement job (drops old partitions / archives)
5. The plan/tier configuration

Output: a sustainable storage strategy that doesn't fail at $10M ARR.

10. Handle the edge cases

Edge cases I'll hit and how to handle:

1. Backfilling events (e.g. importing from another tool)
   - Use occurred_at = original time; ingested_at = now
   - Set client_event_id deterministically to dedupe (see the hash sketch after this list)
   - Skip notifications for backfilled events (mark visibility='system' or add a backfill flag)

2. Editing an event (oops, wrong verb logged)
   - Don't. Events are immutable.
   - Instead: emit a corrective event ('verb_corrected') and tombstone the original via context
   - Audit trail integrity matters more than UX cleanliness

3. Hard-deleting a user (GDPR right-to-erasure)
   - Replace actor_id with anonymized ID in events (keep events; nullify PII)
   - Replace context fields containing PII (user.name, user.email) with [redacted]
   - Don't drop events entirely — workspace still needs the audit trail

4. Event spike (someone scripted 100K events in a minute)
   - Rate-limit emitEvent at the API layer
   - Detect anomaly; alert
   - Don't crash ingestion under spike; spillover queue absorbs

5. Schema migration on context field
   - JSONB context lets you add fields freely
   - If you NEED to migrate (rename a key everywhere), write a one-time migration job; never block emit

6. Cross-workspace events (e.g. a user invited from another workspace)
   - Pick the target workspace as workspace_id
   - Reference source via context.source_workspace_id
   - Or: emit two events (one per workspace)

7. Out-of-order events (mobile offline → catches up after 3 days)
   - occurred_at is the truth; ingested_at differs
   - Feed sorted by occurred_at; the day-3 event slots into the right place
   - Notifications: smart logic to avoid emailing about a 3-day-old mention as if new

8. Event bursts from automations (workflow triggers 50 events on save)
   - Group at render time: "Workflow X ran 50 actions"
   - Or emit as a parent event with children in context

9. Verb deprecation (old verb is renamed)
   - Keep old templates registered (old events still need to render)
   - Map old → new in template registry
   - Never edit historical event verb

10. Workspace rename (event renders 'in old workspace' incorrectly)
    - Use deps.workspaceName(workspaceId) — fetches CURRENT name
    - Document this convention; it's a tradeoff
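
For item 1, deterministic client_event_id generation is a one-line hash; a sketch (source and field names illustrative):

import { createHash } from 'node:crypto'

// Re-running the same import yields the same ids, and the partial unique
// index on client_event_id silently drops the duplicates.
function backfillEventId(source: string, externalId: string, verb: string): string {
  return createHash('sha256').update(`${source}:${externalId}:${verb}`).digest('hex').slice(0, 32)
}

// e.g. backfillEventId('linear-import', issue.id, 'task_created')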

For each, walk me through the code change and the user-facing impact.

Output: the corner cases handled before you hit them in production.

11. Recap

What you've built:

  • Canonical events schema (single source of truth)
  • Non-blocking event ingestion with retry-safe spillover
  • Template-per-verb-per-surface rendering with i18n
  • Workspace feed (filter, paginate cursor-based, real-time)
  • Per-resource timeline (compact, embedded)
  • Mentions feed with read/unread tracking
  • Email + push digests (frequency-configurable, anti-spam)
  • Audit log export (CSV/JSON, async for large ranges)
  • Retention tiers (hot / warm / cold)
  • Edge cases handled (GDPR delete, backfills, out-of-order, etc.)

What you're explicitly NOT doing in v1:

  • "Following" specific resources for digest opt-in (v2; needs subscriptions schema)
  • Activity-driven recommendations ("you might also like…") (v3+)
  • ML-based event ranking (v3+)
  • Cross-workspace global feeds (v3+)
  • Aggregated insights ("you commented 47 times this week") — could be derived cheaply from the events table

Ship v1 with a single canonical schema; everything else compounds from there. The biggest mistake teams make: building 4 different per-feed denormalized tables in v1 and never being able to add a new surface without a migration. One table, many renders. That's the move.
