# CSV Import Flows: Help Customers Bring Their Data Without Breaking Production

CSV Import Strategy for Your New SaaS
**Goal**: Ship a CSV import flow customers actually trust — column mapping that handles their messy real-world spreadsheets, validation that explains errors in plain English, idempotent imports that can be re-run safely, async processing for large files, and a result UI that shows what worked and what didn't. Avoid the failure modes where founders ship a "browse for file" button that crashes on row 50,000 (sync processing dying on big files), expects perfect column headers (real customer files never match), or gives generic "import failed" errors with no path to recovery.
**Process**: Follow this chat pattern with your AI coding tool such as Claude or v0.app. Pay attention to the notes in [brackets] and replace the bracketed text with your own content.
**Timeframe**: Basic upload + parse + insert flow shipped in 2-3 days. Column mapping + validation + async processing in week 1. Error reporting + re-import + audit in week 2. Quarterly review baked in.
## Why Most Founder CSV Imports Are Broken
Three failure modes hit founders the same way:
- **Sync processing.** Founder writes `app.post('/import', upload, async (req, res) => { for (const row of parseCsv(req.file)) { await db.insert(row) } res.send('done') })`. Customer uploads a 50K-row file. The HTTP request times out at 30 seconds. The customer doesn't know if half the rows imported or none. The founder spends two hours reconciling state by hand.
- **Header-strict parsing.** Founder builds the importer assuming the CSV has columns `email,first_name,last_name`. Customer's export has `Email Address,First,Last`. Importer rejects every row. Customer rage-tweets. Founder ships a "header mapping" hack as v2 that's now technical debt forever.
- **Opaque errors.** Import fails. UI says "Import failed (3 errors)". Customer has no idea which rows, which fields, what to fix. They re-upload the same file three times before contacting support. Support has no logs because the founder didn't persist row-level status.
The version that works is structured: file uploaded to object storage, parsed asynchronously, validated row by row, idempotent against re-import, errors surfaced per row with actionable messages, and results visible in a persistent UI customers can come back to.
This guide assumes you have already done Authentication (imports are user-scoped), have considered Background Jobs Providers (async processing is mandatory), have shipped Audit Logs (every import is a high-value audit event), and have a strategy for File Storage (you'll persist the original file).
## 1. Decide What You're Importing
Before writing code, define the import surface precisely. Different shapes, different design decisions.
Help me design the import surface for [my product].
The common import shapes:
**1. Reference-data imports** (the 60% case)
- Importing a list of independent records: contacts, products, leads, properties
- Each row is one entity; no relationships between rows
- Easy: just validate and insert
**2. Hierarchical imports**
- Records have parent/child relationships (companies → contacts; orders → line items)
- Either: split into two CSVs, or use a foreign-key column the customer fills in
- Harder: the parent must exist before the child, or be created in the same import
**3. Update-or-create (upsert) imports**
- Customer has data with stable IDs and wants to update existing records or create missing ones
- Requires a "match key" column (email, external_id, custom UID)
- Critical: idempotent — re-running the same file shouldn't duplicate
**4. Bulk-action imports**
- Customer uploads a list of IDs to act on (delete these 500 contacts; tag these 1000 leads)
- Action is the same for every row; row data is just identifiers
- Lighter validation; heavier on action confirmation
**Critical design decisions for v1**:
1. **Which entity types do customers want to import?** Pick the top 1-3 by frequency; ship those first. Don't ship import for 12 entity types in v1.
2. **Are these new-only or upsert?** Default to new-only in v1; add upsert when customers ask for "I want to re-run my export from system X."
3. **What's the match key for upsert?** (Email? External ID? Custom UID column?) This decision is sticky once shipped.
4. **What's the maximum file size?** Set a hard limit (typically 100MB or 500K rows). Over that, suggest splitting the file or using the API.
5. **Are there required columns?** Make the list short — even a single required column kills imports if the customer's file lacks it.
**Anti-patterns to avoid**:
- Importing every entity type in v1 (over-engineered)
- Allowing arbitrary nested data (causes complex validation)
- Hard limits that aren't documented (customer hits 10,001 rows on a "10K limit" without warning)
- "Import everything" as a single flow (use one importer per entity type)
For my product, ask:
- What are the top 3 entity types my customers ask to import?
- For each: new-only or upsert? If upsert, what's the match key?
- What columns are mandatory vs optional?
- What's the data my product DOESN'T accept via import (and why)?
Output:
1. The v1 importer catalog (1-3 entity types)
2. The required + optional columns per type
3. The match key strategy per type
4. The size limits
5. The entity types you're NOT shipping in v1 (and why)
The single most undervalued upfront decision: picking which entity types matter. Most founders start with "import contacts" because that's easy to imagine. Real customer pain might be "import deals from Salesforce" or "import line items from QuickBooks" — find out before building.
## 2. Design the Upload + Parse Step
The customer's first interaction is the upload. Get it right.
Help me design the upload UI and the parse step.
The pattern:
**Upload UI**:
- A "Browse" or drag-and-drop zone
- Accept: `.csv`, `.tsv`, optionally `.xlsx`/`.xls` (Excel — common ask)
- Show file size and row count after parse (preview before commit)
- For large files, upload to object storage directly (presigned URL pattern), not through your app server
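A minimal sketch of that presigned-URL pattern, assuming S3-compatible storage via AWS SDK v3 (the bucket env var and key layout are illustrative):

```ts
import { randomUUID } from "node:crypto";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});

// Returns a short-lived URL the browser PUTs the file to directly,
// so large uploads never pass through the app server.
export async function createUploadUrl(workspaceId: string, fileName: string) {
  const key = `imports/${workspaceId}/${randomUUID()}-${fileName}`;
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: process.env.IMPORT_BUCKET!, Key: key }),
    { expiresIn: 900 } // 15 minutes is plenty for one upload
  );
  return { url, key }; // store `key` on the csv_imports row as file_url
}
```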
**Parse step (server-side)**:
1. Validate file size and type
2. Stream-parse the first ~100 rows (don't parse the whole file yet)
3. Detect encoding (UTF-8 / UTF-16 / Latin-1) — most parsers handle this with a hint
4. Detect delimiter (comma / tab / semicolon — yes, semicolons are real, especially European exports)
5. Detect headers (first row is usually headers; verify)
6. Detect column types (string / number / date / boolean) by sampling
7. Return preview: column names, types, sample values, total row count (estimated)
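A sketch of the preview parse with Papa Parse — the empty `delimiter` option turns on its delimiter auto-detection; the return shape here is illustrative:

```ts
import Papa from "papaparse";

// Parse only the first ~100 rows for the preview — never the whole file.
export function parsePreview(csvText: string) {
  const result = Papa.parse<Record<string, string>>(csvText, {
    header: true,      // first row becomes field names
    preview: 100,      // stop after 100 data rows
    delimiter: "",     // "" = auto-detect comma / tab / semicolon
    skipEmptyLines: true,
  });
  return {
    columns: result.meta.fields ?? [],
    delimiter: result.meta.delimiter,
    sampleRows: result.data.slice(0, 5), // the 5-row preview for the UI
    parseErrors: result.errors,
  };
}
```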
**Storage**:
```sql
CREATE TABLE csv_imports (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id),
workspace_id UUID NOT NULL REFERENCES workspaces(id),
entity_type TEXT NOT NULL, -- "contacts", "deals", etc.
file_url TEXT NOT NULL, -- object-storage URL for the original
file_size_bytes BIGINT NOT NULL,
total_rows INT, -- populated after parse
status TEXT NOT NULL DEFAULT 'uploaded', -- uploaded / parsing / mapping / processing / completed / failed
column_mapping JSONB, -- customer's field-mapping decisions
rows_succeeded INT DEFAULT 0,
rows_failed INT DEFAULT 0,
rows_skipped INT DEFAULT 0,
started_at TIMESTAMP,
completed_at TIMESTAMP,
error_message TEXT,
created_at TIMESTAMP NOT NULL DEFAULT NOW()
);
CREATE TABLE csv_import_rows (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
import_id UUID NOT NULL REFERENCES csv_imports(id) ON DELETE CASCADE,
row_number INT NOT NULL,
status TEXT NOT NULL DEFAULT 'pending', -- pending / succeeded / failed / skipped
raw_data JSONB NOT NULL, -- the original row
resulting_entity_id UUID, -- pointer to the created/updated entity, if succeeded
error_message TEXT, -- if failed
processed_at TIMESTAMP
);
CREATE INDEX idx_import_rows_import ON csv_import_rows(import_id);
```
Critical implementation rules:
- Use a battle-tested parser: Papa Parse (JS), `fast-csv` (Node), Python's `pandas`/`csv` — never roll your own.
- Stream large files. Loading a 100MB CSV into memory crashes the server.
- Persist the original file to object storage; you'll need it for re-imports and debugging.
- Show the preview before processing. Customer confirms "yes, that's my data" before kicking off the import.
- Limit upload sizes at the gateway (Nginx, CloudFront) so massive files don't even reach your app.
Don't:
- Parse the entire file synchronously to count rows (estimate from file size; correct after async parse)
- Trust the file extension (verify the actual content type)
- Skip encoding detection (mojibake destroys customer trust)
- Run the upload through your app server when object storage works fine
Output:
- The upload UI component
- The presigned-upload flow if using S3 / R2 / Vercel Blob
- The parse function with encoding/delimiter detection
- The csv_imports + csv_import_rows schema
- The preview UI
The biggest UX win: **showing a 5-row preview after upload.** Customer eyeballs "yes those are the right columns and the values look right" before they start the import. Catches misencoded files, wrong-file uploads, and unexpected column orders.
---
## 3. Build a Forgiving Column Mapper
Real CSVs never match your schema. The column mapper is what turns "Email Address" into `email` and "First" into `first_name` without making the customer rename their export.
Design the column mapping UI.
The pattern:
After parsing the preview, show a mapping screen:
- Left column: "Your CSV column" (read-only, taken from the file)
- Right column: "Maps to" (dropdown with your product's fields)
- For each CSV column: pre-select the most likely match using fuzzy heuristics
- Customer adjusts as needed
- A "skip" option for columns the customer doesn''t want imported
- Required fields are flagged: if not mapped, can''t proceed
Auto-mapping heuristics:
- Exact match (case-insensitive): "email" → email
- Common synonyms: "Email Address" / "E-mail" / "Mail" → email
- Snake/camel/kebab variants: "first_name" / "firstName" / "first-name" / "First Name" → first_name
- Pluralization: "tags" / "tag" → tags
- Trailing-number stripping: "phone1" / "phone2" → both map to "phone" by default
Build a fuzzy-match function that scores each CSV column against each product field using:
- Exact match: 100
- Synonym match: 80
- Levenshtein distance < 3: 60
- Substring match: 40
- No match: 0
Pre-select if score >= 60; let customer override.
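A sketch of that scoring function — the synonym table is an assumption you'd maintain per product field:

```ts
// Assumed per-field synonym table (normalized, lowercase).
const SYNONYMS: Record<string, string[]> = {
  email: ["email address", "e-mail", "mail"],
  first_name: ["first", "firstname", "given name"],
};

// Collapse snake/camel/kebab/space variants into one canonical form.
function normalize(s: string): string {
  return s
    .replace(/([a-z0-9])([A-Z])/g, "$1 $2") // split camelCase
    .toLowerCase()
    .replace(/[\s_-]+/g, " ")
    .trim();
}

function levenshtein(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, () =>
    new Array(b.length + 1).fill(0)
  );
  for (let i = 0; i <= a.length; i++) dp[i][0] = i;
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
  return dp[a.length][b.length];
}

// Score one CSV header against one product field, per the rubric above.
export function scoreMatch(csvColumn: string, productField: string): number {
  const col = normalize(csvColumn);
  const field = normalize(productField);
  if (col === field) return 100;
  if ((SYNONYMS[productField] ?? []).includes(col)) return 80;
  if (levenshtein(col, field) < 3) return 60;
  if (col.includes(field) || field.includes(col)) return 40;
  return 0;
}
```

With this, "Email Address" scores 80 against `email` and "firstName" scores 100 against `first_name`, so both get pre-selected.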
Column mapping persistence:
- Save the mapping per (workspace_id, entity_type)
- On next import, pre-fill from the saved mapping
- Customer sees "Using your previous mapping; review below" with confirmation
Field-level transformation options:
For each mapping, allow common transformations:
- Trim whitespace (default ON)
- Lowercase / uppercase
- Date format hint ("MM/DD/YYYY" vs "DD/MM/YYYY" — Europe vs US is a real problem)
- Boolean parsing ("true" / "yes" / "1" all → true)
- Default value if blank (e.g., default tag to "imported")
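A minimal sketch of applying those options (the `Transform` shape and truthy set are illustrative, not a complete transformation engine):

```ts
type Transform = { trim?: boolean; lowercase?: boolean; defaultValue?: string };

const TRUTHY = new Set(["true", "yes", "1", "y"]);

// Apply the per-field transformation options chosen in the mapper.
export function applyTransforms(raw: string, t: Transform): string {
  let v = t.trim !== false ? raw.trim() : raw; // trim defaults to ON
  if (t.lowercase) v = v.toLowerCase();
  if (v === "" && t.defaultValue !== undefined) v = t.defaultValue;
  return v;
}

// "true" / "yes" / "1" all parse to true; everything else is false.
export function parseBoolean(raw: string): boolean {
  return TRUTHY.has(raw.trim().toLowerCase());
}
```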
Don't:
- Force the customer to rename their CSV columns to match your schema
- Lose the saved mapping when fields change (handle gracefully: prompt customer to remap)
- Allow mapping the same product field to multiple CSV columns (validate this; fail clearly)
Output:
- The fuzzy-match function
- The mapping UI component
- The persistence schema for saved mappings
- The transformation options per field
The single most important UX detail: **pre-selecting good matches.** A customer who sees "Email Address → Email (auto-matched)" with 8 of 10 columns pre-mapped just clicks through. A customer who sees 10 unmapped dropdowns rage-quits.
---
## 4. Validate Row by Row, Not All-Or-Nothing
A 50K-row file with 30 bad rows shouldn't fail entirely. Process valid rows; collect errors; let the customer fix and re-import the failed ones.
Design the validation strategy.
The pattern:
For each row:
- Apply the column mapping to extract values
- Apply transformations (trim, lowercase, date parse, etc.)
- Validate each field:
- Required: present?
- Type: parseable as the declared type?
- Format: matches expected pattern (email syntax, URL syntax)?
- Range: numeric value within reasonable bounds?
- Foreign key: referenced entity exists?
- Uniqueness: duplicate within the file or against existing data?
- If valid: insert/upsert; record success
- If invalid: record the error with row_number and field; keep going
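A sketch of that per-row shape for a hypothetical contacts importer — the fields, the email regex, and the result type are illustrative, not a complete validator:

```ts
type RowResult =
  | { ok: true; values: Record<string, unknown> }
  | { ok: false; field: string; error: string };

const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // requires a TLD, per the examples below

// Validate one mapped row; return a structured error instead of throwing,
// so one bad row never halts the import.
export function validateContactRow(row: Record<string, string>): RowResult {
  if (!row.email) {
    return { ok: false, field: "email", error: "Required field missing: email" };
  }
  if (!EMAIL_RE.test(row.email)) {
    return { ok: false, field: "email", error: `Invalid email format: '${row.email}'` };
  }
  return {
    ok: true,
    values: { email: row.email.toLowerCase(), first_name: row.first_name ?? null },
  };
}
```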
Common validation errors and what to do:
| Error | Action |
|---|---|
| Required field missing | Skip row; record error |
| Invalid email format | Skip row; record error |
| Invalid date format | Skip row; record error with sample expected format |
| Duplicate within file | Skip the second occurrence; record warning |
| Duplicate against existing data | Either skip OR upsert based on import setting |
| Foreign key not found | Skip row; record error suggesting customer add the parent record |
| Value too long for field | Skip row; record error |
| Value type mismatch (e.g., "abc" in a number field) | Skip row; record error |
Per-row state:
For each `csv_import_rows` row, set:
- `status`: succeeded / failed / skipped
- `error_message`: human-readable error ("Invalid email format: 'not-an-email'")
- `resulting_entity_id`: if succeeded, the ID of the created/updated record
Critical implementation rules:
- Don't halt the import on error. A bad row shouldn't stop a good one.
- Errors must be human-readable. "Email validation failed" is bad; "Invalid email format: 'x@'" is good.
- Reference the row by 1-indexed row number (row 1 = first data row after the header) — that's how customers see it in their spreadsheet app.
- Limit collected errors to a reasonable cap (e.g., first 1000 errors per import) to keep the UI usable.
- Aggregate similar errors. "47 rows: invalid email format" is more useful than 47 individual errors.
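One way to sketch that aggregation — it assumes error messages embed row-specific values in single quotes (as in the validator above), so stripping them buckets similar errors together:

```ts
// Group failures by error template so the UI can show
// "47 rows: Invalid email format" instead of 47 separate entries.
export function aggregateErrors(failures: { rowNumber: number; error: string }[]) {
  const groups = new Map<string, number[]>();
  for (const f of failures) {
    const template = f.error.replace(/'[^']*'/g, "'…'"); // drop row-specific values
    const rows = groups.get(template) ?? [];
    rows.push(f.rowNumber);
    groups.set(template, rows);
  }
  return [...groups.entries()].map(([error, rowNumbers]) => ({
    error,
    count: rowNumbers.length,
    rowNumbers: rowNumbers.slice(0, 50), // cap what the UI renders per group
  }));
}
```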
Don't:
- Roll back the entire import on a single failure (frustrating)
- Skip rows silently (customer can't fix what they don't see)
- Use stack traces as error messages (technical leak; useless to customers)
Output:
- The validation function per entity type
- The error message templates (one per error class)
- The per-row status tracking code
- The aggregation logic for error reporting
The single biggest customer trust win: **showing exactly which rows failed and why, in plain English, with the original values.** A customer who sees "Row 47: invalid email 'jdoe@example' (missing TLD)" can fix and re-import in 30 seconds. A customer who sees "Import partially failed (47 errors)" is filing a support ticket.
---
## 5. Process Asynchronously
Imports take time. Don't make the customer wait on the page.
Design the async processing pipeline.
The pattern:
Phase 1: Receive (HTTP handler)
- Customer confirms the mapping; clicks "Start import"
- Backend updates `csv_imports.status = 'processing'`
- Backend enqueues a background job (per Background Jobs Providers)
- HTTP response returns immediately with the import ID and a "see results" URL
Phase 2: Process (background worker)
- Worker streams the CSV from object storage
- For each row: validate, transform, insert/upsert (per step 4)
- Update `csv_import_rows` per row
- Update `csv_imports.rows_succeeded` / `rows_failed` counters periodically (every 100 rows)
- On completion: set `status = 'completed'`; send notification (email or in-app)
Progress UX:
A page at `/imports/[import-id]` showing:
- Status: parsing / processing / completed / failed
- Progress bar: rows processed / total
- Current rate: rows/second
- Estimated time remaining
- Once complete: success/failure breakdown
Notifications:
When the import completes:
- In-app: badge / notification dropdown
- Email: "Your import of [filename] is done. [N] succeeded, [M] failed. See details: [link]"
Concurrent import limits:
- One import per workspace at a time (queue subsequent ones)
- Or: cap concurrent imports per worker; queue overflow
- Prevents one large import from starving others
Critical implementation rules:
- Stream, don't load. A 100K-row CSV in memory is a recipe for OOM crashes.
- Batch inserts. Don't insert one row at a time; batch 500-1000 rows per transaction.
- Use transactions per batch, not per import. A failed batch shouldn't roll back successful ones.
- Log progress every N rows so the UI can show "Processed 12,450 of 50,000."
- Handle worker crashes. A killed worker should leave the import in a recoverable state (resumable from last batch).
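A minimal worker-loop sketch tying these rules together. `db`, `streamCsvRows`, and the checkpoint helpers are assumed application code, not a specific library API:

```ts
import { db, streamCsvRows } from "./import-helpers"; // assumed application code

const BATCH_SIZE = 500;

export async function processImport(importId: string, fileUrl: string) {
  // Resume support: the checkpoint is the last row number committed.
  const lastCommitted = await db.getCheckpoint(importId);
  let batch: { rowNumber: number; row: Record<string, string> }[] = [];
  let rowNumber = 0;

  for await (const row of streamCsvRows(fileUrl)) {
    rowNumber++;
    if (rowNumber <= lastCommitted) continue; // already done before a crash
    batch.push({ rowNumber, row });
    if (batch.length >= BATCH_SIZE) {
      await commitBatch(importId, batch);
      batch = [];
    }
  }
  if (batch.length > 0) await commitBatch(importId, batch);
  await db.completeImport(importId);
}

// One transaction per batch: a failed batch rolls back 500 rows, not the
// whole import, and the checkpoint only advances on a successful commit.
async function commitBatch(
  importId: string,
  batch: { rowNumber: number; row: Record<string, string> }[]
) {
  await db.transaction(async (tx) => {
    for (const { rowNumber, row } of batch) {
      await tx.processRow(importId, rowNumber, row); // validate + insert/upsert per step 4
    }
    await tx.setCheckpoint(importId, batch[batch.length - 1].rowNumber);
  });
}
```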
Don't:
- Process inside the HTTP request
- Hold the connection open while processing
- Skip progress logging (5-minute imports with no progress feel broken)
Output:
- The job-processing code
- The progress-update strategy
- The notification mechanism
- The concurrency-limiting policy
The single biggest reliability win: **batched inserts with progress checkpoints.** A worker that dies at row 12,450 of 50,000 can resume from row 12,000 if you checkpoint every 500 rows. Without checkpoints, one crash means the customer re-imports from scratch.
---
## 6. Make It Idempotent (and Re-Importable)
Customers will re-run imports, either by accident or because they fixed errors. The flow has to handle this gracefully.
Design idempotency and re-import.
Idempotency strategies:
Strategy A: Match-key upsert
- For entity types that support upsert: match against a stable key (email, external_id, slug)
- On match: update existing record
- On no-match: create new record
- The same file, run twice, produces the same end state
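A sketch of Strategy A in Postgres, assuming a unique index on `(workspace_id, email)` and a postgres.js-style tagged-template `sql` client:

```ts
import sql from "./db"; // assumed: a postgres.js-style tagged-template client

// Re-running the same file updates matched contacts instead of duplicating them.
export async function upsertContact(
  workspaceId: string,
  importId: string,
  row: { email: string; firstName: string; lastName: string }
) {
  await sql`
    INSERT INTO contacts (workspace_id, email, first_name, last_name, source_import_id)
    VALUES (${workspaceId}, ${row.email}, ${row.firstName}, ${row.lastName}, ${importId})
    ON CONFLICT (workspace_id, email)
    DO UPDATE SET first_name = EXCLUDED.first_name,
                  last_name  = EXCLUDED.last_name
  `;
}
```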
Strategy B: Import-tagged inserts
- Each import gets a unique `import_id`
- Every created record stores its `source_import_id`
- Re-running creates a new import with new IDs (not idempotent at the record level, but traceable)
- Use when upsert isn't practical (no stable key)
Strategy C: Fuzzy matching with confirmation
- Match by approximate fields (email + first name + last name)
- For matches, show "We found similar records — update them, skip them, or create duplicates?"
- More work; reserved for high-stakes imports
Re-import flow:
After an import completes:
- If `rows_failed > 0`: show a "Re-import failed rows" button
- That button generates a new CSV (just the failed rows + their original errors)
- Customer downloads it, fixes locally, uploads again
- The re-import only processes those rows; doesn't duplicate the previously-succeeded ones
Implementation:
- The "fix this" download is a server-side endpoint that streams a CSV of failed rows
- The customer's edits become a new import (not a continuation of the old one)
- Audit links the new import to the old via a `parent_import_id`
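A sketch of that streaming endpoint, using Express and the `csv-stringify` package; `db.streamFailedRows` is an assumed data-access helper:

```ts
import express from "express";
import { stringify } from "csv-stringify";
import { db } from "./import-helpers"; // assumed data-access layer

const app = express();

// Streams failed rows (original values + error) so the customer can fix
// locally and re-upload. Columns are inferred from the first record.
app.get("/imports/:id/failed-rows.csv", async (req, res) => {
  res.setHeader("Content-Type", "text/csv");
  res.setHeader("Content-Disposition", 'attachment; filename="failed-rows.csv"');

  const out = stringify({ header: true });
  out.pipe(res);
  for await (const r of db.streamFailedRows(req.params.id)) {
    out.write({ row_number: r.row_number, ...r.raw_data, error: r.error_message });
  }
  out.end();
});
```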
Critical rules:
- Document idempotency clearly. Customers need to know: "Re-running this CSV will UPDATE existing records, not create duplicates." Or the opposite — make it explicit.
- Don't silently update. If the customer expects new-only and gets upsert, surprises happen.
- Allow undo for the last import. A "rollback this import" button (where feasible) is a great safety net.
Undo / rollback:
For new-only imports: easy — `DELETE FROM entities WHERE source_import_id = ?`.
For upsert imports: hard — you'd need to store the previous state of each updated row. Most products don't support undo for upsert; document this.
Don't:
- Rely on row order for idempotency (CSV row order carries no guarantees)
- Make undo destructive without confirmation (a careless click could erase real data)
Output:
- The match-key choice per entity type
- The re-import flow UI
- The "download failed rows" endpoint
- The undo policy per entity type
The single biggest support-load reduction: **the "download failed rows" button.** A customer who can fix 47 broken rows in their original spreadsheet and re-upload is self-served. A customer who has to re-run the whole 50K-row file from scratch is filing a ticket.
---
## 7. Build the Results UI
After the import, the customer needs to see what happened. Make this UI persistent and shareable.
Design the import-results UI.
The pattern:
A page at `/imports/[import-id]` (linkable, shareable) that shows:
Summary card:
- Status badge (Completed / Failed)
- File name + size
- Duration
- Total rows: N
- Succeeded: M
- Failed: K
- Skipped: J
- Imported by: [user]
- Date: [timestamp]
Tabs:
- All rows (default; paginated)
- Failed rows (with error messages)
- Succeeded rows (with links to created/updated records)
- Skipped rows (duplicates etc.)
Per-row view:
- Row number (matches customer's spreadsheet)
- Status icon (green check / red X / yellow skip)
- The original raw data (for context)
- The resulting entity (if succeeded; clickable link)
- The error message (if failed; specific and human-readable)
Actions:
- "Download failed rows as CSV" — for customer to fix and re-upload
- "Download all rows as CSV" — for record-keeping
- "Retry failed rows" — generates a new import auto-populated with the failed rows
- "Undo this import" — if undo is supported for this entity type
- "Notify support" — pre-fills a support form with the import context (lower-friction than a generic ticket)
Permissions:
Only the user who started the import + workspace admins can see the results page.
Retention:
Keep import results for at least 90 days for paying customers; 30 days for free. Useful for both customer retrospection and audit.
Don't:
- Show only summary numbers (customers want row-level detail)
- Hide failed rows behind a click (surface them prominently)
- Use vague status messages ("Processing complete" tells nothing — "53 rows imported, 7 failed; click to see details" tells everything)
Output:
- The results page component
- The CSV-download endpoints
- The retry / undo actions
- The retention policy
- The permission check
The single most-used part of the results UI: **the "download failed rows" CSV.** Customers fix locally and re-upload faster than they'd ever fix in a web UI. Embrace that workflow.
---
## 8. Handle Edge Cases
Real-world CSVs surface edge cases your tests didn't. Plan for them.
The edge-case checklist.
Edge case 1: BOM-prefixed files
- Excel often saves CSVs with a UTF-8 byte-order mark
- Naive parsers see the BOM as part of the first column header
- Strip BOM during parse
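A one-liner sketch of the fix:

```ts
// U+FEFF at position 0 is the BOM; without this, the first header parses
// as "\uFEFFemail" instead of "email" and the auto-mapper misses it.
export function stripBom(text: string): string {
  return text.charCodeAt(0) === 0xfeff ? text.slice(1) : text;
}
```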
Edge case 2: Mixed quoting
- Some rows quote fields ("Smith, John"); some don't
- Use a real CSV parser (Papa Parse, fast-csv); never split on commas
Edge case 3: Embedded newlines
- A field can contain `\n` if quoted
- Naive line-based parsers split mid-field; use a stateful parser
Edge case 4: Inconsistent column counts per row
- A row has more columns than the header suggests (extra commas in unquoted text)
- Skip the row; flag as malformed; provide row number
Edge case 5: Massive files
- Customer uploads 5GB CSV
- Reject above documented limit; suggest API upload or splitting
Edge case 6: Excel exports with formatting
- "$1,000.00" appears as a string, not a number
- "1/15/26" can be 2026 or 1926 depending on Excel version
- Type-coerce thoughtfully; document expected formats
Edge case 7: Encoding hell
- Latin-1, UTF-16, Windows-1252 all real
- Detect encoding; convert to UTF-8 before processing
- If detection fails, prompt customer to specify
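A sketch using the `chardet` and `iconv-lite` packages (one common Node pairing; the null-fallback behavior is an assumption about your UI flow):

```ts
import chardet from "chardet";
import iconv from "iconv-lite";

// Best-effort detection, then convert to UTF-8. Returns null when the
// guess is unusable so the UI can prompt the customer for the encoding.
export function toUtf8(buffer: Buffer): string | null {
  const encoding = chardet.detect(buffer); // e.g. "UTF-8", "windows-1252"
  if (!encoding || !iconv.encodingExists(encoding)) return null;
  return iconv.decode(buffer, encoding);
}
```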
Edge case 8: Customer uploads to wrong importer
- They run "import contacts" on a deals CSV
- Detect: required columns missing
- Fail fast with a clear message before processing
Edge case 9: Resumable uploads
- Customer's connection drops mid-upload
- Use multipart upload with resume (S3 / R2 support this)
Edge case 10: Massive validation errors
- 50K rows, 50K errors (everything was wrong)
- Cap stored errors; show "first 1000; download full CSV for the rest"
- Likely a wrong-file scenario; surface that hypothesis
Output:
- The edge-case test suite
- The parser configuration choices
- The error responses for each edge case
- The customer-facing docs covering these
---
## 9. Document for Customers
Customers won't use what they can't figure out. Docs are part of the v1 ship.
Help me draft the customer-facing CSV import documentation.
Sections:
Overview
- What can be imported (the entity types you support)
- What can''t (so they don''t waste time)
- Where to find the importer in the UI
File format requirements
- CSV / TSV / XLSX accepted
- Recommended encoding: UTF-8
- Delimiter detection notes
- Date format expectations (with examples)
- Header row required
Required and optional columns (per entity type)
- A table: column name, type, required?, example
- Common synonyms the importer auto-detects
Step-by-step walkthrough
- Upload → preview → mapping → review → start
- Screenshots of each step
Re-importing
- How to fix failed rows
- How to re-import
- How upsert works (if applicable)
FAQ
- Why did my file not parse? (encoding, delimiter, etc.)
- Why are dates wrong after import? (format expectations)
- Can I undo an import? (per-entity policy)
- What's the maximum file size?
- How long does an import take?
Sample CSV files
- Provide a downloadable example for each entity type
- Customer can fill it in for first-time success
Output:
- The docs page structure
- Sample CSVs per entity type
- The FAQ
- The walkthrough with screenshots
The single biggest predictor of import success: **a downloadable sample CSV.** A customer who downloads a working example, fills in their data, and uploads succeeds the first time. A customer who guesses at the format from prose docs fails on the first try.
---
## 10. Quarterly Review
Imports rot. New entity fields get added; old validation rules go stale; customer pain shifts. Quarterly review keeps the importer healthy.
Quarterly review checklist.
Health metrics:
- Total imports in the period
- Success rate (rows succeeded / total rows attempted)
- Common error codes (what fails most?)
- Average file size and row count
- p50 / p95 import duration
Customer impact review:
- Top 5 import-related support tickets — what pattern?
- Any churn citing import limitations?
- What new entity types have customers asked to import?
System health:
- Are large imports stable? (Any worker crashes mid-import in the period?)
- Are there hung imports stuck in "processing" forever? (Cleanup logic working?)
- Storage usage: how many import files are retained?
Schema drift:
- New fields added to entities since last review — are they importable?
- Removed fields — do old saved column mappings break?
Output:
- Health snapshot
- 3 fixes to ship next quarter
- 1 entity type to add (or NOT to add)
- 1 validation rule to relax (the most-frustrating one)
---
## What "Done" Looks Like
A working CSV import system in 2026 has:
- **1-3 importers shipped**, each for a specific entity type customers actually want
- **Async processing** with progress, never sync-blocking
- **Forgiving column mapper** with fuzzy auto-match and saved mappings
- **Row-level validation** with human-readable errors
- **Idempotent or upsert semantics** clearly documented
- **Re-import of failed rows** as a single click
- **Persistent results UI** with download options
- **Customer docs** with sample CSVs and FAQ
- **Audit logging** of every import
- **Quarterly review** baked into the team rhythm
The hidden cost in CSV imports isn't building the parser — it's **the slow accumulation of customer-specific quirks** (Excel exports with $-formatting, EU date formats, semicolon delimiters from German exports, BOM-prefixed UTF-8). Build for the messy real world from day one; don't ship a strict parser and patch quirks reactively. The 80% rule: 80% of customer files have at least one quirk that strict parsing rejects. Forgiving from the start beats restrictive forever.
---
## See Also
- [Public API](public-api-chat.md) — APIs are the alternative for technical users; CSV is for everyone else
- [Audit Logs](audit-logs-chat.md) — every import is a high-value audit event
- [Background Jobs Providers](https://www.vibereference.com/backend-and-data/background-jobs-providers) — async processing depends on this
- [File Storage Providers](https://www.vibereference.com/cloud-and-hosting/file-storage-providers) — original files persist here
- [Notification Providers](https://www.vibereference.com/backend-and-data/notification-providers) — completion notifications go here
- [Multi-Tenant Data Isolation](multi-tenancy-chat.md) — imports are workspace-scoped
- [Roles & Permissions (RBAC)](roles-permissions-chat.md) — who can import? require admin / member role
- [Onboarding Email Sequence](onboarding-email-sequence-chat.md) — first-time imports often happen during onboarding
[⬅️ Growth Overview](README.md)