Need help with your JSON?
Try our JSON Formatter tool to automatically identify and fix syntax errors in your JSON.
Building Export Functionality for Multiple Formats
If your app already works with structured JSON, export is not just a download button. It is a data contract: one canonical dataset, a clear transformation for each output format, and a delivery path that still works when files get large. This article focuses on the practical implementation choices behind JSON, CSV, XLSX, and PDF exports.
Start with an Export Contract
Before writing serializers, decide what users are actually exporting. For a JSON formatter or any structured data tool, that usually means separating the raw data from the presentation layer.
- Canonical payload: Keep one source object for export, rather than rebuilding data separately for CSV, PDF, and JSON.
- Raw vs. flattened output: JSON should preserve nesting, while CSV/XLSX usually need a tabular representation with stable columns.
- Filenames and versioning: Include the dataset name and date, such as orders-2026-03-11.csv, and add a schema version if downstream systems depend on it.
- Permissions: Export authorization should match what the user can see, especially for admin reports and account data.
- Sync vs. async: Small files can download immediately; large reports often need a queued export job and a later download link.
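The filename convention above is easy to make concrete in a small helper. This is a sketch; the parameter names and the `-v2` version suffix are illustrative, not from any particular library.

```javascript
// Build a filename like "orders-2026-03-11.csv", optionally with a schema
// version suffix, from the dataset name and an export date.
function buildExportFilename(dataset, extension, { date = new Date(), schemaVersion } = {}) {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  const version = schemaVersion ? `-v${schemaVersion}` : "";
  return `${dataset}-${stamp}${version}.${extension}`;
}

// buildExportFilename("orders", "csv", { date: new Date("2026-03-11") })
// => "orders-2026-03-11.csv"
```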
Which Formats Are Worth Offering?
Most products do not need every possible format. Ship the formats that map to real user workflows instead of adding options just because the format exists.
- JSON: Best when users need the original structure, nested objects, and machine-readable data for APIs or automation.
- CSV: Best for spreadsheets and simple imports. Use it when the export can be represented as rows and columns.
- XLSX: Prefer this over CSV when users need multiple sheets, typed cells, formulas, frozen headers, or better spreadsheet fidelity.
- PDF: Good for printing, approvals, and fixed-layout sharing. It is a presentation export, not the best raw data format.
- NDJSON: Useful when exporting logs or event streams because each line is an independent JSON object and can be processed incrementally.
XML is still valid for enterprise integrations, but it should usually be driven by a specific partner or legacy requirement rather than offered by default.
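If you do offer several formats, it helps to keep the mapping from format to MIME type and extension in one place instead of scattering strings across components. A minimal sketch; the registry shape is an assumption of this article, not a library API, and NDJSON has no registered MIME type, so the common unofficial application/x-ndjson is a judgment call.

```javascript
// One registry entry per offered format: MIME type for the download/response
// headers, extension for the generated filename.
const exportFormats = {
  json: { mime: "application/json;charset=utf-8", extension: "json" },
  ndjson: { mime: "application/x-ndjson", extension: "ndjson" },
  csv: { mime: "text/csv;charset=utf-8", extension: "csv" },
  xlsx: {
    mime: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    extension: "xlsx",
  },
  pdf: { mime: "application/pdf", extension: "pdf" },
};
```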
Choose Client-Side or Server-Side Export
The correct implementation path depends more on dataset size and format complexity than on personal preference.
Client-side export works well when:
- The user is exporting data that is already loaded in the browser.
- The file is small enough to fit comfortably in memory.
- You are generating JSON, CSV, or a small text-based report.
- You want the feature to keep working in an offline-capable web app.
Server-side export is safer when:
- The export requires database access beyond what is visible on screen.
- The file may be large enough to stress the browser.
- You need reliable PDF rendering or full XLSX generation.
- The export should run in the background and finish later.
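The criteria above can be collapsed into a small routing helper. The 50,000-row threshold and the server-only format list here are illustrative defaults, not rules; tune them to your own data shapes.

```javascript
// Decide where an export should run, based on the criteria in the lists
// above: format complexity, whether the data is already in the browser,
// and an (arbitrary) size threshold.
function chooseExportPath({ format, rowCount, dataAlreadyLoaded }) {
  const serverOnlyFormats = new Set(["xlsx", "pdf"]); // need reliable rendering
  if (serverOnlyFormats.has(format)) return "server";
  if (!dataAlreadyLoaded) return "server"; // requires database access
  if (rowCount > 50_000) return "server"; // may stress browser memory
  return "client";
}
```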
Use One Download Helper in the Browser
Even if each format has its own serializer, the browser download step should usually be centralized in one helper. That keeps your UI code simple and makes format additions less error-prone.
Browser download helper:
function downloadBlob(filename, mimeType, contents) {
  const blob =
    contents instanceof Blob
      ? contents
      : new Blob([contents], { type: mimeType });
  const url = URL.createObjectURL(blob);
  const link = document.createElement("a");
  link.href = url;
  link.download = filename;
  link.rel = "noopener";
  document.body.appendChild(link); // some browsers only honor clicks on anchors in the DOM
  link.click();
  link.remove();
  queueMicrotask(() => URL.revokeObjectURL(url));
}
// Example:
// downloadBlob("users.json", "application/json;charset=utf-8", jsonText);
This is enough for most JSON and CSV downloads. Save format-specific complexity for the serializer, not the button component.
JSON Export Should Preserve Structure
JSON is the easiest format to get right because it matches the original data model. Offer pretty-printed JSON for humans and minified JSON when file size matters. If the export is a stream of independent records, consider newline-delimited JSON as a second option instead of forcing everything into one large array.
JSON export example:
function exportJson(filename, data, { pretty = true } = {}) {
  const json = JSON.stringify(data, null, pretty ? 2 : 0);
  downloadBlob(filename, "application/json;charset=utf-8", json);
}

function exportNdjson(filename, records) {
  const body = records.map((record) => JSON.stringify(record)).join("\n");
  downloadBlob(filename, "text/plain;charset=utf-8", body);
}
Flatten Nested Data Before CSV or XLSX
The hardest part of building export functionality for multiple formats is usually not file generation. It is deciding how nested JSON becomes a table. Pick one strategy and document it so users do not get surprised by missing or reshaped data.
- Dot-path columns: Convert nested properties to keys like user.email or address.city.
- Array stringification: Store arrays as JSON strings when preserving one row per record matters more than spreadsheet friendliness.
- Row expansion: Duplicate parent fields when each array item should become its own row.
- Multi-sheet XLSX: Put parent and child collections on separate sheets when users need both readability and relational detail.
Simple object flattener:
function flattenRecord(input, prefix = "", output = {}) {
  if (Array.isArray(input)) {
    output[prefix] = JSON.stringify(input);
    return output;
  }
  if (input && typeof input === "object") {
    for (const [key, value] of Object.entries(input)) {
      const nextKey = prefix ? `${prefix}.${key}` : key;
      flattenRecord(value, nextKey, output);
    }
    return output;
  }
  output[prefix] = input ?? "";
  return output;
}
// flattenRecord({ user: { email: "a@b.com" }, tags: ["red", "blue"] })
// => { "user.email": "a@b.com", "tags": "[\"red\",\"blue\"]" }
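The row-expansion strategy from the list above can be sketched in a few lines as well. The single arrayField parameter is an assumption of this sketch; real exports may need to expand several arrays or handle deeper nesting.

```javascript
// "Row expansion": each item of one array field becomes its own row, with
// the parent fields duplicated onto every row.
function expandRows(record, arrayField) {
  const { [arrayField]: items, ...parent } = record;
  if (!Array.isArray(items) || items.length === 0) return [record];
  return items.map((item) => ({ ...parent, [arrayField]: item }));
}

// expandRows({ id: 1, tags: ["red", "blue"] }, "tags")
// => [{ id: 1, tags: "red" }, { id: 1, tags: "blue" }]
```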
CSV Needs More Care Than It Seems
CSV is deceptively simple. In practice you need correct quoting, consistent line endings, predictable column order, and a decision about how to handle spreadsheet formula injection. If humans mainly open the file in Excel or Google Sheets, that safety decision matters as much as escaping commas.
Safer CSV serializer:
const spreadsheetFormulaPrefix = /^[=+\-@]/;

function toSpreadsheetCell(value, mode = "spreadsheet") {
  const text = value == null ? "" : String(value);
  if (mode === "spreadsheet" && spreadsheetFormulaPrefix.test(text)) {
    return `\t${text}`;
  }
  return text;
}

function escapeCsvCell(value, mode) {
  const text = toSpreadsheetCell(value, mode).replace(/"/g, '""');
  return /[",\r\n]/.test(text) ? `"${text}"` : text;
}

function serializeCsv(rows, mode = "spreadsheet") {
  const headers = [...new Set(rows.flatMap((row) => Object.keys(row)))];
  const lines = [
    headers.map((header) => escapeCsvCell(header, "data")).join(","),
    ...rows.map((row) =>
      headers.map((header) => escapeCsvCell(row[header], mode)).join(",")
    ),
  ];
  return lines.join("\r\n");
}
Use text/csv; charset=utf-8 when serving CSV. If spreadsheet compatibility matters, many teams also prepend a UTF-8 BOM before download so Excel opens non-ASCII text cleanly.
There is no universal CSV injection mitigation that fits every product. Prefixing dangerous cells with a tab or similar character can protect spreadsheet users, but it also changes the exported value. If the file is meant for machine import, validation or explicit rejection of dangerous leading characters may be the better choice.
Stream Large Exports on the Server
Once exports get big, generating the whole file in memory becomes wasteful. A server route can stream rows as they are produced and still return a normal file download to the browser.
Next.js route example for streaming CSV:
// app/api/export-users/route.ts
const spreadsheetFormulaPrefix = /^[=+\-@]/;

function escapeCsvCell(value) {
  const text = value == null ? "" : String(value);
  const safe = spreadsheetFormulaPrefix.test(text) ? `\t${text}` : text;
  const escaped = safe.replace(/"/g, '""');
  return /[",\r\n]/.test(escaped) ? `"${escaped}"` : escaped;
}

const encodeRow = (values) =>
  values.map((value) => escapeCsvCell(value)).join(",") + "\r\n";

// streamUsersFromDatabase() is a placeholder for an async iterator over user rows.
export async function GET() {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      controller.enqueue(encoder.encode(encodeRow(["id", "email", "plan"])));
      for await (const user of streamUsersFromDatabase()) {
        controller.enqueue(
          encoder.encode(encodeRow([user.id, user.email, user.plan]))
        );
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: {
      "Content-Type": "text/csv; charset=utf-8",
      "Content-Disposition": 'attachment; filename="users-export.csv"',
      "Cache-Control": "no-store",
    },
  });
}
This pattern is a better fit than client-side export when the data is large, permissioned, or fetched a page at a time from the database.
Use the File System Access API as Progressive Enhancement
Modern Chromium-based browsers can let users choose an exact save location and overwrite an existing file with the File System Access API. That is useful for power-user workflows, but it is not a universal replacement for regular downloads because browser support is still limited.
Progressive save flow:
async function saveExport({ filename, mimeType, contents }) {
  if (!("showSaveFilePicker" in window)) {
    downloadBlob(filename, mimeType, contents);
    return;
  }
  const extension = filename.split(".").pop();
  // The accept map expects a bare MIME type, so strip parameters like charset.
  const baseMimeType = mimeType.split(";")[0];
  try {
    const handle = await window.showSaveFilePicker({
      suggestedName: filename,
      types: [
        {
          description: "Export file",
          accept: { [baseMimeType]: [`.${extension}`] },
        },
      ],
    });
    const writable = await handle.createWritable();
    await writable.write(contents);
    await writable.close();
  } catch (error) {
    // Closing the picker rejects with an AbortError, which is not a failure.
    if (error.name !== "AbortError") throw error;
  }
}
Call this from a direct user action such as a button click, use it only on HTTPS pages, and keep the regular download fallback for browsers that do not support the API.
PDF and XLSX Are Usually Server Concerns
Teams often underestimate how different PDF and XLSX are from JSON and CSV. If the output needs layout, typography, multiple sheets, formulas, or exact print rendering, the server usually gives you more predictable results than browser-only generation.
- PDF: Treat it like a report template. Separate the printable layout from the JSON source data so you can change the design without breaking the export contract.
- XLSX: Use it when you need typed dates, number formats, frozen header rows, formulas, or multiple related tables in one file.
- Very large exports: Queue the job, store the file temporarily, and let the user download it later instead of holding an HTTP request open for minutes.
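The queued-export idea in the last bullet can be sketched with an in-memory job store. The names createExportJob and getExportJob are illustrative; a production system would use a real worker queue, retries, and durable file storage rather than a Map in process memory.

```javascript
// Minimal queued-export sketch: create a job, run the work in the
// background, and let the client poll for the eventual download URL.
const jobs = new Map();
let nextJobId = 1;

function createExportJob(runExport) {
  const id = String(nextJobId++);
  jobs.set(id, { status: "pending", downloadUrl: null });
  // Fire and forget; a real system would hand this to a worker queue.
  Promise.resolve()
    .then(runExport)
    .then((downloadUrl) => jobs.set(id, { status: "done", downloadUrl }))
    .catch(() => jobs.set(id, { status: "failed", downloadUrl: null }));
  return id;
}

function getExportJob(id) {
  return jobs.get(id) ?? { status: "unknown", downloadUrl: null };
}
```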
Practical Checklist
- Keep one canonical export model and write separate serializers for each format.
- Document how nested JSON becomes rows, columns, or multiple sheets.
- Use stable column order so repeated exports compare cleanly.
- Set correct response headers, especially MIME type and Content-Disposition.
- Decide explicitly how you will handle CSV formula injection.
- Prefer server-side export for large files, PDF generation, and rich XLSX output.
- Offer a normal download fallback even if you add a File System Access save flow.
- Test with non-ASCII text, embedded quotes, commas, line breaks, and large arrays.
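The last checklist item lends itself to a small fixture table. The expected strings below assume RFC 4180-style quoting (quotes doubled, cells containing commas, quotes, or line breaks wrapped in quotes), mirrored here in a minimal escape function without the formula-injection step discussed earlier.

```javascript
// Fixture values that exercise embedded quotes, commas, line breaks, and
// non-ASCII text against a basic RFC 4180-style cell escaper.
const trickyValues = [
  { input: 'He said "hi"', expected: '"He said ""hi"""' },
  { input: "a,b", expected: '"a,b"' },
  { input: "line1\nline2", expected: '"line1\nline2"' },
  { input: "naïve café 日本語", expected: "naïve café 日本語" },
];

function escapeCsvCellBasic(value) {
  const text = String(value).replace(/"/g, '""');
  return /[",\r\n]/.test(text) ? `"${text}"` : text;
}
```

Run every fixture through your real serializer in CI so escaping regressions surface before users hit them in Excel.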
Conclusion
Building export functionality for multiple formats goes well when you treat it as a pipeline: canonical data first, a deliberate transformation for each format second, and the delivery mechanism last. JSON preserves the source of truth, CSV and XLSX serve spreadsheet workflows, PDF serves presentation, and server-side streaming keeps large exports reliable. That combination is far more useful than a long list of format buttons with vague behavior behind them.