Secure Code Review for JSON Parsing Libraries
What A Secure Review Should Prove
A strong review is not just "does parsing fail cleanly?" It should prove that untrusted JSON is handled with predictable parser behavior, bounded resource usage, strict post-parse validation, and safe downstream use. That matters because current interoperability guidance still calls out edge cases many teams miss: duplicate object names are ambiguous, parsers may impose their own limits, and exact integers above 2^53 - 1 are not reliably portable across common JSON stacks.
- Reject or explicitly define ambiguous input such as duplicate keys and non-standard syntax.
- Bound cost before business logic with request-size, nesting-depth, and collection-size limits.
- Validate structure and allowed fields after parsing instead of trusting parser success.
- Review how parsed values reach logs, templates, queries, merges, and authorization logic.
The standards worth checking against are RFC 8259 for JSON itself, RFC 7493 (I-JSON) for safer interoperability rules, and OWASP Input Validation guidance for application-layer validation.
Fast Review Workflow
- Map every trust boundary where JSON enters the system: HTTP bodies, queues, cache blobs, config files, and third-party webhooks.
- Identify the exact parser and mode in use. Reviewers should flag permissive modes that allow comments, trailing commas, NaN, Infinity, or other non-standard extensions at a security boundary.
- Check pre-parse and parse-time limits. Request size, depth, key count, string length, and total element count should be bounded somewhere explicit.
- Inspect post-parse validation. The code should reject unknown fields, wrong types, and out-of-range values before any authorization, persistence, or templating step.
- Review maintenance signals for third-party libraries: recent releases, security advisories, supported versions, and tests for malformed inputs.
High-Risk Findings Reviewers Commonly Miss
1. Duplicate Keys And Other Ambiguous Input
RFC 8259 notes that duplicate member names make behavior unpredictable because implementations differ. In JavaScript, plain object materialization typically keeps the last value, which means a schema validator running after JSON.parse may never see the discarded key. If duplicate names could change meaning, privilege, routing, or signing behavior, block the change until the boundary rejects them.
- Ask whether the parser can reject duplicates before building a normal object.
- Make sure non-standard JSON features are off unless the protocol explicitly requires them.
- Document exact behavior when interoperability with other languages or services matters.
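The last-value behavior is easy to demonstrate with JavaScript's built-in JSON.parse, which silently discards earlier duplicate members before any validator can see them:

```typescript
// Demonstration: with JSON.parse, the last duplicate member wins, so a
// schema validator running after the parse never sees the discarded value.
const raw = '{"role": "viewer", "role": "admin"}';
const parsed = JSON.parse(raw) as { role: string };
console.log(parsed.role); // "admin" — the "viewer" value is silently dropped
```

Because the reviver callback also runs only on the already-materialized object, duplicate rejection has to come from a parser or pre-parse check that sees the raw text.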
2. Numbers, Unicode, And Cross-Language Drift
I-JSON recommends avoiding exact numeric values outside the IEEE 754 safe integer range. A secure review should verify whether identifiers, timestamps, counters, or monetary values can overflow or lose precision when parsed by JavaScript or mixed-language systems. The same review should check how invalid Unicode sequences are handled and whether the library accepts malformed escapes or surrogate pairs.
- Encode very large integers as strings if exact value matters across systems.
- Verify whether the parser is strict about invalid escape sequences and malformed Unicode.
- Review comparisons, hashing, and signatures that depend on exact textual or numeric representation.
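The precision loss above 2^53 - 1 is directly observable in JavaScript; the string-encoding workaround below assumes the consumer converts with BigInt:

```typescript
// Demonstration: integers above Number.MAX_SAFE_INTEGER (2^53 - 1) lose
// precision in JSON.parse, so exact identifiers should travel as strings.
const lost = JSON.parse('{"id": 9007199254740993}') as { id: number };
console.log(lost.id); // 9007199254740992 — off by one after parsing

// String encoding preserves the exact value across the boundary.
const exact = JSON.parse('{"id": "9007199254740993"}') as { id: string };
console.log(BigInt(exact.id).toString()); // "9007199254740993"
```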
3. Resource Exhaustion And Parser Cost
RFC 8259 explicitly allows implementations to limit input size, nesting depth, number range, and string contents. Reviewers should expect those limits to be intentional, not accidental. Large arrays, deep nesting, and gigantic strings can still turn safe parsing code into a denial-of-service issue.
- Enforce request-body limits before parsing when possible.
- Check depth and structure size limits if the parser does not provide them natively.
- Prefer streaming parsers for very large or unbounded inputs instead of loading everything at once.
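One way to bound nesting cost is a raw-text scan before JSON.parse ever allocates structure. This is a minimal sketch, and the depth limit of 20 is illustrative, not a recommendation:

```typescript
// Sketch: bound bracket depth on the raw text before parsing, so hostile
// nesting is rejected without building any object graph. Brackets inside
// string literals are skipped by tracking string state and escapes.
function roughDepthExceeds(raw: string, maxDepth: number): boolean {
  let depth = 0;
  let inString = false;
  for (let i = 0; i < raw.length; i++) {
    const ch = raw[i];
    if (inString) {
      if (ch === "\\") i++; // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') {
      inString = true;
    } else if (ch === "{" || ch === "[") {
      if (++depth > maxDepth) return true;
    } else if (ch === "}" || ch === "]") {
      depth--;
    }
  }
  return false;
}

console.log(roughDepthExceeds("[".repeat(100) + "]".repeat(100), 20)); // true
console.log(roughDepthExceeds('{"a": [1, 2, 3]}', 20)); // false
```

This check assumes the input is otherwise going to a strict parser; it is a cost bound, not a validator.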
4. Validation Gaps After Parsing
Parsing only proves the input is syntactically valid JSON. OWASP guidance still applies: validate as early as possible, and validate semantics, not just syntax. The review should confirm allowed fields, required fields, type constraints, length limits, formats, and business rules.
- Reject unexpected fields instead of silently ignoring them.
- Do not merge parsed objects into live config or defaults without key allowlists.
- Treat __proto__, constructor, and prototype as dangerous when recursively merging or cloning parsed objects.
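A minimal sketch of an allowlisted merge guard; the ALLOWED_KEYS list and config shape are hypothetical, and a real implementation would recurse with the same checks:

```typescript
// Sketch: copy only allowlisted keys and skip prototype-polluting names
// when merging parsed JSON into live config. ALLOWED_KEYS is illustrative.
const DANGEROUS_KEYS = new Set(["__proto__", "constructor", "prototype"]);
const ALLOWED_KEYS = new Set(["retries", "timeoutMs"]);

function safeMerge(
  target: Record<string, unknown>,
  source: Record<string, unknown>,
): Record<string, unknown> {
  for (const key of Object.keys(source)) {
    if (DANGEROUS_KEYS.has(key) || !ALLOWED_KEYS.has(key)) continue;
    target[key] = source[key];
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property, so it shows
// up in Object.keys and must be filtered before any bracket assignment.
const hostile = JSON.parse('{"__proto__": {"admin": true}, "retries": 5}');
const config = safeMerge({ retries: 3, timeoutMs: 1000 }, hostile);
console.log(config); // { retries: 5, timeoutMs: 1000 }
```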
5. Error Handling, Logging, And Observability
Parsing failures should be easy for operators to debug but unhelpful to attackers. The review should ensure clients receive generic parse errors while internal logs capture enough context to investigate malformed payloads, rate spikes, or recurring attack patterns.
- Return a generic 400 or 422 response for invalid JSON.
- Avoid reflecting raw payloads into logs when they may contain secrets or attacker-controlled content.
- Record limit breaches separately from ordinary syntax failures so abuse is visible.
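A sketch of the client/operator split described above; the helper name, log fields, and failure categories are illustrative assumptions, not a standard API:

```typescript
// Sketch: the client gets a generic error, while the internal log keeps a
// structured category so limit breaches are visible separately from
// ordinary syntax failures. Field names here are illustrative.
function handleParseFailure(
  kind: "syntax" | "limit",
  requestId: string,
): Response {
  // Structured internal log: category + correlation id, never the raw payload.
  console.error(JSON.stringify({ event: "json_parse_failure", kind, requestId }));
  // Generic client response: no parser internals, no reflected input.
  return Response.json({ error: "Invalid JSON" }, { status: 400 });
}
```

Splitting on a category field makes it cheap to alert on a spike of "limit" events, which usually signals probing rather than a buggy client.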
What To Inspect In The Parser Library Itself
If the review includes approving or upgrading a third-party parser, focus less on marketing and more on behavior, defaults, and maintenance discipline.
- Strictness defaults: Does the library default to standard JSON, or does it silently enable comments, trailing commas, or permissive number parsing?
- Duplicate-key policy: Is behavior documented, configurable, and test-covered?
- Limits: Are depth, token, string, object-member, and total-input limits available and documented?
- Unicode and number handling: Are malformed escapes rejected, and is numeric precision behavior clear for big integers and exponents?
- Security process: Check recent releases, advisories, issue response time, and whether malformed-input tests or fuzz cases are visible in the repository.
- Dependency surface: Fewer transitive dependencies usually means less supply-chain risk and a smaller review burden.
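Strictness defaults can be probed empirically rather than taken from documentation. This sketch runs a few non-standard inputs through the built-in JSON.parse, which rejects all of them; pointing the same probes at a candidate library quickly reveals permissive defaults:

```typescript
// Sketch: a quick strictness probe for any parse function. Each input is
// invalid per RFC 8259; a strict parser should reject every one.
const probes = [
  '{"a": 1,}',        // trailing comma
  "// comment\n{}",   // comment
  '{"a": NaN}',       // non-standard number token
  "'single'",         // single-quoted string
];

for (const probe of probes) {
  let accepted = true;
  try {
    JSON.parse(probe);
  } catch {
    accepted = false;
  }
  console.log(JSON.stringify(probe), accepted ? "ACCEPTED" : "rejected");
}
// JSON.parse prints "rejected" for all four; a permissive JSON5-style
// parser may accept some, which is worth flagging at a security boundary.
```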
Modern Next.js Review Example
In current Next.js applications, the practical review target is usually an App Router route handler such as app/api/example/route.ts. The example below keeps the review points explicit: raw-size check, parse failure handling, depth control, strict shape validation, and a note about duplicate keys and large integers.
```typescript
type CreateUserPayload = {
  id: string;
  email: string;
  role: "viewer" | "editor";
};

const MAX_BODY_BYTES = 64 * 1024;
const MAX_DEPTH = 20;

export async function POST(request: Request) {
  const raw = await request.text();
  if (new TextEncoder().encode(raw).length > MAX_BODY_BYTES) {
    return Response.json({ error: "Request body too large" }, { status: 413 });
  }

  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return Response.json({ error: "Invalid JSON" }, { status: 400 });
  }

  if (getDepth(parsed) > MAX_DEPTH) {
    return Response.json({ error: "JSON is too deeply nested" }, { status: 400 });
  }

  if (!isCreateUserPayload(parsed)) {
    return Response.json({ error: "JSON shape is not allowed" }, { status: 422 });
  }

  // If duplicate-key rejection or exact big-integer handling matters,
  // enforce that before materializing into plain JS objects.
  return Response.json({ ok: true });
}

function isCreateUserPayload(value: unknown): value is CreateUserPayload {
  if (!value || typeof value !== "object" || Array.isArray(value)) return false;
  const record = value as Record<string, unknown>;
  const keys = Object.keys(record);
  if (keys.length !== 3) return false;
  if (!keys.every((key) => ["id", "email", "role"].includes(key))) return false;
  return (
    typeof record.id === "string" &&
    typeof record.email === "string" &&
    (record.role === "viewer" || record.role === "editor")
  );
}

function getDepth(value: unknown, depth = 0): number {
  if (!value || typeof value !== "object") return depth;
  const entries = Array.isArray(value) ? value : Object.values(value);
  return entries.reduce((max, entry) => Math.max(max, getDepth(entry, depth + 1)), depth);
}
```

If you do not need raw body inspection, await request.json() is the standard Next.js path. For security reviews, the important part is not the helper you choose but whether size limits, strict validation, and ambiguous-input handling are explicit.
Block Approval If You See Any Of These
- The code accepts non-standard JSON at a public trust boundary without a documented reason.
- Duplicate keys can change meaning, but the boundary never rejects or normalizes them explicitly.
- No request-size or structure-size limits exist for untrusted JSON.
- Validation only checks a few fields while silently allowing unexpected properties.
- Parsed objects are recursively merged into defaults, config, or auth context without key allowlists.
- Large integer handling is unspecified even though IDs or financial values require exactness.
- Raw parser errors or attacker-controlled payloads are exposed to clients or unsafe logs.
- The selected parser has weak maintenance signals or unresolved security advisories.