Need help with your JSON?

Try our JSON Formatter tool to automatically identify and fix syntax errors in your JSON.

Using Typed Arrays for JSON Buffer Manipulation

Typed arrays help when JSON is already moving through your app as bytes rather than plain strings, such as a fetch() response, a file read, a WebSocket frame, a worker message, or a Node.js stream. In those cases, a Uint8Array is the right container for the UTF-8 bytes that represent the JSON text.

The important boundary is this: typed arrays are excellent for transport, framing, slicing, reusing buffers, and interop with binary APIs. They do not replace JSON parsing. In almost every real workflow, you still move between object -> JSON string -> UTF-8 bytes on the way out and UTF-8 bytes -> JSON string -> object on the way back in.
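That round trip can be sketched in a few lines; nothing here is specific to any one API:

```javascript
// Outbound: object -> JSON string -> UTF-8 bytes
const payload = { greeting: "hello", count: 2 };
const bytes = new TextEncoder().encode(JSON.stringify(payload));

// ... the bytes travel over a socket, worker message, file, etc. ...

// Inbound: UTF-8 bytes -> JSON string -> object
const roundTripped = JSON.parse(new TextDecoder().decode(bytes));

console.log(roundTripped.greeting); // "hello"
```

The typed array only ever holds the encoded text; the parsing step is unchanged.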

When Typed Arrays Are Actually Useful for JSON

  • JSON arrives from an API or socket as raw bytes and you need to decode it yourself.
  • You are packaging JSON next to binary data, such as a length prefix, checksum, or custom protocol header.
  • You want to reuse buffers and reduce extra allocations in tight loops.
  • You need the same payload to work across browser APIs, Web Workers, WebAssembly boundaries, and Node.js.

If you only need to inspect, format, or parse JSON already held in a JavaScript string, typed arrays are usually unnecessary overhead. In that case, JSON.parse(), JSON.stringify(), and a formatter are the simpler tools.

JSON on the Wire Is UTF-8 Bytes

JSON is text, but network and file APIs move it around as bytes. The current JSON standard, RFC 8259, says that JSON exchanged between systems outside a closed ecosystem must be encoded as UTF-8. It also says senders must not prepend a byte order mark (BOM) to network JSON.

That matters because JavaScript string length is not the same thing as UTF-8 byte length. For JSON containing non-ASCII characters, the byte count can be larger than json.length.

const json = JSON.stringify({ city: "Minsk", currency: "€" });
const encoder = new TextEncoder();
const bytes = encoder.encode(json);

console.log(json.length);  // UTF-16 code units, not UTF-8 bytes
console.log(bytes.length); // Actual byte size on the wire

The WHATWG Encoding Standard defines TextEncoder as UTF-8 only, which matches how JSON should be transmitted.

Encode JSON into a Uint8Array

The default outbound path is straightforward: stringify first, then encode the string into a byte buffer.

const payload = {
  id: 123,
  name: "Test Item",
  tags: ["json", "typed-array"]
};

const jsonString = JSON.stringify(payload);
const encoder = new TextEncoder();
const jsonBytes = encoder.encode(jsonString);

console.log(jsonBytes instanceof Uint8Array); // true
console.log(jsonBytes.byteLength);            // number of UTF-8 bytes

Use this when an API expects bytes, for example WebSocket.send(), postMessage(), a file API, or a Node.js socket or stream.

Reuse a Destination Buffer with encodeInto()

When you already have a reusable output buffer, TextEncoder.encodeInto() can avoid an extra allocation. The Encoding Standard defines it to return how many source code units were read and how many bytes were written.

const payload = { ok: true, message: "Hello €" };
const jsonString = JSON.stringify(payload);
const target = new Uint8Array(1024);

const encoder = new TextEncoder();
const { read, written } = encoder.encodeInto(jsonString, target);

if (read !== jsonString.length) {
  throw new Error("Target buffer is too small");
}

const exactBytes = target.subarray(0, written);
console.log(exactBytes.byteLength);

This is most useful in high-throughput or allocation-sensitive code. The detail to remember is that written counts UTF-8 bytes, while read counts UTF-16 code units of the source string.
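If throwing on a too-small buffer is not acceptable, one option is to retry with a larger buffer. The helper below is a sketch of ours, not a standard API; its sizing bound works because UTF-8 never needs more than 3 bytes per UTF-16 code unit of the source string:

```javascript
const encoder = new TextEncoder();

// Encode jsonString into target, growing the buffer if it is too small.
function encodeJsonInto(jsonString, target) {
  let { read, written } = encoder.encodeInto(jsonString, target);
  if (read < jsonString.length) {
    // 3 bytes per code unit is always enough for one full pass
    target = new Uint8Array(jsonString.length * 3);
    ({ read, written } = encoder.encodeInto(jsonString, target));
  }
  return target.subarray(0, written);
}

const small = new Uint8Array(4); // deliberately too small
const bytes = encodeJsonInto(JSON.stringify({ msg: "Hello €" }), small);
console.log(new TextDecoder().decode(bytes)); // {"msg":"Hello €"}
```

In a hot loop you would keep the grown buffer around and reuse it for subsequent messages.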

Decode Bytes Back into JSON Safely

The reverse path is decode first, parse second. If the bytes are supposed to be valid UTF-8 JSON, it is often better to fail loudly on bad input instead of silently replacing broken byte sequences.

const decoder = new TextDecoder("utf-8", { fatal: true });

const jsonString = decoder.decode(jsonBytes);
const payload = JSON.parse(jsonString);

console.log(payload);

With fatal: true, invalid UTF-8 throws a TypeError during decoding. Without it, malformed sequences are silently replaced with the Unicode replacement character (U+FFFD), which can hide corruption until later.
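To see the difference, feed both decoders a deliberately truncated multi-byte sequence (0xE2 0x82 is the start of "€" with its final byte missing; the bytes are made up for illustration):

```javascript
const badBytes = new Uint8Array([0x22, 0xE2, 0x82]); // '"' + truncated "€"

// Lenient (default): corruption becomes U+FFFD and surfaces later, if at all
const lenient = new TextDecoder("utf-8").decode(badBytes);
console.log(lenient.includes("\uFFFD")); // true

// Fatal: corruption surfaces immediately as a TypeError
try {
  new TextDecoder("utf-8", { fatal: true }).decode(badBytes);
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```

For JSON payloads, failing at the decode step gives a far clearer error than a mysterious JSON.parse() failure two layers up.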

Buffer Manipulation That Actually Makes Sense

The strongest use case for typed arrays is usually not "editing JSON bytes directly". It is manipulating the surrounding buffer layout: prepending a header, extracting a payload, concatenating chunks, or copying the JSON bytes into a larger binary message.

const payload = { type: "event", ok: true };
const jsonBytes = new TextEncoder().encode(JSON.stringify(payload));

// 4-byte big-endian length prefix + JSON payload
const packet = new Uint8Array(4 + jsonBytes.length);
const header = new DataView(packet.buffer, packet.byteOffset, packet.byteLength);

header.setUint32(0, jsonBytes.length, false);
packet.set(jsonBytes, 4);

// Read it back
const length = header.getUint32(0, false);
const body = packet.subarray(4, 4 + length);
const decoded = JSON.parse(new TextDecoder().decode(body));

This kind of framing is exactly where Uint8Array shines: the JSON stays standard, while the surrounding protocol remains binary and efficient.
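The same framing extends naturally to several messages in one buffer. Here is a sketch of walking consecutive length-prefixed packets (the framePacket helper is ours, not a standard API):

```javascript
// Build one buffer containing two length-prefixed JSON messages
function framePacket(obj) {
  const body = new TextEncoder().encode(JSON.stringify(obj));
  const packet = new Uint8Array(4 + body.length);
  new DataView(packet.buffer).setUint32(0, body.length, false);
  packet.set(body, 4);
  return packet;
}

const a = framePacket({ seq: 1 });
const b = framePacket({ seq: 2 });
const stream = new Uint8Array(a.length + b.length);
stream.set(a, 0);
stream.set(b, a.length);

// Walk the buffer packet by packet
const view = new DataView(stream.buffer, stream.byteOffset, stream.byteLength);
const decoder = new TextDecoder();
const messages = [];
let offset = 0;
while (offset < stream.length) {
  const len = view.getUint32(offset, false);
  const body = stream.subarray(offset + 4, offset + 4 + len);
  messages.push(JSON.parse(decoder.decode(body)));
  offset += 4 + len;
}

console.log(messages.map(m => m.seq).join(",")); // "1,2"
```

Because subarray() creates views rather than copies, walking a large buffer this way allocates almost nothing beyond the parsed objects themselves.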

Handling Chunked Input Without Corrupting Characters

If JSON arrives in multiple chunks, decode incrementally so multi-byte UTF-8 characters are preserved across chunk boundaries. Only call JSON.parse() once you have the complete JSON text.

const decoder = new TextDecoder("utf-8", { fatal: true });
let jsonText = "";

for (const chunk of incomingChunks) {
  jsonText += decoder.decode(chunk, { stream: true });
}

jsonText += decoder.decode(); // flush the final partial code point
const payload = JSON.parse(jsonText);

This streaming decode pattern is useful for chunked fetch bodies, sockets, and custom transports. If your stream is newline-delimited JSON rather than one complete JSON document, split on newlines and parse record by record instead.
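For the newline-delimited case, a minimal sketch looks like this (assuming one complete JSON document per line; the simulated chunks deliberately split a record across a boundary):

```javascript
const decoder = new TextDecoder("utf-8", { fatal: true });
const records = [];
let pending = ""; // text carried over until its terminating newline arrives

// Simulated NDJSON chunks with a record split across a chunk boundary
const chunks = [
  new TextEncoder().encode('{"n":1}\n{"n"'),
  new TextEncoder().encode(':2}\n'),
];

for (const chunk of chunks) {
  pending += decoder.decode(chunk, { stream: true });
  const lines = pending.split("\n");
  pending = lines.pop(); // the last element is an incomplete line (or "")
  for (const line of lines) {
    if (line.trim() !== "") records.push(JSON.parse(line));
  }
}

console.log(records.map(r => r.n).join(",")); // "1,2"
```

The streaming decoder handles split multi-byte characters; the pending variable handles split records. Both kinds of boundary damage are invisible to the final JSON.parse() calls.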

Common Mistakes and Caveats

  • In-place edits are fragile: changing a value from 1 to 1000 or from "a" to "longer text" changes the byte length and shifts the rest of the document. In most cases, it is safer to parse, modify the object, and re-serialize.
  • Byte length and string length are different: if buffer size matters, measure the encoded bytes, not string.length.
  • Typed arrays do not parse JSON: access to bytes does not remove the need for JSON.parse(). Byte-level parsing of full JSON grammar is complex and rarely worth doing by hand.
  • Node.js has one sharp edge: in Node.js, the Buffer class extends Uint8Array, but Buffer.slice() returns a view of the same memory (and is deprecated in current Node.js), while ordinary typed array slice() returns a copy. Prefer subarray() when you want consistent cross-runtime behavior.
  • Manual decoding is optional: if you simply want parsed JSON from a normal HTTP response, response.json() is usually clearer than response.arrayBuffer() plus manual decoding.
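The slice() difference is easy to demonstrate (this snippet requires Node.js, where Buffer is available):

```javascript
const bytes = new TextEncoder().encode('{"a":1}');

// Plain typed array: slice() copies, subarray() shares memory
const copy = bytes.slice(0, 3);
const view = bytes.subarray(0, 3);
bytes[0] = 0x5B; // overwrite '{' with '['
console.log(copy[0] === 0x7B); // true — the copy still sees '{'
console.log(view[0] === 0x5B); // true — the view sees the change

// Node.js Buffer: slice() is a view too, unlike Uint8Array.slice()
const buf = Buffer.from([1, 2, 3]);
const bufSlice = buf.slice(0, 2);
buf[0] = 9;
console.log(bufSlice[0]); // 9 — shared memory
```

If code written against Uint8Array semantics receives a Buffer, this mismatch can silently corrupt or preserve data you did not expect, which is why subarray() is the safer habit.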

Browser and Node.js Interoperability Notes

The browser-friendly type for JSON bytes is Uint8Array. In Node.js, Buffer is a subclass of Uint8Array, and the current Node.js docs explicitly note that many APIs accept plain Uint8Array instances wherever Buffer is supported. That means you can often keep one byte-oriented code path across environments.

When you need Node-specific helpers, you can still convert freely between the two. The main reason to stay with Uint8Array in shared code is predictability: it matches the standard web APIs and avoids accidental reliance on Buffer-only methods.
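Conversion in either direction is cheap. A sketch of both the zero-copy and the copying options in Node:

```javascript
const jsonBytes = new TextEncoder().encode('{"ok":true}');

// Zero-copy: wrap the same memory in a Buffer
const asBuffer = Buffer.from(
  jsonBytes.buffer,
  jsonBytes.byteOffset,
  jsonBytes.byteLength
);

// Copy: Buffer.from(typedArray) duplicates the bytes
const copied = Buffer.from(jsonBytes);

jsonBytes[0] = 0x5B; // overwrite '{' with '['
console.log(asBuffer[0] === 0x5B); // true — shares memory
console.log(copied[0] === 0x7B);   // true — independent copy

// Going the other way needs no conversion at all: a Buffer IS a Uint8Array
console.log(copied instanceof Uint8Array); // true
```

In shared code, the zero-copy wrap is only worth it when the extra allocation actually shows up in profiles; the copy is the less surprising default.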

Conclusion

Using typed arrays for JSON buffer manipulation makes sense when you are solving a byte-level problem, not a formatting problem. Treat JSON as UTF-8, use TextEncoder and TextDecoder for the boundary between text and bytes, reserve direct buffer work for framing and transport, and prefer parsing, editing, and re-serializing over risky in-place mutations. That gives you the performance and interoperability benefits of buffers without turning simple JSON handling into brittle low-level code.
