Creating Test Scripts for JSON Formatter CLI Tools
Command Line Interface (CLI) tools are powerful utilities for developers, enabling automation and integration into workflows. A common task is formatting data, and JSON formatters are particularly useful for pretty-printing, minifying, or standardizing JSON output. Ensuring these tools work correctly and consistently across various inputs is crucial. This article explores how to create effective test scripts for your JSON formatter CLI tool.
Why Test Your CLI Formatter?
Testing CLI tools, especially formatters, is essential for several reasons:
- Correctness: Verify that the tool produces valid JSON output according to the specification, and that formatting rules (indentation, spacing, key order, etc.) are applied as expected.
- Consistency: Ensure the same input consistently produces the same output.
- Robustness: Check how the tool handles edge cases and invalid inputs without crashing or producing incorrect results.
- Regression Prevention: Catch bugs introduced when adding new features or refactoring code.
- Documentation: Test cases can serve as executable examples of how the tool is supposed to behave.
Types of Test Cases
A comprehensive test suite should cover various scenarios. Here are some categories of test cases:
Happy Path / Basic Functionality
These tests cover typical, valid JSON inputs and verify that the tool produces the expected formatted output.
- Simple objects and arrays.
- Nested structures.
- Various data types (strings, numbers, booleans, null).
- JSON with comments (if your tool supports/ignores them).
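As a concrete starting point, a minimal fixture pair for the first case might look like the sketch below. The file names and the 2-space indentation in the expected output are assumptions; adjust them to your formatter's actual defaults.

# Create a simple input fixture and the output we expect the formatter to produce for it.
mkdir -p tests/fixtures/inputs tests/fixtures/expected

cat > tests/fixtures/inputs/simple.json <<'EOF'
{"name":"Ada","tags":["a","b"],"active":true}
EOF

cat > tests/fixtures/expected/simple.formatted.json <<'EOF'
{
  "name": "Ada",
  "tags": [
    "a",
    "b"
  ],
  "active": true
}
EOF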
Edge Cases
Test inputs that might reveal subtle bugs or unexpected behavior.
- Empty objects {} and arrays [].
- JSON with large numbers or very long strings.
- Deeply nested JSON structures.
- JSON containing special characters or Unicode.
- JSON with unusual whitespace or line endings.
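Deeply nested input is tedious to write by hand, so it can be generated. The sketch below builds a nested-array fixture; the path and the depth of 1000 are arbitrary choices, so pick a depth that exercises (without necessarily breaking) your parser.

# Generate [[[ ... null ... ]]] with 1000 levels of nesting as a stress-test fixture.
depth=1000
{
  printf '%*s' "$depth" '' | tr ' ' '['
  printf 'null'
  printf '%*s' "$depth" '' | tr ' ' ']'
} > tests/fixtures/inputs/deeply_nested.json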
Invalid Input / Error Handling
Ensure the formatter correctly identifies invalid JSON and exits with a non-zero status code, ideally providing a helpful error message.
- Missing commas, colons, brackets, braces, or quotes.
- Trailing commas (invalid per the JSON specification, though some lenient parsers accept them).
- Incorrectly escaped characters in strings.
- Non-JSON content.
- Empty input file or stdin.
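Before wiring these cases into the suite, a quick manual check of the error path is easy to do from the shell. This assumes the tool reads JSON from stdin and is built at ./dist/formatter, as in the scripts later in this article.

# A trailing comma should be rejected: expect an error message and a non-zero exit code.
echo '{"a": 1,}' | ./dist/formatter
echo "exit code: $?"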
Options Testing
If your formatter supports command-line options (e.g., indentation level, sorting keys, compact output), test each option and combinations.
- --indent 2, --indent 4, --indent '\t'
- --compact / --minify
- --sort-keys
- Combinations like --sort-keys --indent 2
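A few example invocations, using the illustrative flag names above (substitute your tool's real flags and paths):

./dist/formatter --indent 4 < tests/fixtures/inputs/nested.json           # 4-space indentation
./dist/formatter --compact < tests/fixtures/inputs/nested.json            # single-line output
./dist/formatter --sort-keys --indent 2 < tests/fixtures/inputs/nested.json   # combined options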
Setting up the Test Environment
A common and effective approach for testing CLI tools is using "snapshot" or "golden file" testing. You create pairs of input and expected output files.
- Input Files: Create files containing various JSON inputs (valid, invalid, edge cases, etc.).
- Expected Output Files: For each valid input file and set of options, create a corresponding file containing the *exact* output you expect the CLI tool to produce. For invalid inputs, document the expected error message or exit code.
- Test Script: Write a script (e.g., in Bash, Python, Node.js) that automates running your CLI tool with different inputs and options, captures its output (stdout and stderr), and compares it against the expected output files.
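One way to bootstrap an expected output file, assuming the tool already produces correct output for a given case, is to capture a run of the formatter itself and review the result by hand before committing it:

# Capture a manually verified run as the golden file (paths match the layout shown below).
./dist/formatter < tests/fixtures/inputs/simple.json > tests/fixtures/expected/simple.formatted.json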
Example Test Directory Structure:
tests/
  fixtures/
    inputs/
      simple.json
      nested.json
      empty_object.json
      invalid_syntax.json
      large_file.json
    expected/
      simple.formatted.json
      nested.formatted.json
      nested.compact.json
      empty_object.formatted.json
      invalid_syntax.stderr
  test_formatter.sh   # Or test_formatter.js / test_formatter.py
Writing Test Scripts (Shell Example)
Shell scripts (like Bash) are straightforward for running CLI commands and comparing file outputs using standard utilities like diff.
Simple Bash Test Script (test_formatter.sh):
#!/usr/bin/env bash
# Assuming your formatter CLI is built and available as './dist/formatter'
# Adjust the command based on your build process and tool name
FORMATTER="./dist/formatter"
INPUT_DIR="tests/fixtures/inputs"
EXPECTED_DIR="tests/fixtures/expected"
TEMP_DIR="tests/temp" # Temporary directory for actual outputs

mkdir -p "$TEMP_DIR"

EXIT_CODE=0 # Track overall test status

run_test() {
  local input_file="$1"
  local expected_file="$2"
  local options="$3"
  local test_name="$4"
  local expect_error="$5" # "true" if an error is expected

  echo "Running test: $test_name"

  local actual_output="$TEMP_DIR/$(basename "${expected_file:-${input_file/.json/.actual}}")"
  local actual_stderr="$TEMP_DIR/$(basename "${expected_file:-${input_file/.json/.actual}}").stderr"

  if [ "$expect_error" = "true" ]; then
    # Test case expecting an error
    # Run the command, capture stderr, and check exit code
    "$FORMATTER" $options < "$input_file" > /dev/null 2> "$actual_stderr"
    local command_exit_code=$?

    if [ $command_exit_code -eq 0 ]; then
      echo "  FAILURE: Expected error for '$input_file', but command exited with 0."
      EXIT_CODE=1
    else
      # Optionally compare stderr output if expected stderr file exists
      if [ -f "$expected_file" ]; then
        diff "$actual_stderr" "$expected_file" > /dev/null
        if [ $? -ne 0 ]; then
          echo "  FAILURE: Stderr output mismatch for '$input_file'."
          diff "$actual_stderr" "$expected_file"
          EXIT_CODE=1
        else
          echo "  SUCCESS (Error Handled)."
        fi
      else
        echo "  SUCCESS (Error Handled)." # No specific stderr comparison file
      fi
    fi
  else
    # Standard test case expecting successful output
    # Run the command, capture stdout
    "$FORMATTER" $options < "$input_file" > "$actual_output" 2> /dev/null
    local command_exit_code=$?

    if [ $command_exit_code -ne 0 ]; then
      echo "  FAILURE: Command exited with non-zero code $command_exit_code for '$input_file'."
      EXIT_CODE=1
    else
      # Compare actual output with expected output
      diff "$actual_output" "$expected_file" > /dev/null
      if [ $? -ne 0 ]; then
        echo "  FAILURE: Output mismatch for '$input_file'."
        diff "$actual_output" "$expected_file"
        EXIT_CODE=1
      else
        echo "  SUCCESS."
      fi
    fi
  fi
}

# --- Run Tests ---

# Happy Path
run_test "$INPUT_DIR/simple.json" "$EXPECTED_DIR/simple.formatted.json" "" "Simple Formatting"
run_test "$INPUT_DIR/nested.json" "$EXPECTED_DIR/nested.formatted.json" "" "Nested Formatting"

# Edge Cases
run_test "$INPUT_DIR/empty_object.json" "$EXPECTED_DIR/empty_object.formatted.json" "" "Empty Object"

# Invalid Input
run_test "$INPUT_DIR/invalid_syntax.json" "$EXPECTED_DIR/invalid_syntax.stderr" "" "Invalid Syntax" "true"

# Options Testing
run_test "$INPUT_DIR/nested.json" "$EXPECTED_DIR/nested.compact.json" "--compact" "Nested Compact"

# Add more tests for different options and inputs

# --- Summary ---
echo ""
if [ $EXIT_CODE -eq 0 ]; then
  echo "All tests passed!"
else
  echo "Some tests failed!"
fi

# Clean up temporary files
# rm -rf "$TEMP_DIR" # Keep temp files for debugging if tests fail

exit $EXIT_CODE
This script defines a helper function run_test to encapsulate the logic for running the formatter, capturing output, and comparing. It handles both successful runs (checking stdout) and expected failures (checking the exit code and optionally stderr). The diff command is a standard Unix utility that shows line-by-line differences between files.
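Running the suite is then a matter of making the script executable and invoking it from the project root, so the relative paths assumed above resolve:

chmod +x tests/test_formatter.sh
./tests/test_formatter.sh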
Writing Test Scripts (Node.js Example)
If your CLI tool is built with Node.js, you might prefer writing tests in Node.js itself. This allows for more programmatic control and potentially more detailed comparisons.
Simple Node.js Test Script (test_formatter.js):
const { spawn } = require('child_process');
const { readFile } = require('fs/promises');
const path = require('path');

const FORMATTER = path.join(__dirname, '../dist/formatter'); // Adjust path
const INPUT_DIR = path.join(__dirname, './fixtures/inputs');
const EXPECTED_DIR = path.join(__dirname, './fixtures/expected');

let testsFailed = 0;

async function runTest(inputFile, expectedFile, options = [], testName, expectError = false) {
  console.log('Running test: ' + testName);

  const inputPath = path.join(INPUT_DIR, inputFile);
  const expectedPath = path.join(EXPECTED_DIR, expectedFile);

  try {
    // Read input file content
    const inputContent = await readFile(inputPath, 'utf-8');

    // Spawn the formatter process
    const formatterProcess = spawn(FORMATTER, options, { stdio: ['pipe', 'pipe', 'pipe'] });

    let stdout = '';
    let stderr = '';

    formatterProcess.stdout.on('data', (data) => { stdout += data.toString(); });
    formatterProcess.stderr.on('data', (data) => { stderr += data.toString(); });

    // Write input content to stdin of the formatter process
    formatterProcess.stdin.write(inputContent);
    formatterProcess.stdin.end();

    const exitCode = await new Promise((resolve, reject) => {
      formatterProcess.on('error', reject);
      formatterProcess.on('close', resolve);
    });

    if (expectError) {
      // Test case expecting an error
      if (exitCode === 0) {
        console.error('  FAILURE: Expected error for \'' + inputFile + '\', but command exited with 0.');
        testsFailed++;
      } else {
        // Optionally compare stderr output if expected stderr file exists
        if (await fileExists(expectedPath)) {
          const expectedStderr = await readFile(expectedPath, 'utf-8');
          // Basic comparison, might need trimming or more robust diffing
          if (stderr.trim() !== expectedStderr.trim()) {
            console.error('  FAILURE: Stderr output mismatch for \'' + inputFile + '\'.');
            console.log('--- Expected Stderr ---');
            console.log(expectedStderr);
            console.log('--- Actual Stderr ---');
            console.log(stderr);
            console.log('---------------------');
            testsFailed++;
          } else {
            console.log('  SUCCESS (Error Handled).');
          }
        } else {
          console.log('  SUCCESS (Error Handled).'); // No specific stderr comparison file
        }
      }
    } else {
      // Standard test case expecting successful output
      if (exitCode !== 0) {
        console.error('  FAILURE: Command exited with non-zero code ' + exitCode + ' for \'' + inputFile + '\'.\nStderr:\n' + stderr);
        testsFailed++;
      } else {
        const expectedOutput = await readFile(expectedPath, 'utf-8');
        // Basic comparison, might need trimming or more robust diffing
        if (stdout.trim() !== expectedOutput.trim()) {
          console.error('  FAILURE: Output mismatch for \'' + inputFile + '\'.');
          console.log('--- Expected Output ---');
          console.log(expectedOutput);
          console.log('--- Actual Output ---');
          console.log(stdout);
          console.log('---------------------');
          testsFailed++;
        } else {
          console.log('  SUCCESS.');
        }
      }
    }
  } catch (error) {
    console.error('  FAILURE: Error running test for \'' + inputFile + '\': ' + error.message);
    testsFailed++;
  }
}

// Helper to check if a file exists
async function fileExists(filePath) {
  try {
    await readFile(filePath);
    return true;
  } catch (e) {
    // Check for specific error code if needed (e.g., 'ENOENT')
    return false;
  }
}

// --- Run Tests ---
(async () => {
  await runTest('simple.json', 'simple.formatted.json', [], 'Simple Formatting');
  await runTest('nested.json', 'nested.formatted.json', [], 'Nested Formatting');
  await runTest('empty_object.json', 'empty_object.formatted.json', [], 'Empty Object');
  await runTest('invalid_syntax.json', 'invalid_syntax.stderr', [], 'Invalid Syntax', true);
  await runTest('nested.json', 'nested.compact.json', ['--compact'], 'Nested Compact');
  // Add more tests here...

  // --- Summary ---
  console.log('');
  if (testsFailed === 0) {
    console.log('All tests passed!');
  } else {
    console.error(testsFailed + ' test(s) failed!');
    process.exit(1); // Exit with non-zero code on failure
  }
})();
This Node.js script uses child_process.spawn to run the CLI tool and pipes input and output through stdio. It reads expected results from files and performs string comparisons; for more complex diffing, you might use a dedicated library. The fileExists helper simply attempts to read the file and treats any read error as "file not found", which is sufficient for deciding whether a stderr comparison file exists.
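The Node.js suite is run the same way, from the project root. The exit status is what a CI system would key on:

node tests/test_formatter.js
echo "suite exit code: $?"   # non-zero when any test failed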
Comparing Output
The core of these tests is comparing the actual output of your CLI tool with the predefined expected output.
- Exact String Match: For simple cases or compact output, a direct string comparison might suffice. However, be mindful of trailing newlines or whitespace differences.
- Line-by-Line Diff: Tools like diff are excellent for pretty-printed JSON, highlighting exactly where the actual output deviates from the expected.
- JSON Comparison Libraries: For more flexible comparisons (e.g., ignoring key order in pretty-printed output unless --sort-keys is used), you could parse both the actual and expected output JSON strings into data structures and compare the structures programmatically. Libraries exist for this purpose. However, for testing a *formatter* specifically, comparing the *formatted string* output is often the most direct way to verify the formatting rules themselves are applied correctly.
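If you do want a structure-level check in the shell-based suite, one sketch (assuming jq is installed) normalizes both files before diffing. Note that this deliberately ignores formatting, so it complements rather than replaces the string comparison:

# jq -S re-serializes both files with sorted keys and consistent indentation,
# so any remaining diff is a real data difference, not a formatting one.
diff <(jq -S . actual.json) <(jq -S . expected.json)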
For the golden file approach, storing the expected output files in version control (like Git) is crucial. When the formatter's output changes (intentionally due to updates or bug fixes), you'll need to review the differences and update the expected files.
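When an intentional formatting change lands, one way to refresh a golden file (assuming the paths used earlier) is to regenerate it from the new build and review the diff before committing, since the reviewed diff becomes the new "truth":

# Regenerate an expected file from the current build, then inspect the change.
./dist/formatter < tests/fixtures/inputs/nested.json > tests/fixtures/expected/nested.formatted.json
git diff tests/fixtures/expected/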
Running Tests
Integrate your test script into your project's development workflow:
- Run tests before committing changes.
- Include tests in your Continuous Integration (CI) pipeline to automatically check every push or pull request.
- Add a script command to your package.json (if using Node.js) for easy execution, e.g. "test": "./tests/test_formatter.sh" or "test": "node ./tests/test_formatter.js".
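For the Node.js variant, one way to wire this up from the shell (assuming npm 7.24+ for the npm pkg set subcommand; otherwise edit package.json by hand) is:

npm pkg set scripts.test="node ./tests/test_formatter.js"
npm test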
Next Steps
Start by creating tests for the basic functionality, then gradually add edge cases, error tests, and option tests. Maintaining the expected output files requires discipline, but it provides a robust safety net for your CLI tool's correctness. Consider using a testing framework if your test suite grows very large or complex, but for many CLI tools, a simple script with golden files is sufficient and easy to understand.