Node.js Performance Diagnostics
Why Performance Matters
Node.js offers various tools and techniques for diagnosing performance issues.
This guide covers built-in tools and popular third-party solutions for comprehensive performance analysis.
Performance Tip: Always measure before optimizing.
Use the techniques in this guide to identify actual bottlenecks rather than guessing where performance issues might be.
Understanding Node.js Performance
Performance in Node.js applications can be affected by several factors:
- CPU-intensive operations that block the event loop
- Memory leaks and excessive garbage collection
- I/O bottlenecks (database queries, file operations, network requests)
- Inefficient code and algorithms
- Event loop congestion
Diagnosing these issues requires a methodical approach and the right tools.
Built-in Diagnostic Tools
console.time() and console.timeEnd()
The simplest way to measure how long an operation takes:
// Measure execution time
console.time('operation');
// Some operation to measure
const array = Array(1000000).fill().map((_, i) => i);
array.sort((a, b) => b - a);
console.timeEnd('operation');
// Output: operation: 123.45ms
Process Statistics
Node.js provides access to process statistics through the process global object:
// Memory usage
const memoryUsage = process.memoryUsage();
console.log('Memory Usage:');
console.log(` RSS: ${Math.round(memoryUsage.rss / 1024 / 1024)} MB`);
console.log(` Heap Total: ${Math.round(memoryUsage.heapTotal / 1024 / 1024)} MB`);
console.log(` Heap Used: ${Math.round(memoryUsage.heapUsed / 1024 / 1024)} MB`);
console.log(` External: ${Math.round(memoryUsage.external / 1024 / 1024)} MB`);
// CPU usage
const startUsage = process.cpuUsage();
// Simulate CPU work
const now = Date.now();
while (Date.now() - now < 500); // Busy wait for 500ms
const endUsage = process.cpuUsage(startUsage);
console.log('CPU Usage:');
console.log(` User: ${endUsage.user / 1000}ms`);
console.log(` System: ${endUsage.system / 1000}ms`);
// Uptime
console.log(`Process uptime: ${process.uptime().toFixed(2)} seconds`);
Node.js Performance Hooks
Since Node.js 8.5.0, the perf_hooks module provides tools for measuring performance:
const { performance, PerformanceObserver } = require('perf_hooks');
// Create a performance observer
const obs = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`);
  });
});
// Subscribe to performance events
obs.observe({ entryTypes: ['measure'] });
// Mark the beginning of an operation
performance.mark('start');
// Simulate some work
const data = [];
for (let i = 0; i < 1000000; i++) {
  data.push(i * i);
}
// Mark the end and measure
performance.mark('end');
performance.measure('Data processing time', 'start', 'end');
// Clean up marks
performance.clearMarks();
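perf_hooks can also time individual functions. The sketch below uses performance.timerify() to wrap a function so that every call is reported through the same observer mechanism; processItems is a hypothetical function used only for illustration:
const { performance, PerformanceObserver } = require('perf_hooks');
// Observe the 'function' entries produced by timerified functions
const fnObserver = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    console.log(`${entry.name} took ${entry.duration.toFixed(2)}ms`);
  });
  fnObserver.disconnect();
});
fnObserver.observe({ entryTypes: ['function'] });
// A hypothetical function to measure
function processItems(count) {
  let total = 0;
  for (let i = 0; i < count; i++) total += Math.sqrt(i);
  return total;
}
// Wrap it so each invocation is timed automatically
const timedProcessItems = performance.timerify(processItems);
timedProcessItems(1000000);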
Advanced CPU Profiling
1. V8 Profiler with Source Maps
Node.js allows accessing the V8 profiler directly for CPU profiling. For applications using TypeScript or transpiled JavaScript, source maps are essential for meaningful profiling results:
const v8Profiler = require('v8-profiler-node8');
const fs = require('fs');
const path = require('path');
// Enable source map support for accurate profiling
require('source-map-support').install();
// Start CPU profiling with source map support
v8Profiler.setGenerateType(1); // Include type information
v8Profiler.startProfiling('CPU profile', true);
// Run code to profile
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
// Simulate both CPU and I/O work
function processData() {
  const start = Date.now();
  fibonacci(35);
  console.log(`CPU work took: ${Date.now() - start}ms`);
  // Simulate async work
  setImmediate(() => {
    const asyncStart = Date.now();
    fibonacci(30);
    console.log(`Async work took: ${Date.now() - asyncStart}ms`);
  });
}
processData();
// Stop profiling after async work completes
setTimeout(() => {
  const profile = v8Profiler.stopProfiling('CPU profile');
  profile.export((error, result) => {
    const filename = path.join(__dirname, 'profile.cpuprofile');
    fs.writeFileSync(filename, result);
    console.log(`CPU profile saved to ${filename}`);
    profile.delete();
  });
}, 1000);
To use the above example, you need to install the v8-profiler-node8 and source-map-support packages:
npm install v8-profiler-node8 source-map-support
The generated .cpuprofile file can be loaded in Chrome DevTools for visualization.
2. Node.js Built-in Profiling
Node.js has built-in profiling capabilities that can be accessed through command-line flags:
# Start a Node.js application with profiling enabled
node --prof app.js
# Process the generated log file
node --prof-process isolate-0xNNNNNNNN-NNNN-v8.log > processed.txt
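On newer Node.js releases (12 and later), the --cpu-prof flag can write a Chrome DevTools-compatible .cpuprofile file directly when the process exits, avoiding the separate processing step:
# Write a CPU profile to the current directory on exit
node --cpu-prof app.js
# Optionally choose the output directory
node --cpu-prof --cpu-prof-dir=./profiles app.js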
Advanced Memory Profiling
Memory Leak Detection Tip: Compare multiple heap snapshots taken at different times to identify objects that aren't being garbage collected as expected.
Heap Snapshots with Chrome DevTools
Heap snapshots can help identify memory leaks by capturing the memory state at a specific moment:
const heapdump = require('heapdump');
const path = require('path');
// Generate some data that might leak
let leakyData = [];
function potentiallyLeaky() {
  const data = {
    id: Date.now(),
    content: Array(1000).fill('potentially leaky data'),
    timestamp: new Date().toISOString()
  };
  leakyData.push(data);
}
// Simulate a memory leak with different retention patterns
setInterval(() => {
  potentiallyLeaky();
  // Keep only the last 100 items to simulate a partial leak
  if (leakyData.length > 100) {
    leakyData = leakyData.slice(-100);
  }
}, 100);
// Take heap snapshots at intervals
function takeHeapSnapshot(prefix) {
  const filename = path.join(__dirname, `${prefix}-${Date.now()}.heapsnapshot`);
  heapdump.writeSnapshot(filename, (err, filename) => {
    if (err) {
      console.error('Failed to take heap snapshot:', err);
    } else {
      console.log(`Heap snapshot saved to ${filename}`);
    }
  });
}
// Initial snapshot
takeHeapSnapshot('heap-initial');
// Take periodic snapshots
setInterval(() => {
  takeHeapSnapshot('heap-periodic');
}, 10000);
// Force garbage collection before final snapshot
// (global.gc is only available when Node.js is started with --expose-gc)
setTimeout(() => {
  if (global.gc) {
    global.gc();
    console.log('Garbage collection forced');
  }
  takeHeapSnapshot('heap-final');
}, 30000);
To use the above example, you need to install the heapdump package:
npm install heapdump
Heap snapshots can be analyzed in Chrome DevTools to identify memory leaks.
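If you would rather avoid a native dependency, newer Node.js releases (11.13.0 and later) can write heap snapshots through the built-in v8 module; a minimal sketch:
const v8 = require('v8');
const path = require('path');
// Synchronously writes a .heapsnapshot file and returns the filename used
const filename = v8.writeHeapSnapshot(path.join(__dirname, `heap-${Date.now()}.heapsnapshot`));
console.log(`Heap snapshot saved to ${filename}`);
Note that writing a snapshot is synchronous and pauses the process, so avoid doing it on a hot path.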
Event Loop and Latency Analysis
Event Loop Metrics to Monitor
- Event loop lag (time between event loop ticks; measured in the sketch after this list)
- Active handles and requests
- Pending async operations
- Garbage collection pauses
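Event loop lag can be sampled without third-party packages using monitorEventLoopDelay from perf_hooks (Node.js 11.10.0 and later); a minimal sketch, keeping in mind that the histogram reports values in nanoseconds:
const { monitorEventLoopDelay } = require('perf_hooks');
// Sample event loop delay with a 20ms resolution
const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();
setInterval(() => {
  // Convert nanoseconds to milliseconds for readability
  console.log(`Event loop delay - mean: ${(histogram.mean / 1e6).toFixed(2)}ms, p99: ${(histogram.percentile(99) / 1e6).toFixed(2)}ms, max: ${(histogram.max / 1e6).toFixed(2)}ms`);
  histogram.reset();
}, 5000);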
The event loop is central to Node.js performance. Blocking it causes performance degradation:
const toobusy = require('toobusy-js');
const http = require('http');
// Configure thresholds (in milliseconds)
toobusy.maxLag(100); // Maximum allowed lag before considering server too busy
toobusy.interval(500); // Check interval for event loop lag
// Create HTTP server with event loop monitoring
const server = http.createServer((req, res) => {
  // Check if event loop is overloaded
  if (toobusy()) {
    res.statusCode = 503; // Service Unavailable
    res.setHeader('Retry-After', '10');
    return res.end(JSON.stringify({
      error: 'Server is too busy',
      message: 'Please try again later',
      status: 503
    }));
  }
  // Simulate some work based on URL
  if (req.url === '/compute') {
    // CPU-intensive work
    let sum = 0;
    for (let i = 0; i < 1e7; i++) {
      sum += Math.random();
    }
    res.end(`Computed: ${sum}`);
  } else {
    // Normal response
    res.end('OK');
  }
});
// Add error handling
server.on('error', (err) => {
  console.error('Server error:', err);
});
// Start server
const PORT = process.env.PORT || 3000;
server.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
// Monitor event loop lag and memory usage
setInterval(() => {
  const lag = toobusy.lag();
  const mem = process.memoryUsage();
  console.log(`Event loop lag: ${lag}ms`);
  console.log(`Memory usage: ${Math.round(mem.heapUsed / 1024 / 1024)}MB / ${Math.round(mem.heapTotal / 1024 / 1024)}MB`);
}, 1000);
// Graceful shutdown
process.on('SIGINT', () => {
  console.log('Shutting down...');
  server.close(() => {
    process.exit(0);
  });
});
To use the above example, you need to install the toobusy-js package:
npm install toobusy-js
Flame Graphs
Flame graphs provide a visual representation of CPU sampling, helping to identify where time is spent in your application:
# Using 0x for flame graphs (install globally)
npm install -g 0x
# Run your application with 0x
0x app.js
# A browser will open with the flame graph visualization when the process exits
Benchmarking
Benchmarking helps compare different implementations to choose the most efficient one:
const Benchmark = require('benchmark');
const suite = new Benchmark.Suite;
// Add tests
suite
  .add('RegExp#test', function() {
    /o/.test('Hello World!');
  })
  .add('String#indexOf', function() {
    'Hello World!'.indexOf('o') > -1;
  })
  .add('String#includes', function() {
    'Hello World!'.includes('o');
  })
  // Add listeners
  .on('cycle', function(event) {
    console.log(String(event.target));
  })
  .on('complete', function() {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  // Run benchmarks
  .run({ 'async': true });
To use the above example, you need to install the benchmark package:
npm install benchmark
Node.js Inspector
Node.js has an integrated debugger and profiler accessible through Chrome DevTools:
# Start an application with the inspector
node --inspect app.js
# Start and immediately break (for debugging)
node --inspect-brk app.js
Open Chrome and navigate to chrome://inspect to access DevTools for your Node.js application. This provides access to:
- CPU profiler
- Memory heap snapshots
- Memory allocation timeline
- Debugger
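On Linux and macOS, the inspector can also be enabled on an already-running process by sending it the SIGUSR1 signal, which is useful for diagnosing a live process without restarting it (the inspector then listens on the default port 9229):
# Find the Node.js process ID
ps aux | grep node
# Tell the process to start listening for a debugger connection
kill -USR1 <pid>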
Clinic.js Suite
Clinic.js is a collection of tools for diagnosing performance issues in Node.js applications:
# Install the Clinic.js suite
npm install -g clinic
# Use Doctor to identify issues
clinic doctor -- node app.js
# Use Flame to generate CPU flame graphs
clinic flame -- node app.js
# Use Bubbleprof for async operations analysis
clinic bubbleprof -- node app.js
Practical Performance Diagnosis
Step 1: Establish Baseline Metrics
Before optimizing, establish baseline metrics for your application:
const autocannon = require('autocannon');
const { writeFileSync } = require('fs');
// Run a benchmark against your application
const result = autocannon({
  url: 'http://localhost:8080',
  connections: 100,
  duration: 10
});
// Save the results
result.on('done', (results) => {
  console.log('Baseline performance metrics:');
  console.log(`  Requests/sec: ${results.requests.average}`);
  console.log(`  Latency: ${results.latency.average}ms`);
  writeFileSync('baseline-metrics.json', JSON.stringify(results, null, 2));
});
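To use the above example, you need to install the autocannon package:
npm install autocannon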
Step 2: Identify Bottlenecks
Use profiling to identify bottlenecks:
- CPU profiling for compute-intensive operations
- Memory snapshots for memory leaks
- Flame graphs for call stack analysis
- Event loop monitoring for I/O and callback delays
Step 3: Fix and Verify
After applying optimizations, verify improvements against your baseline.
Common Performance Issues and Solutions
1. Memory Leaks
Signs: Increasing memory usage over time that doesn't plateau.
Solutions:
- Take heap snapshots at intervals and compare
- Check for global variables, event listeners, and closures that retain references (see the sketch below)
- Implement proper cleanup when objects are no longer needed
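As an illustration of the event listener case, here is a minimal sketch (the bus emitter and onMessage handler are hypothetical names) showing a handler that retains references until it is explicitly removed:
const { EventEmitter } = require('events');
const bus = new EventEmitter(); // hypothetical application-wide emitter
function createSubscriber() {
  const received = [];
  const onMessage = (msg) => received.push(msg); // the closure retains `received`
  bus.on('message', onMessage);
  return {
    received,
    // Without this cleanup, `onMessage` and everything it closes over
    // stays reachable from `bus` for the life of the process
    cleanup: () => bus.removeListener('message', onMessage)
  };
}
const subscriber = createSubscriber();
bus.emit('message', 'hello');
subscriber.cleanup(); // release the reference once the subscriber is done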
2. Long-Running Operations
Signs: High event loop lag, inconsistent response times.
Solutions:
- Move CPU-intensive work to worker threads (a sketch follows this list)
- Break long tasks into smaller chunks using setImmediate (process.nextTick runs before I/O callbacks, so it does not relieve event loop pressure)
- Consider offloading to dedicated services
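A minimal single-file sketch of the worker thread approach, reusing the fibonacci function from the profiling example as the expensive work:
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
if (isMainThread) {
  // Offload the CPU-intensive call so the event loop stays responsive
  const worker = new Worker(__filename, { workerData: 40 });
  worker.on('message', (result) => console.log(`fibonacci(40) = ${result}`));
  worker.on('error', (err) => console.error('Worker failed:', err));
} else {
  // Worker thread: do the heavy computation and report the result back
  parentPort.postMessage(fibonacci(workerData));
}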
3. Inefficient Database Queries
Signs: Slow response times, high latency.
Solutions:
- Profile database operations
- Optimize queries with proper indexing
- Use connection pooling
- Implement caching for frequently accessed data
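For the caching point, even a small in-memory cache with a time-to-live can noticeably reduce repeated query load; a minimal sketch in which the slow lookup is simulated with a timer:
// Simple in-memory cache with a time-to-live (TTL), keyed by query parameters
const cache = new Map();
const TTL_MS = 30 * 1000;
async function cachedQuery(key, runQuery) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.value; // fresh cache hit: skip the expensive query
  }
  const value = await runQuery();
  cache.set(key, { value, storedAt: Date.now() });
  return value;
}
// Hypothetical slow database lookup, stood in for by a 200ms delay
function slowUserLookup(id) {
  return new Promise((resolve) => setTimeout(() => resolve({ id, name: `user-${id}` }), 200));
}
async function main() {
  console.time('first lookup (cache miss)');
  await cachedQuery('user:42', () => slowUserLookup(42));
  console.timeEnd('first lookup (cache miss)');
  console.time('second lookup (cache hit)');
  await cachedQuery('user:42', () => slowUserLookup(42));
  console.timeEnd('second lookup (cache hit)');
}
main();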