Any production program needs logging. A solid logger in Node.js supports auditing events, debugging problems, monitoring application health, and feeding data to centralized systems (such as ELK, Graylog, or Datadog). Although mature libraries exist (Winston, Bunyan, Pino), there are times when you need a custom logger that is production-ready, lightweight, and structured to meet your specific requirements. This article explains how to implement a custom logger in Node.js for production. You'll learn about log levels, JSON structured logs, file rotation strategies, asynchronous writing, and integration tips for centralized logging. The examples are practical and ready to adapt.

Why Build a Custom Logger?
Before you start, ask why you need a custom logger:
- Lightweight & focused: Only include the features you need.
- Consistent JSON output: Useful for log aggregation and search.
- Custom transports: Send logs to files, HTTP endpoints, or message queues.
- Special formatting or metadata: Add request IDs, user IDs, or environment tags.
That said, if you need high performance and battle-tested features, consider existing libraries (Pino, Winston). But a custom logger is great when you want control and simplicity.
Key Requirements for Production Logging
For a production-ready logger, ensure the following:
- Log levels (error, warn, info, debug) with configurable minimum level.
- Structured output — JSON logs with timestamp, level, message, and metadata.
- Asynchronous, non-blocking writes to avoid slowing your app.
- Log rotation (daily rotation or size-based) and retention policy.
- Integration-friendly: support for stdout (for containers) and file or HTTP transports.
- Correlation IDs for tracing requests across services.
- Safe shutdown — flush buffers on process exit.
Basic Custom Logger (Simple, Sync to Console)
Start small to understand the shape of a logger. This basic example prints structured logs to the console.
// simple-logger.js
const levels = { error: 0, warn: 1, info: 2, debug: 3 };
const defaultLevel = process.env.LOG_LEVEL || 'info';

function formatLog(level, message, meta) {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    message,
    ...meta
  });
}

module.exports = {
  log(level, message, meta = {}) {
    if (levels[level] <= levels[defaultLevel]) {
      console.log(formatLog(level, message, meta));
    }
  },
  error(msg, meta) { this.log('error', msg, meta); },
  warn(msg, meta) { this.log('warn', msg, meta); },
  info(msg, meta) { this.log('info', msg, meta); },
  debug(msg, meta) { this.log('debug', msg, meta); }
};
Limitations: console output is fine for local development and containers (stdout), but you need file rotation, non-blocking IO, and transports for production.
Asynchronous File Transport (Non-blocking)
Writing to files synchronously can block the event loop. Use streams and async writes instead.
// file-logger.js
const fs = require('fs');
const path = require('path');

class FileTransport {
  constructor(filename) {
    this.filePath = path.resolve(filename);
    // Ensure the log directory exists before opening the stream
    fs.mkdirSync(path.dirname(this.filePath), { recursive: true });
    this.stream = fs.createWriteStream(this.filePath, { flags: 'a' });
  }

  write(line) {
    return new Promise((resolve, reject) => {
      this.stream.write(line + '\n', (err) => {
        if (err) return reject(err);
        resolve();
      });
    });
  }

  close() {
    return new Promise((resolve) => this.stream.end(resolve));
  }
}

module.exports = FileTransport;
Use the transport in your logger to offload writes.
A Minimal Production-ready Logger Class
This logger supports multiple transports (console, file), JSON logs, async writes, log level filtering, and graceful shutdown.
// logger.js
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

class Logger {
  constructor(options = {}) {
    this.level = options.level || process.env.LOG_LEVEL || 'info';
    this.transports = options.transports || [console];
    this.isFlushing = false;
    // Flush logs when the process is about to exit
    process.on('beforeExit', () => this.flushSync());
    process.on('SIGINT', async () => { await this.flush(); process.exit(0); });
  }

  log(level, message, meta = {}) {
    if (LEVELS[level] > LEVELS[this.level]) return;
    const entry = {
      timestamp: new Date().toISOString(),
      level,
      message,
      ...meta
    };
    const line = JSON.stringify(entry);
    this.transports.forEach((t) => {
      if (t === console) console.log(line);
      else t.write(line).catch(err => console.error('Log write failed', err));
    });
  }

  error(msg, meta) { this.log('error', msg, meta); }
  warn(msg, meta) { this.log('warn', msg, meta); }
  info(msg, meta) { this.log('info', msg, meta); }
  debug(msg, meta) { this.log('debug', msg, meta); }

  async flush() {
    if (this.isFlushing) return;
    this.isFlushing = true;
    const closes = this.transports
      .filter(t => t !== console && typeof t.close === 'function')
      .map(t => t.close());
    await Promise.all(closes);
    this.isFlushing = false;
  }

  // Synchronous flush for quick shutdown hooks
  flushSync() {
    this.transports
      .filter(t => t !== console && t.stream)
      .forEach(t => t.stream.end());
  }
}

module.exports = Logger;
Usage
const Logger = require('./logger');
const FileTransport = require('./file-logger');
const file = new FileTransport('./logs/app.log');
const logger = new Logger({ level: 'debug', transports: [console, file] });
logger.info('Server started', { port: 3000 });
logger.debug('User loaded', { userId: 123 });
Log Rotation and Retention
Production logs grow fast. Implement rotation either:
- Externally (recommended): use system tools like logrotate or container-friendly sidecars.
- Inside app: implement daily or size-based rotation. You can use libraries (e.g., rotating-file-stream or winston-daily-rotate-file) for robust behavior.
Why external rotation is recommended: it separates concerns and avoids complicating app logic. In containers, prefer writing logs to stdout and let the platform handle rotation and centralization.
Structured Logs and Centralized Logging
For production, prefer structured JSON logs because they are machine-readable and searchable. Send logs to:
- ELK (Elasticsearch, Logstash, Kibana)
- Datadog / New Relic
- Graylog
- Fluentd / Fluent Bit
You can implement an HTTP transport to forward logs to a collector (with batching and retry):
// http-transport.js (very simple)
const https = require('https');

class HttpTransport {
  constructor(url) { this.url = url; }

  write(line) {
    return new Promise((resolve, reject) => {
      const req = https.request(this.url, { method: 'POST' }, (res) => {
        res.on('data', () => {}); // drain the response
        res.on('end', resolve);
      });
      req.on('error', reject);
      req.write(line + '\n');
      req.end();
    });
  }

  close() { return Promise.resolve(); }
}

module.exports = HttpTransport;
Important: Batch logs, add retry/backoff, and avoid blocking the app when the remote endpoint is slow.
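The batching part can be sketched independently of the network layer. In this sketch, `send` is an injected function (in practice it would POST the joined batch, with retry and backoff, to your collector), and `batchSize`/`flushIntervalMs` are illustrative defaults:

```javascript
// batching-transport.js — buffer log lines and flush on size or interval.
// `send` is injected so the batching logic stays testable without a network.
class BatchingTransport {
  constructor(send, { batchSize = 50, flushIntervalMs = 5000 } = {}) {
    this.send = send;
    this.batchSize = batchSize;
    this.buffer = [];
    this.timer = setInterval(() => this.flush(), flushIntervalMs);
    this.timer.unref(); // don't keep the process alive just for the timer
  }

  write(line) {
    this.buffer.push(line);
    if (this.buffer.length >= this.batchSize) return this.flush();
    return Promise.resolve();
  }

  async flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0);
    try {
      await this.send(batch.join('\n'));
    } catch (err) {
      // Requeue on failure so a later flush can retry; a real implementation
      // would cap the buffer size and apply exponential backoff.
      this.buffer.unshift(...batch);
    }
  }

  async close() {
    clearInterval(this.timer);
    await this.flush();
  }
}

module.exports = BatchingTransport;
```

Because failed batches are requeued rather than dropped, a slow or down collector backs up in memory instead of blocking the app; capping that buffer is the main thing a production version would add.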
Security and Privacy Considerations
- Never log sensitive data (passwords, full credit-card numbers, tokens). Mask or redact sensitive fields.
- Use environment variables for configuration (log level, endpoints, credentials).
- Audit log access and store logs in secure storage with retention policies.
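Redaction can be as simple as masking known-sensitive keys in metadata before a log entry is serialized. A minimal sketch (the key list here is illustrative; extend it for your data):

```javascript
// redact.js — mask sensitive fields in log metadata before serialization.
const SENSITIVE_KEYS = new Set(['password', 'token', 'authorization', 'creditCard']);

function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value === null || typeof value !== 'object') return value;
  const out = {};
  for (const [key, val] of Object.entries(value)) {
    // Replace sensitive values entirely; recurse into nested objects
    out[key] = SENSITIVE_KEYS.has(key) ? '[REDACTED]' : redact(val);
  }
  return out;
}

module.exports = redact;
```

Call `redact(meta)` inside the logger's `log()` method so no transport ever sees the raw values.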
Correlation IDs and Contextual Logging
For debugging requests across services, attach a correlation ID (request ID). In Express middleware, generate or read a request ID and pass it to logging.
// middleware.js
// Requires the `uuid` package; on Node 14.17+ you could use the
// built-in crypto.randomUUID() instead.
const { v4: uuidv4 } = require('uuid');

module.exports = (req, res, next) => {
  req.requestId = req.headers['x-request-id'] || uuidv4();
  res.setHeader('X-Request-Id', req.requestId);
  next();
};

// usage in route
app.get('/user', (req, res) => {
  logger.info('Fetching user', { requestId: req.requestId, userId: 42 });
});
Pass requestId into logger metadata so aggregated logs can be searched by request ID in your log platform.
Monitoring, Alerts, and Metrics
Logging is only useful if you monitor and alert on it:
- Create alerts for error spikes.
- Track log volume and latency of transports.
- Emit metrics (e.g., count of errors) to Prometheus or your APM.
- Example: increment a counter whenever logger.error() is called.
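The last point can be sketched by wrapping logger.error with a counter. Here `counters` is a plain in-process object for illustration; in production you would increment a Prometheus counter (e.g. via prom-client) or report to your APM instead:

```javascript
// metrics.js — count error logs by wrapping logger.error.
function instrumentErrors(logger, counters = { logErrors: 0 }) {
  const originalError = logger.error.bind(logger);
  logger.error = (msg, meta) => {
    counters.logErrors += 1; // swap for a real metrics counter in production
    originalError(msg, meta);
  };
  return counters;
}

module.exports = instrumentErrors;
```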
When to Use a Library Instead
A custom logger is useful for small or specialized needs. For large-scale production systems, consider libraries:
- Pino — super-fast JSON logger for Node.js.
- Winston — flexible, supports multiple transports.
- Bunyan — structured JSON logs with tooling.
These libraries handle performance, rotation, and transports for you. You can also wrap them to create a simple API for your app.
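One way to keep that option open is a thin facade: your application calls one small API, and the backend (your custom logger today, Pino or Winston later) is injected. A sketch, assuming any backend that exposes error/warn/info/debug methods:

```javascript
// log-facade.js — a thin facade so the backing logger can be swapped
// without touching call sites.
function createLogger(backend, baseMeta = {}) {
  const call = (level) => (msg, meta = {}) =>
    backend[level](msg, { ...baseMeta, ...meta });
  return {
    error: call('error'),
    warn: call('warn'),
    info: call('info'),
    debug: call('debug'),
    // Child loggers inherit metadata, a pattern Pino and Bunyan also offer
    child: (meta) => createLogger(backend, { ...baseMeta, ...meta }),
  };
}

module.exports = createLogger;
```

With this in place, swapping the backend is a one-line change at startup, and `child({ requestId })` gives you contextual logging per request.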
Checklist: Production Logging Ready
- JSON structured logs
- Configurable log level via env var
- Non-blocking writes and transports
- Log rotation/retention strategy
- Correlation IDs and contextual metadata
- Sensitive data redaction
- Centralized logging integration
- Graceful shutdown & flush
Summary
A production-ready custom logger in Node.js should be simple, non-blocking, structured, and secure. Build a small core logger that formats JSON logs and supports transports (console, file, HTTP). For rotation and aggregation, prefer external systems (logrotate, container logs, or centralized logging platforms). Add correlation IDs, redact sensitive information, and flush logs on shutdown. When your needs grow, consider using high-performance libraries like Pino or Winston and adapt them to your environment. Implementing logging correctly makes debugging easier, improves observability, and helps you run reliable production services.