11 Best Practices for Logging in Node.js

Logging is a great way to gain insight into
your application’s behavior in real-world conditions. It can be a valuable
source of data for troubleshooting issues and identifying trends that may be
useful for guiding product decisions.

To derive the greatest value from your logs, it is necessary to set up a logging
strategy that follows the widely accepted practices for logging so that you can
quickly and easily understand how your whole system runs and even solve problems
before they impact end-users.

With this in mind, let’s examine some best practices to follow when logging in a Node.js application.

Logtail dashboard

🔭 Want to centralize and monitor your Node.js logs?

Head over to Logtail and start ingesting your logs in 5 minutes.

1. Choose a standard and configurable logging framework

Node.js does not have a logging framework in its standard library (the Console
module doesn’t count), but there is no shortage of third-party logging
frameworks available through npm. Winston, Pino, and Bunyan are some of the
most popular ones. They all offer similar features and are configurable and
extensible enough to meet the needs of most applications. If you’re looking for
simpler and lighter solutions, packages like bole and loglevel are also
available.

Unless your needs are highly specialized, you should prefer an existing open
source logging framework over rolling your own. You’ll reap the benefits of
using something that is battle-tested, and you won’t have to reinvent the wheel
or maintain a package that is only tangentially related to your business
purpose.

It is also important to ensure that you are not locked into a specific library.
You can encapsulate your logging framework of choice in a class so that your
application never interacts with the framework directly but instead calls the
methods on the class. If you ever need to swap the logging framework for
another, you’ll only need to make the change in one place, whereas it would be
far more complicated if the logging framework permeated your entire
application.
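
For example, here’s a minimal sketch of such a wrapper (using Pino underneath;
the class and method names are illustrative rather than a prescribed
interface):

const pino = require('pino');

// The rest of the application imports this module instead of the framework,
// so swapping Pino for Winston later only touches this file.
class Logger {
  constructor(options = {}) {
    this.logger = pino(options);
  }

  info(context, message) {
    this.logger.info(context, message);
  }

  warn(context, message) {
    this.logger.warn(context, message);
  }

  error(context, message) {
    this.logger.error(context, message);
  }
}

module.exports = new Logger({ level: process.env.LOG_LEVEL || 'info' });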

In this post, I will demonstrate various logging concepts using
Winston
and Pino as
they are the two most popular solutions for logging in the Node.js ecosystem,
but I think Pino is the better choice for most people due to its balance of
speed and features.

2. Log using a structured format

One of the easiest ways to improve your logging setup is to use a structured
format so that your logs can be processed effectively by various logging
tools. An unstructured log entry is made up of strings meant to be read by
humans. These strings often contain variables that are easy for humans to
recognize but difficult for machines to parse. Here’s an example of an
unstructured log entry:

User 'xyz' moved card 'abc' to board '123'

The above entry is easy enough for a human reader to understand, but imagine if
you had thousands of entries formatted in this manner. You would have to
write custom (and often complicated) regular expressions to filter the logs, and
automated tools may not be able to process your logs for you since the format
isn’t consistent. These problems are eliminated when you use a structured format
to organize the data at the point of generation.

For example, here’s the previous log entry in a structured format:

{"boardId":"123","cardId":"abc","message":"User 'xyz' moved card 'abc' to board '123'","userId":"xyz"}

Notice how each variable is now placed in a distinct property so that it’s easy
to group or filter events by some criteria. JSON has emerged as the most popular
structured format due to its widespread support, but there are alternatives
like LogFmt. Context doesn’t replace the need for meaningful messages, so try
not to do something like this:

{"boardId":"123","cardId":"abc","message":"card moved","userId":"xyz"}

Structured logs have one major drawback: they are less readable than
unstructured logs (just compare the two examples above). However, since humans
are typically not the primary audience for structured log records, this is
usually not a problem in practice. Instead, they are meant to be processed
further by automated tools before the results of a query are presented in a
human-friendly format.
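
For illustration, here’s roughly how the structured entry above could be
produced with Pino (the property names simply mirror the earlier example):

const pino = require('pino');
const logger = pino();

// Each variable becomes a distinct property instead of only being
// interpolated into the human-readable message string.
logger.info(
  { userId: 'xyz', cardId: 'abc', boardId: '123' },
  "User 'xyz' moved card 'abc' to board '123'",
);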

Both Winston and Pino default to logging in JSON, so we will continue to present
any sample log entries in the JSON format for the rest of this article. Pino
also provides a way to
convert structured logs to a more readable format
in development environments. It involves installing the pino-pretty module
globally like this:

npm install -g pino-pretty

Afterward, you can pipe your program’s output to the pino-pretty command as
shown below:

node index.js | pino-pretty

A log entry like this:

{"level":30,"time":"2022-06-16T14:33:14.245Z","pid":373373,"hostname":"fedora","user_id":"283487","msg":"user profile updated"}

will now be transformed to the following in the terminal:

[2022-06-16T14:33:39.526Z] INFO (373617 on fedora): user profile updated
    user_id: "283487"

3. Use the correct log level

Log levels define the severity or urgency of a log
entry. For example, a message that reads like this:

{"message":"user 'xyz' created successfully"}

has a much different implication than one that reads like this:

{"message":"database failed to connect"}

The former describes an event that occurred during normal operation of the
software, while the latter is drawing attention to some critical error that
could be causing an issue.

Without log severity levels, it would be difficult to set up an automated
alerting system that notifies you when the application produces a log entry that
demands attention. Reading the logs would also be a painful or even impractical
process since you might have to read every entry to determine which ones are
relevant to the task at hand.

Here’s how the messages above will look with the correct log level attached to
them:

{"level":"info","message":"user 'xyz' created successfully"}
{"level":"fatal","message":"database failed to connect"}

Usually, an entry’s severity will be presented in a level property as shown
above. The contents of this property could be a string (info, error, warn,
etc), or an integer constant. For example, Winston defaults to a string log
level output, but Pino uses an integer constant by default:

// Winston
{"level":"info","message":"user 'xyz' created successfully"}

// Pino (30 represents the info level)
{"level":30,"time":1655358744512,"pid":309903,"hostname":"fedora","msg":"user 'xyz' created successfully"}

You can also configure Pino to use strings instead for greater compatibility
with various log management tools and platforms:

const pino = require('pino');

const logger = pino({
  formatters: {
    level(label) {
      return { level: label };
    },
  },
});

logger.info("user 'xyz' created successfully");

Output

{"level":"info","time":1655359109514,"pid":310899,"hostname":"fedora","msg":"user 'xyz' created successfully"}

Another thing that varies between different frameworks is the default log
levels. For example, Winston uses the following log levels:

{
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  verbose: 4,
  debug: 5,
  silly: 6
}

while Pino’s default levels are:

{
  fatal: 60,
  error: 50,
  warn: 40,
  info: 30,
  debug: 20,
  trace: 10
}

Winston’s log levels are derived from NPM, but Pino’s are more in line with what
you’re likely to encounter in the wider software development ecosystem. You can
override the defaults in both frameworks with your preferences if you wish. For
example, Winston can be configured to use Pino’s defaults through the following
snippet:

const winston = require('winston');

const logLevels = {
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5,
};

const logger = winston.createLogger({
  levels: logLevels,
});

The only difference here is that Pino uses a larger integer value to indicate
greater severity, while the reverse is the case for Winston. Here’s what each of
the log levels above means (listed in ascending order of urgency), with a usage
sketch after the list:

  • TRACE: this level should be used when tracing the path of a program’s
    execution.
  • DEBUG: any messages that may be needed for troubleshooting or diagnosing
    issues should be logged at this level.
  • INFO: this level should be used when capturing a typical or expected event
    that occurred during normal program execution, usually things that are notable
    from a business logic perspective.
  • WARN: log at this level when an event is unexpected but recoverable. You
    can also use it to indicate potential problems in the system that need to be
    mitigated before they become actual errors.
  • ERROR: any error that prevents normal program execution should be logged
    at this level. The application can usually continue to function, but the error
    must be addressed if it persists.
  • FATAL: use this level to log events that prevent crucial business
    functions from working. In situations like this, the application cannot
    usually recover, so immediate attention is required to fix such issues.
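
Here’s a quick sketch of these levels in use with Pino (the events are invented
for illustration, and the minimum level is lowered to trace so that every entry
is emitted):

const pino = require('pino');
const logger = pino({ level: 'trace' });

logger.trace('entering updateProfile()');
logger.debug({ userId: '283487' }, 'received profile update payload');
logger.info('user profile updated successfully');
logger.warn('profile image missing, falling back to default avatar');
logger.error(new Error('write failed'), 'could not persist profile update');
logger.fatal('database connection lost, cannot continue');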

The use of log levels also allows you to control the amount of logs that your
application generates. For example, when debugging your Node.js application, or in a testing
environment, it makes sense to log as much information as possible about the
program and this means logging at the DEBUG or TRACE level. Production
environments will typically default to INFO to avoid getting bogged down by
lots of debugging entries.

Both Pino and Winston default to the INFO level. This means that only messages
logged at a severity of INFO or greater will be produced while all others are
suppressed. It’s advisable to control this setting via an environment variable
so that you can update it without modifying the application code.

// Winston
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
});

// Pino
const logger = pino({
  level: process.env.LOG_LEVEL || 'warn',
});

4. Always log to the standard output

Most logging frameworks allow you to configure where you want to output your
logs to. Both Pino and
Winston offer built-in and
third-party transports for transmitting log output to one or more destinations
such as the console, a file, an HTTP endpoint, a database, or some other
location. However, I recommend you always send your application logs to the
standard output and do any further processing or redirection using an external
program.

This is a good practice since the application’s behavior can be observed
directly by inspecting the logs in the terminal during local development. In a
production environment, the log stream can be captured by a log router such as
Vector, Fluentd, or
LogStash, and routed to one or more
destinations for long-term analysis and storage.

This approach provides the greatest flexibility for choosing where your logs
should go in different environments, as dedicated log routers offer a multitude
of options that are often unavailable or difficult to implement in logging
frameworks. It also prevents your application from consuming limited resources
unnecessarily, since the task of routing logs is handled by an external
program.

Pino logs to the standard output by default, but Winston has to be configured to
do so through the transports option on a logger instance:

const winston = require('winston');

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});

The exception to this rule is when you don’t have complete control over the
environment that your application is being deployed to. In such cases, you can
investigate how the platform handles logging or utilize the solutions offered by
your logging framework of choice.

5. Include timestamps in your log entries

Including a timestamp in each log entry is one of the most essential steps you
can take to organize your log entries. If you have no way to distinguish logs
that were recorded five minutes ago from those recorded five months ago, it
will be challenging to find the log entries you need to debug an issue at any
given moment.

Some frameworks output a timestamp by default without additional configuration.
For example, Pino outputs the number of milliseconds elapsed since January 1,
1970 00:00:00 UTC (Date.now()) in a time property as seen below:

const logger = require('pino')();
logger.info("Hello");

Output

{"level":30,"time":1655234000831,"pid":214017,"hostname":"fedora","msg":"Hello"}

On the other hand, Winston does not include a timestamp in its JSON-formatted
log entries without additional configuration. You have to combine its json()
format with its timestamp() format to produce the timestamp in each log entry:

const winston = require('winston');
const { combine, timestamp, json } = winston.format;

const logger = winston.createLogger({
  level: 'http',
  format: combine(timestamp(), json()),
  transports: [new winston.transports.Console()],
});

logger.info('Hello');

This produces a timestamp property that expresses the date and time in the
ISO-8601 format:

Output

{"level":"info","message":"Hello","timestamp":"2022-06-15T04:32:19.955Z"}

Usually, the framework will provide a way to customize the timestamp format and
the name of the property that contains the timestamp. We generally recommend
the ISO-8601 format and a timestamp property, respectively. There are a few
variants of the ISO-8601 format, but we recommend logging in
Coordinated Universal Time (UTC)
to remove timezone ambiguity.

Winston’s timestamp output uses UTC time by default. UTC is also known as “Zulu
time”, hence the Z at the end. The T delimiter separates the date from the
time, and the .955 part in the time segment expresses the exact millisecond in
which the event occurred. If you record timestamps in UTC, it is easy to
convert them to any timezone by simply adding or subtracting the offset from UTC.

For example, to convert the above timestamp to Eastern Standard Time (EST),
which has an offset of -5 hours from UTC, you subtract five hours from the UTC
time so that it becomes:

2022-06-14T23:32:19.955-05:00

You can easily configure Pino (and most other frameworks) to output the
timestamp in the ISO-8601 format:

const pino = require("pino");

const logger = pino({
  timestamp: pino.stdTimeFunctions.isoTime,
});

logger.info("Hello");

Output

{"level":30,"time":"2022-06-15T05:03:17.639Z","pid":226316,"hostname":"fedora","msg":"Hello"}

If you want to rename the time property to timestamp in Pino, you may use
the following configuration:

const logger = pino({
  timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
});

6. Be as descriptive as possible

One way to ensure that each log entry provides valuable insight is to include
adequate detail about the event being logged. Anticipate that your application
logs may be the only data available during a troubleshooting session where an
urgent fix is needed, so include as much relevant information as possible in
the message. Err on the side of verbosity, but don’t go so far as to include
irrelevant or superfluous details.

Here’s an example of some bad log entries that aren’t too helpful to someone
reading the log:

{"level":"info","message":"attempting to lock file","timestamp":"2022-06-15T08:01:35.447Z"}
{"level":"warn","message":"unable to lock file","timestamp":"2022-06-15T08:01:36.210Z"}

And here’s a better version of the messages:

{"level":"info","message":"attempting to lock file 'foo.txt'","timestamp":"2022-06-15T08:01:35.447Z"}
{"level":"warn","message":"unable to lock file 'foo.txt', will try again in 10 seconds","timestamp":"2022-06-15T08:01:36.210Z"}

This improved version gives more context on the entity that caused the issue and
what action the program is taking to resolve it. Such entries may be followed by
ones like these approximately 10 seconds later:

{"level":"info","message":"attempting to lock file 'foo.txt","timestamp":"2022-06-15T08:01:46.447Z"}
{"level":"info","message":"file 'foo.txt' was locked successfully","timestamp":"2022-06-15T08:01:47.210Z"}

Here are some other examples of descriptive log messages to emulate in your
application:

{"level":"info","message":"starting example.com on PID 228471","timestamp":"2022-06-15T08:24:56.630Z"}
{"level":"error","message":"unable to open listener for installer. Is the application already running?","timestamp":"2022-06-15T08:24:56.632Z"}
{"level":"warn","message":"cannot find config file at /home/user/.config/app.conf, falling back to application defaults","timestamp":"2022-06-15T08:24:56.633Z"}

7. Add enough context to your log messages

Contextual logging refers to the act of adding extra details to a log entry and
sharing such details across related events. Usually, you want to include
something that uniquely identifies the operation being performed, such as
request, transaction, or user IDs. This lets you relate log entries based on
such identifiers and track the flow of a transaction across machines, networks,
and services.

Both Pino and Winston allow you to pass local context at the log point through
an optional object argument, which is placed before the log message in Pino and
after it in Winston. For example, with Pino:

const pino = require('pino');
const logger = pino({
  timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
});

logger.info(
  {
    requestId: "f9ed4675f1c53513c61a3b3b4e25b4c0",
  },
  "Uploading 'image.png' was successful",
);

Output

{"level":30"timestamp":"2022-06-15T13:19:15.619Z","pid":261078,"hostname":"fedora","requestId":"f9ed4675f1c53513c61a3b3b4e25b4c0","msg":"Uploading 'image.png' was successful"}

Notice how the requestId property is added to the JSON output for the log
entry. You can add other relevant data about the event being logged in this
manner so that it will be easy to group related events or filter them based on
your chosen criteria. Such data can also be invaluable when gathering analytics
about the system.
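
For comparison, here’s how the same context might be passed with Winston, where
the metadata object comes after the message (assuming a logger configured as in
the earlier Winston snippets):

logger.info("Uploading 'image.png' was successful", {
  requestId: 'f9ed4675f1c53513c61a3b3b4e25b4c0',
});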

If you need to repeat some contextual data across multiple log entries, it’s not
optimal to continually duplicate the data at the point of logging. Instead, you
can use the concept of child loggers to prevent this repetition. A child logger
inherits all properties from its parent but can also accept additional metadata.
Winston and Pino support calling the .child() method on a logger instance to
create a child logger.

const winston = require("winston");
const { combine, timestamp, json } = winston.format;

const logger = winston.createLogger({
  level: "info",
  format: combine(timestamp(), json()),
  transports: [new winston.transports.Console()],
});

const child = logger.child({
  userID: "ou04iu22i",
});

child.info("user profile updated successfully");

Output

{"level":"info","message":"user profile updated successfully","timestamp":"2022-06-15T18:49:09.696Z","userID":"ou04iu22i"}

Henceforth, each log entry created with the child logger will contain the
userID property which allows you to trace the actions performed by a specific
user easily. You can also use this approach to ensure that certain metadata is
included in all the logs produced by a particular service.

Although this child logger approach works well enough, it doesn’t account for
situations where contextual data needs to be shared across multiple scopes that
don’t have direct access to it. For example, identifying a thread of execution
via the request ID may be difficult to achieve unless all your business logic
is defined within route handlers (not recommended) or you explicitly pass the
request ID (or a context object) to every function called from the route
handler.

In such situations, you can investigate the use of
Continuation-Local Storage (CLS) for
keeping track of asynchronous context that is propagated through a chain of
function calls. You can use a library like
cls-proxify to integrate CLS with
your server or logging framework and create a child logger per request with
dynamic context from the request itself. You can find out more about the idea
behind CLS by reading
this article on the subject.
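
As an illustration of the general idea, here’s a minimal sketch using Node’s
built-in AsyncLocalStorage (an implementation of the CLS concept available
since Node.js 12.17); the runWithRequestLogger and getLogger helpers are
inventions for this example:

const { AsyncLocalStorage } = require('async_hooks');
const pino = require('pino');

const storage = new AsyncLocalStorage();
const baseLogger = pino();

// Call this at the start of each request (e.g., from middleware) to make
// a request-scoped child logger available to the entire async call chain.
function runWithRequestLogger(requestId, callback) {
  const childLogger = baseLogger.child({ requestId });
  storage.run(childLogger, callback);
}

// Any function called (directly or indirectly) within the callback can
// retrieve the request-scoped logger without it being passed explicitly.
function getLogger() {
  return storage.getStore() || baseLogger;
}

runWithRequestLogger('f9ed4675f1c53513c61a3b3b4e25b4c0', () => {
  getLogger().info('handling request'); // this entry includes the requestId
});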

8. Always include a stack trace when logging exceptions

You should ensure that your logging framework includes the full stack trace when
logging an exception so that anyone reading the log entry can find the
information necessary to diagnose where the exception occurred. If you’re using
Winston, you may be surprised to see that error details are essentially
discarded by default until you explicitly configure them to be logged:

const winston = require("winston");
const { combine, timestamp, json } = winston.format;
const logger = winston.createLogger({
  level: "info",
  format: combine(timestamp(), json()),
  transports: [new winston.transports.Console()],
});

logger.error(new Error("an error"));

Output

{"level":"error","timestamp":"2022-06-15T21:16:30.340Z"}

Notice how the error details are absent from the log entry. This is a surprising
default since errors and exceptions are probably the most common use case for
logging. Pino has a more reasonable default here:

 
const logger = pino({
  timestamp: pino.stdTimeFunctions.isoTime,
});

logger.error(new Error("an error"));

Output

{"level":50,"time":"2022-06-15T21:23:04.436Z","pid":285659,"hostname":"fedora","err":{"type":"Error","message":"an error","stack":"Error: an error\n    at Object.<anonymous> (/home/ayo/dev/betterstack/betterstack-community/demo/snippets/main.js:23:9)\n    at Module._compile (node:internal/modules/cjs/loader:1105:14)\n    at Module._extensions..js (node:internal/modules/cjs/loader:1159:10)\n    at Module.load (node:internal/modules/cjs/loader:981:32)\n    at Module._load (node:internal/modules/cjs/loader:827:12)\n    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)\n    at node:internal/main/run_main_module:17:47"},"msg":"an error"}

This entry contains an err object that indicates the type of the error (which
could be the base Error class or a custom type that is derived from Error),
the error message, and a complete stack trace. This is an excellent output as
you can easily find errors by creating a filter on the err property, or even
specific types of errors through err.type.

Winston can also be made to output a stack trace with some further
configuration. You need to add the
errors format as an argument to
combine(), and specify that the stack trace should be included.

const winston = require("winston");
const { combine, timestamp, json, errors } = winston.format;

const logger = winston.createLogger({
  level: "info",
  format: combine(errors({ stack: true }), timestamp(), json()),
  transports: [new winston.transports.Console()],
});

logger.error(new Error("an error"));

With the above configuration in place, Winston will populate the message
property with the error message, and a stack property will be present with the
full stack trace.

Output

{"level":"error","message":"an error","stack":"Error: an error\n    at Object.<anonymous> (/home/ayo/dev/betterstack/betterstack-community/demo/snippets/main.js:23:9)\n    at Module._compile (node:internal/modules/cjs/loader:1105:14)\n    at Module._extensions..js (node:internal/modules/cjs/loader:1159:10)\n    at Module.load (node:internal/modules/cjs/loader:981:32)\n    at Module._load (node:internal/modules/cjs/loader:827:12)\n    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)\n    at node:internal/main/run_main_module:17:47","timestamp":"2022-06-15T21:34:49.904Z"}

It is also important to account for uncaught exceptions and unhandled promise
rejections in your application. These should be logged before the process exits
(and is restarted by something like PM2). Winston provides the
exceptionHandlers and rejectionHandlers properties which may be configured
on a logger instance:

const logger = winston.createLogger({
  level: 'info',
  format: combine(errors({ stack: true }), timestamp(), json()),
  transports: [new winston.transports.Console()],
  exceptionHandlers: [
    new winston.transports.Console({ consoleWarnLevels: ['error'] }),
  ],
  rejectionHandlers: [
    new winston.transports.Console({ consoleWarnLevels: ['error'] }),
  ],
});

The configuration above will log uncaught exceptions and unhandled promise
rejections to the standard error before the process exits. In either case, the
log entries produced will include the full stack trace as well as other relevant
details about the process, regardless of whether the errors format is used or
not.

On the other hand, Pino does not have a special mechanism for logging uncaught
exceptions or promise rejections, but you can listen for the uncaughtException
and unhandledRejection events:

const pino = require('pino');
const logger = pino({
  timestamp: pino.stdTimeFunctions.isoTime,
});

process.on('uncaughtException', (err) => {
  logger.fatal(err);
  process.exit(1);
});

process.on('unhandledRejection', (err) => {
  logger.fatal(err);
  process.exit(1);
});

9. Don’t log sensitive information

Sensitive information about your users should never make it into your log
entries, where it would be at risk of malicious use. Such data could include
passwords, credit card details, or authorization tokens. In some cases, IP
addresses are also considered to be Personally Identifiable Information (PII).

In 2018, Twitter had to advise its users to change their passwords because it
had accidentally recorded millions of plaintext passwords to an internal log.
Although no evidence of misuse was found, it remains an example of how your
application logs can compromise user security or privacy if adequate care is
not taken. If an attacker can retrieve confidential information from your logs,
regulatory fines for violations of the GDPR in Europe, the California Consumer
Privacy Act (CCPA), or other similar data protection laws may be enforced
against your business.

Relying on standard techniques like hashing to obfuscate and anonymize personal
information is potentially dangerous, especially for values within a well-known
and predictable range, as they remain susceptible to dictionary and rainbow
table attacks. If you must log sensitive data, consider using an ID token that
references the original sensitive data instead. With the proper permissions, the
token IDs can be used to retrieve the securely stored original data when
necessary.

Other techniques to prevent sensitive information from being logged include:

  • Code reviews where the reviewer must verify that no sensitive data is logged
    in the pull request before the code is merged.
  • Building heuristics in your structured logging pipeline such that known
    sensitive fields are automatically removed at log point (see below).
  • Setting up an automated service that continually searches the logs and alerts
    the team if sensitive data is found so that it can be scrubbed immediately.

Pino has a log redaction feature that can
help with preventing sensitive data from making it into your logs. When setting
up a logger instance, you can provide a list of keys that should be redacted
from the entry. Then, you can decide to replace the redacted item with a
placeholder or remove it entirely from the log.

const pino = require('pino');
const logger = pino({
  level: process.env.PINO_LOG_LEVEL || 'debug',
  timestamp: pino.stdTimeFunctions.isoTime,
  redact: ['name', 'email', 'password', 'profile.address', 'profile.phone'],
});

const user = {
  name: 'John doe',
  id: '283487',
  email: '[email protected]',
  profile: {
    address: '1, Avengers street',
    phone: 123456789,
    favourite_color: 'Red',
  },
};

logger.info(user, 'user profile updated');

Logging the user object above will produce the following output:

Output

{"level":30,"time":"2022-06-16T13:19:48.610Z","pid":362100,"hostname":"fedora","name":"[Redacted]","id":"283487","email":"[Redacted]","profile":{"address":"[Redacted]","phone":"[Redacted]","favourite_color":"Red"},"msg":"user profile updated"}

Notice how the fields mentioned in the redact array have been replaced in the
log with the [Redacted] placeholder. You can also specify that the fields be
removed instead by changing the configuration to the following:

const pino = require('pino');
const logger = pino({
  level: process.env.PINO_LOG_LEVEL || 'debug',
  timestamp: pino.stdTimeFunctions.isoTime,
  redact: {
    paths: ['name', 'password', 'profile.address', 'profile.phone'],
    remove: true,
  },
});

. . .

The following output will now be produced:

Output

{"level":30,"time":"2022-06-16T13:25:02.141Z","pid":364057,"hostname":"fedora","id":"283487","profile":{"favourite_color":"Red"},"msg":"user profile updated"}

Winston does not provide a built-in feature to redact secrets, but you can
use a custom format
to implement this feature. See the examples provided in this
GitHub issue.
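
As a rough sketch of that approach, a custom format can overwrite known
sensitive fields before each entry is serialized; the sensitiveFields list
below is illustrative, and this top-level check is far less flexible than
Pino’s path-based redaction:

const winston = require('winston');
const { combine, timestamp, json } = winston.format;

const sensitiveFields = ['name', 'email', 'password'];

// A custom format that replaces known fields with a placeholder.
const redact = winston.format((info) => {
  for (const field of sensitiveFields) {
    if (field in info) {
      info[field] = '[Redacted]';
    }
  }
  return info;
});

const logger = winston.createLogger({
  format: combine(redact(), timestamp(), json()),
  transports: [new winston.transports.Console()],
});

logger.info('user profile updated', { name: 'John doe', id: '283487' });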

Don’t rely solely on the redaction techniques demonstrated above to prevent
sensitive data from making it into your logs; use them as an extra layer of
protection in case something escaped your attention in the review process.
In the above example, you would ideally log only the user ID instead of passing
the entire user object at log point.

10. Log for more than troubleshooting purposes

Logging is useful for more than just troubleshooting. It can also be employed
for auditing or profiling purposes or to compute interesting statistics about
user behavior which can be a valuable guide for future product decisions.

Audit logging involves documenting activities within the application that are
significant to enforcing business policies or compliance with regulatory
requirements. Typically, the following types of activities will be logged (see
the sketch after this list):

  • Administrative tasks like creating or deleting users.
  • Authentication attempts (both successful and failed), and when access is
    granted or denied to resources.
  • Data access and modifications (user updates profile, document access, etc).
  • High-risk events like data exports.
  • Updates that impact the entire application.
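
For example, you might route such events through a dedicated child logger so
that audit entries can be filtered separately from operational logs (the
log_type and action properties below are illustrative conventions, not a
standard):

const pino = require('pino');
const logger = pino();

// Every entry from this logger is tagged as an audit event.
const auditLogger = logger.child({ log_type: 'audit' });

auditLogger.info(
  { userId: '283487', action: 'user.delete', targetId: '283490' },
  'administrator deleted a user account',
);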

Keeping track of such details has several benefits, including the following:

  • You can use such logs to prove that your application complies with relevant
    regulations in your industry.
  • You can detect security breaches or other incidents and reconstruct the
    timeline of events leading up to the event.
  • You can use audit logs to prove an event’s validity in legal proceedings.
  • Audit logs are also helpful for keeping the users of your application
    accountable in case of disputes.

You can also use logging to profile certain aspects of your application. Since
each log entry contains a timestamp, you can log at the start and end of an
operation to generate performance metrics that may help you identify which
parts of the application could do with some optimization.

Winston provides some basic profiling tools that you can take advantage of:

const winston = require('winston');
const { combine, timestamp, json, errors } = winston.format;
const logger = winston.createLogger({
  level: 'info',
  format: combine(errors({ stack: true }), timestamp(), json()),
  transports: [new winston.transports.Console()],
});

const profiler = logger.startTimer();

setTimeout(() => {
  // End the timer and log the duration
  profiler.done({ message: 'Timer completed' });
}, 1000);

This produces a durationMs property in the log entry that represents the
timer’s duration in milliseconds.

Output

{"durationMs":1002,"level":"info","message":"Timer completed"}

11. Centralize your logs in one place

When your application is deployed to production, it will start generating logs
immediately, and these are usually stored on the host server. If you only have
to manage one or two servers, logging into each one to view and analyze the
logs might be practical enough. But when you start scaling your application
across dozens of servers, such a practice becomes tedious and ineffective.

The solution is to aggregate all your log data and consolidate it in one
place. There are many solutions for collecting and centralizing logs. Some
are open source solutions that can be deployed in-house within your existing IT
infrastructure, while others are SaaS cloud logging providers like
Logtail that let you aggregate and
analyze your logs within minutes. The latter is a great option if you don’t
have the engineering resources to deal with the operational complexity of
managing an on-premise log management infrastructure, or if it is not
cost-efficient to do so.

Centralizing your application logs provides several benefits, including the
following:

  • It provides you with an in-depth view of all your application logs regardless
    of how many instances are active.
  • You can create personalized alerts based on metrics you define on the logs.
    For example, you can alert your team each time a FATAL error occurs on a
    server or if a specific ERROR event is repeated multiple times in quick
    succession.
  • It’s easy to visualize and share insights derived from your logs with other
    stakeholders in the organization.
  • Your logs remain accessible even when the origin servers cannot be reached
    temporarily.
  • It can help with long-term storage and enforcing log retention policies.

When choosing a cloud logging solution, you should assess it against the
following requirements to make sure it’s a good fit for your product:

  • A free trial and easy setup.
  • Integration with other services in your stack.
  • Billing rate based on log volume.
  • Whether the product can handle your expected daily log volume.
  • Support for long-term log archiving for compliance and auditing.
  • Low latency for log ingestion and live monitoring.
  • Ability to create customizable charts and dashboards from your log data.
  • Fast search and filter performance at full capacity.
  • Secure and compliant data transmission and storage.
  • Highly customizable alerting features.

Screenshot of livetail section

Logtail offers all of the above features and more. You can effortlessly collect,
filter, and correlate log data from several sources, and analyze it all in one
place. You also get built-in collaboration tools for sharing insights with
teammates or drawing attention to a specific issue. To get started with Logtail,
sign up for a free account and
read the docs to examine the options for
integrating it into your application.

Although frameworks like Winston and Pino can transmit your logs directly to
Logtail (see our
JavaScript setup guide), it
is better to continue outputting them to the console and use a log routing tool
(see tip #4) like
Vector to forward them to the
service. Once your logs start coming through, you will see them appear in the
Live Tail section of the dashboard.

Conclusion and next steps

In this article, we’ve covered several practices that should help you write more
useful logs in your Node.js applications. At this point, you should feel more
comfortable with the idea of logging and how to design an effective logging
strategy in a Node.js app. Your applications will be significantly more robust
and production-ready as a result.

Thanks for reading, and happy logging!



This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.