Docker is a containerization platform that simplifies the packaging and execution of applications. Containers run as isolated processes with their own filesystem but share their host’s kernel. Docker has risen to prominence as a way of implementing reproducible development environments and distributed deployment architectures.
Node.js is the leading JavaScript runtime for backend development. Successfully launching a Node.js web service requires you to have an environment with the runtime installed, your application code available, and a mechanism that handles automatic restarts in case of a crash.
In this guide we’ll use Docker to containerize a simple Node.js app created with the popular Express web framework. Docker is a good way to deploy Node-based systems as it produces a consistent environment that includes everything you need to run your service. The Docker daemon has integrated support for restarting failed containers when their foreground process crashes, solving one of the challenges of Node.js deployments.
Creating Your Node Project
We’ll skip the details of implementing your application. Create a directory for your project and add some server code inside it. Here’s a basic app.js that listens on port 8080 and responds to every request with a hardcoded response:
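A minimal version looks something like this (the exact response text is arbitrary – any hardcoded string will do):

    const express = require("express");

    const app = express();

    // Respond to every request with the same hardcoded message
    app.get("*", (req, res) => res.send("It works!"));

    // Listen on the port used throughout this guide
    app.listen(8080, () => console.log("Listening on 8080"));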
Add Express to your project using npm:
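    npm install express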
Start your app to test it works:
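    node app.js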
You should be able to visit localhost:8080 in your browser to see the sample response.
Writing a Dockerfile
Now it’s time to start Dockerizing your project. First you need an image for your application. Images encapsulate your code and dependencies as a single package that you use to start container instances. The instructions in your Dockerfile define the state of your containers’ initial filesystem.
Here’s a Dockerfile that works for the sample application:
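The node:16 base tag used here is just one reasonable choice – any currently supported Node.js release tag works:

    # Start from the official Node.js image
    FROM node:16

    # All subsequent paths are relative to /app inside the image
    WORKDIR /app

    # Copy the npm manifest and lockfile first so the dependency layer can be cached
    COPY package.json .
    COPY package-lock.json .

    # Install the project's dependencies (Express) into the container's filesystem
    RUN npm ci

    # Copy the application code in after the dependency install to preserve caching
    COPY app.js .

    # The base image's entrypoint runs this with the node process, giving "node app.js"
    CMD ["app.js"]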
This Dockerfile selects the official Node.js Docker image as its base via the FROM statement. The image inherits everything in the base, then adds additional content via the following instructions.
The working directory is set to /app by the WORKDIR line. The following COPY statements will deposit files into the /app directory inside the container image.
Installing Dependencies
The next stage is to copy in your package.json and package-lock.json, then run npm ci. This installs your project’s npm dependencies – Express in this case – within the container’s filesystem.
Don’t use COPY node_modules/ . to copy the existing node_modules folder in your project directory – this would prevent you from reusing the Dockerfile in other build environments. Dockerfiles should let you create consistent builds with just the content of your source control repository. If a file or folder’s in your .gitignore, it shouldn’t be referenced in a Dockerfile COPY instruction.
Copying Application Code
After npm ci has run, your app’s code is copied into the image. The placement of this COPY instruction after the RUN, separating it from the previous copies, is deliberate. Each instruction creates a new layer in your image; Docker’s build process caches each layer to accelerate subsequent builds. Once the content of one layer changes, the cache of all following layers will be invalidated.
This is why application code should be copied in after npm ci has been executed. The code will usually change much more frequently than the content of your npm lockfile. Image rebuilds that only involve code changes will effectively skip the RUN npm ci stage (and all earlier stages), drastically accelerating the process when you’ve got a lot of dependencies.
Setting the Image’s Command
The final Dockerfile stage uses the CMD instruction to run your app automatically when the container starts. This works because the Node.js base image is configured to use the node process as its entrypoint. The CMD is appended to the inherited entrypoint, resulting in node app.js being run as the foreground process for your new image.
Building Your Image
Next you need to build your image:
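    docker build -t node-app:latest .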
Docker will take the Dockerfile in your working directory, run the instructions within it, and tag the resulting image as node-app:latest. The final . (period) specifies your working directory as the image build context. This determines the paths that can be referenced by the COPY instructions in your Dockerfile.
Build Optimization
One way to improve build performance is to add a .dockerignore file to the root of your project. Give the file the following content:
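    node_modules/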
This file defines paths in your working directory that will not be included in the build context. You won’t be able to reference them inside your Dockerfile. In the case of node_modules, this directory’s content is irrelevant to the build as we’re installing the dependencies anew via the RUN npm ci instruction. Specifically excluding the node_modules already present in your working directory saves having to copy all those files into Docker’s temporary build context location. This increases efficiency and reduces the time spent preparing the build.
Starting a Container
At this point you’re ready to run your application using Docker:
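    docker run -d -p 8080:8080 --name node-app --restart on-failure node-app:latest

(node-app is an arbitrary container name here – you can choose any identifier you like.)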
The docker run command is used to start a new container instance from a specified image. A few extra flags are added to properly configure the container for the intended use case:
-d – Detaches your shell from the container’s foreground process, effectively running it as a background server.
-p – Binds port 8080 on your host to port 8080 inside the container (which our Express sample app was configured to listen on). This means traffic to localhost:8080 will be passed through to the corresponding container port. You can change the host port to a different value by modifying the first part of the bind definition, such as 8100:8080 to access your container on localhost:8100.
--name – Assigns the container a friendly name which you can use to reference it in other Docker CLI commands.
--restart – Selects the restart policy to apply to the container. The on-failure setting means Docker will automatically restart the container if it exits with a failure code because your application crashed.
The image built in the previous step is referenced as the final argument to the docker run command. The container ID will be emitted to your terminal window; you should be able to access your Node.js app by visiting localhost:8080 again. This time the server’s running inside the Docker container, instead of using the node process installed on your host.
Summary
Docker helps you deploy Node.js web services by containerizing the entire application environment. You can start a container from your image with a single docker run command on any host with Docker installed. This removes the complexity of maintaining Node.js versions, installing npm modules, and monitoring for situations where your application process needs to be restarted.
When you’ve made code changes and want to launch your update, rebuild your Docker image, remove your old container with docker rm, and repeat your docker run command to start a replacement based on the new image.
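With the node-app names used earlier, the full update sequence looks like this:

    # Rebuild the image with your latest code
    docker build -t node-app:latest .

    # Stop and remove the old container
    docker stop node-app
    docker rm node-app

    # Start a replacement container from the updated image
    docker run -d -p 8080:8080 --name node-app --restart on-failure node-app:latest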
You might want a slightly different routine in production. Although you can use a regular Docker installation with docker run, this tends to be unwieldy for all but the simplest applications. It’s more common to use a tool like Docker Compose or Kubernetes to define container configuration in a file that can be versioned inside your repository.
These mechanisms do away with the need to repeat your docker run flags each time you start a new container. They also facilitate container replication to scale your service and provide redundancy. If you’re deploying to a remote host, you’ll also need to push your image to a Docker registry so it can be “pulled” from your production machine.
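As an illustration, a minimal docker-compose.yml capturing the same settings as the docker run command above might look like this:

    services:
      app:
        image: node-app:latest
        ports:
          - "8080:8080"
        restart: on-failure

Running docker compose up -d then starts the container with all of those settings applied, without retyping any flags.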
Another production-specific consideration is how you’ll route traffic to your containers. Port binds can suffice to begin with but eventually you’ll reach a situation where you want multiple containers on one host, each listening on the same port. In this case you can deploy a reverse proxy to route traffic to individual container ports based on request characteristics such as domain name and headers.
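As a sketch of this approach, an NGINX configuration along these lines routes two hostnames to two containers published on different host ports (the domains and ports here are hypothetical):

    # Route app1.example.com to the container published on host port 8080
    server {
        listen 80;
        server_name app1.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

    # Route app2.example.com to a second container published on host port 8081
    server {
        listen 80;
        server_name app2.example.com;

        location / {
            proxy_pass http://127.0.0.1:8081;
        }
    }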