Using Docker Compose to Run NestJS Applications with Redis and Postgres

Syed Muhammed Hassan Ali · Published in Stackademic · 9 min read · Oct 15, 2023

Introduction

Developing web applications, especially open-source ones, can be complex, particularly when configuring a local development environment that closely resembles production. Coordinating multiple technologies and databases is challenging, but Docker Compose simplifies the process, enabling a consistent and easily maintainable local setup. This blog delves into using Docker Compose to build a local development environment for a web application constructed with NestJS, Redis, and Postgres, showcasing the crucial role Docker plays in setting up open-source applications.

Why Docker Compose?

Before diving into the details, let’s briefly discuss why Docker Compose is a valuable tool for local development. Docker Compose is a container orchestration tool that allows you to define and run multi-container Docker applications. It lets you define the services, networks, and volumes needed for your application in a single, declarative configuration file. This means you can encapsulate your entire development environment in code, making it easy to share and reproduce across different machines.

If you are looking for a more powerful and scalable container orchestration tool, I recommend using Kubernetes. However, if you are just getting started with containerization or you need a simple and easy-to-use tool, Docker Compose is a good option.

The Technology Stack

For our local development environment, we’ll be working with the following technologies:

  • NestJS: A powerful Node.js framework for building server-side applications, using TypeScript for strong typing and maintainability.
  • Redis: An in-memory data store often used for caching and session management in web applications.
  • PostgreSQL: A relational database management system that’s excellent for storing structured data.

Getting Started

For this example, I will be using a template codebase that includes a pre-configured setup with TypeORM and PostgreSQL, an Authentication module with JWT, and a Swagger API client.

Prerequisites

Docker Desktop must be installed; it is available for all major operating systems.

Step 1: Create a Dockerfile

A Dockerfile is essential for creating self-contained, consistent, and reproducible environments for applications within Docker containers. It specifies the exact steps to set up an application’s runtime environment, ensuring that it runs consistently across different systems and platforms. This level of control and predictability is crucial for efficient development, testing, and deployment in containerized environments.

Dockerfile

# Use a Node.js Alpine-based image for the development stage
FROM node:18-alpine

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy application dependency manifests to the container image
COPY package*.json ./

# Install application dependencies using `npm install`
RUN npm install

# Copy the rest of the application code to the container
COPY . .

# Build the application (if needed)
RUN npm run build

# Define the command to start the application in development mode
CMD ["npm", "run", "start:dev"]
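
Since `COPY . .` sends the whole project directory to the Docker daemon, it's worth adding a .dockerignore file alongside the Dockerfile (a common convention, not shown in the original template) so node_modules, build output, and secrets stay out of the build context:

```
node_modules
dist
.git
.env
```

This keeps builds fast and prevents the host's node_modules (possibly built for a different OS) from leaking into the image before `npm install` runs.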

Step 2: Create a Docker-Compose file

The docker-compose.yml file lays out the instructions for running images as containers, which is exactly what we need to spin up a local development environment.

Let’s add the instructions in the docker-compose.yml file to spin up a local development environment (starting with just the NestJS app).

docker-compose.yml

services:
  nest-backend:
    container_name: nest-app
    image: nest-api
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 5000:5000
    networks:
      - backend_network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped

networks:
  backend_network:
    driver: bridge

Let’s highlight a few important parts of this file:

  • Defines a service named nest-backend to run a Docker container for a NestJS application.
  • Specifies the container name as nest-app and the image name as nest-api.
  • Uses the Dockerfile in the current build context to build the image.
  • Maps port 5000 on the host to port 5000 in the container for accessing the application.
  • Connects the container to a network named backend_network for communication.
  • Mounts the current directory into the container for live code changes, while an anonymous volume keeps the container's installed node_modules from being overwritten by the host's.
  • Configures the container to restart automatically unless explicitly stopped.

Step 3: Add Postgres and PgAdmin service in docker-compose.yaml

services:
  nest-backend:
    container_name: nest-app
    image: nest-api
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 5000:5000
    environment:
      - DB_TYPE=postgres
      - DB_SCHEMA=public
      - PG_HOST=postgres
      - PG_USER=postgres
      - PG_PASSWORD=postgres
      - PG_DB=postgres
      - PG_PORT=5432
    networks:
      - backend_network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
    depends_on:
      - postgres

  postgres:
    container_name: postgres-db
    image: postgres:12
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    networks:
      - backend_network
    restart: unless-stopped
    volumes:
      - postgres_data:/var/lib/postgresql/data

  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@pgadmin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    networks:
      - backend_network
    ports:
      - '5050:80'
    depends_on:
      - postgres

networks:
  backend_network:
    driver: bridge

volumes:
  postgres_data: {}

  • Defines two additional services: postgres for a PostgreSQL database and pgadmin for the pgAdmin tool.
  • Specifies container names and images for each service; only nest-backend is built from a Dockerfile, and it now depends on the postgres container.
  • Sets environment variables on nest-backend to configure its connection to the PostgreSQL database.
  • Connects all services to a common network named backend_network for inter-container communication.
  • Defines a named volume, postgres_data, to persist the PostgreSQL data directory across container restarts.
  • Includes pgAdmin for managing PostgreSQL, exposed on host port 5050.
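
One optional refinement, not part of the original setup: depends_on only waits for the postgres container to start, not for the database to accept connections. A healthcheck plus the long-form depends_on condition closes that gap (a sketch; the interval values are illustrative):

```yaml
services:
  postgres:
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 5s
      timeout: 3s
      retries: 5

  nest-backend:
    depends_on:
      postgres:
        condition: service_healthy
```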

The ormConfig.ts file uses all the environment variables defined in the docker-compose.yaml file.

src/config/ormConfig.ts

import { configDotenv } from 'dotenv';
import { User } from 'src/entities/user.entity';
import { TypeOrmModuleOptions } from '@nestjs/typeorm';
import { Topic } from 'src/entities/topic.entity';
import { Comment } from 'src/entities/comment.entity';

configDotenv();

export const PostgreSqlDataSource: TypeOrmModuleOptions = {
  type: process.env.DB_TYPE as 'postgres',
  host: process.env.PG_HOST,
  port: parseInt(process.env.PG_PORT),
  username: process.env.PG_USER,
  password: process.env.PG_PASSWORD,
  database: process.env.PG_DB,
  schema: process.env.DB_SCHEMA,
  entities: [User, Topic, Comment],
  autoLoadEntities: true,
  synchronize: true,
  logging: true,
};
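
One caveat with this config: if PG_PORT is unset, parseInt(process.env.PG_PORT) yields NaN and the connection fails with a confusing error. A small helper pair (hypothetical, not part of the template) can fail fast with a clear message instead:

```typescript
// Hypothetical helpers: read env vars with explicit fallbacks, failing fast
// with a clear message instead of letting undefined/NaN reach TypeORM.
function env(name: string, fallback?: string): string {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

function envInt(name: string, fallback?: number): number {
  const parsed = parseInt(env(name, fallback?.toString()), 10);
  if (Number.isNaN(parsed)) {
    throw new Error(`Environment variable ${name} is not a number`);
  }
  return parsed;
}
```

With these, the data source could use host: env('PG_HOST', 'localhost') and port: envInt('PG_PORT', 5432), so runs outside Docker still get sensible defaults.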

Step 4: Setup CacheModule configurations and add Redis service

Here’s a CacheModule implementation applied globally to a NestJS app in the app.module.ts file:

src/app.module.ts

import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { ConfigModule } from '@nestjs/config';
import { AuthModule } from './auth/auth.module';
import { UserModule } from './user/user.module';
import { PostgreSqlDataSource } from './config/ormConfig';
import { TypeOrmModule } from '@nestjs/typeorm';
import { CacheModule } from '@nestjs/cache-manager';
import * as redisStore from 'cache-manager-redis-store';

@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true,
      envFilePath: `.env`,
    }),
    CacheModule.register({
      isGlobal: true,
      store: redisStore,
      host: 'redis',
      port: 6379,
    }),
    TypeOrmModule.forRoot(PostgreSqlDataSource),
    AuthModule,
    UserModule,
  ],
  controllers: [AppController],
  providers: [AppService],
  exports: [ConfigModule],
})
export class AppModule {}

  • CacheModule.register: This method registers and configures the caching module, indicating the choice of Redis as the caching store.
  • isGlobal: true: By setting isGlobal to true, the cache module becomes accessible application-wide, enabling any module within the application to access and utilize the caching functionality.
  • store: redisStore: The configuration specifies the use of Redis as the caching store.
  • host: 'redis' and port: 6379: These properties point at the Redis server. Inside the shared Docker network, the service name redis resolves as the hostname, and 6379 is Redis's default port.
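
The pattern this module enables is cache-aside: check the cache first, and only hit the slower backend on a miss. A minimal, framework-free sketch of that pattern (using an in-memory Map as a stand-in for the Redis-backed store):

```typescript
type Fetcher<T> = (key: string) => Promise<T>;

// Minimal cache-aside: the Map stands in for the Redis store that
// cache-manager provides inside the NestJS app.
class CacheAside<T> {
  private store = new Map<string, T>();

  async get(key: string, fetchFn: Fetcher<T>): Promise<T> {
    const cached = this.store.get(key);
    if (cached !== undefined) {
      return cached; // cache hit: skip the upstream call entirely
    }
    const fresh = await fetchFn(key); // cache miss: go upstream once
    this.store.set(key, fresh);
    return fresh;
  }
}
```

The user.service.ts code later in this post applies exactly this flow, with cacheService.get and cacheService.set wrapped around an HTTP call.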

In terms of tweaking the docker-compose file to set up a local Redis server, you can make the following changes:

services:
  nest-backend:
    # ... other Nest container configs
    depends_on:
      - postgres
      - redis

  postgres:
    # ... other postgres container configs

  pgadmin:
    # ... other pgadmin container configs

  redis:
    container_name: redis-db
    image: redis
    environment:
      - REDIS_PORT=6379
    ports:
      - 6379:6379
    networks:
      - backend_network
    restart: unless-stopped
    volumes:
      - redis:/data

networks:
  backend_network:
    driver: bridge

volumes:
  postgres_data: {}
  redis:
    driver: local

  • Inside the nest-backend service, redis is added under depends_on. This ensures the Redis container starts before the API container.
  • A new service named redis uses the official Redis Docker image. (The official image always listens on port 6379; the REDIS_PORT variable is informational only.)
  • The volumes configuration lets cached data persist between container restarts.
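
One caveat (my addition, not part of the original setup): the stock redis image persists data via periodic RDB snapshots, so entries written shortly before a restart can still be lost. If durable persistence matters, append-only mode can be enabled through the container command:

```yaml
services:
  redis:
    # Enable AOF persistence so writes reliably survive container restarts
    command: ['redis-server', '--appendonly', 'yes']
```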

Step 5: Build the Dockerfile

Please note that in my setup using TypeScript version 5.1.3, enabling hot reload requires making specific configurations in the tsconfig.json file.

tsconfig.json

"watchOptions": {
  // Use polling instead of native file system events, since host file
  // events are not propagated into the container through the bind mount
  "watchFile": "priorityPollingInterval",
  "watchDirectory": "dynamicPriorityPolling",
  // Poll files for updates more frequently when they change often
  "fallbackPolling": "dynamicPriority",
  // Don't coalesce watch notifications
  "synchronousWatchDirectory": true,
  // Reduce the set of watched files by excluding these directories
  "excludeDirectories": ["**/node_modules", "dist"]
}

The complete docker-compose is:

docker-compose.yaml

services:
  nest-backend:
    container_name: nest-app
    image: nest-api
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 5000:5000
    environment:
      - DB_TYPE=postgres
      - DB_SCHEMA=public
      - PG_HOST=postgres
      - PG_USER=postgres
      - PG_PASSWORD=postgres
      - PG_DB=postgres
      - PG_PORT=5432
    networks:
      - backend_network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
    depends_on:
      - postgres
      - redis

  redis:
    container_name: redis-db
    image: redis
    environment:
      - REDIS_PORT=6379
    ports:
      - 6379:6379
    networks:
      - backend_network
    restart: unless-stopped
    volumes:
      - redis:/data

  postgres:
    container_name: postgres-db
    image: postgres:12
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    networks:
      - backend_network
    restart: unless-stopped
    volumes:
      - postgres_data:/var/lib/postgresql/data

  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@pgadmin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    networks:
      - backend_network
    ports:
      - '5050:80'
    depends_on:
      - postgres

networks:
  backend_network:
    driver: bridge

volumes:
  postgres_data: {}
  redis:
    driver: local

Now use docker-compose build --no-cache to ensure a completely clean and fresh build of your Docker containers.

A complete rebuild without using cached layers

After the build is complete, you can use docker-compose up to start your containers using the newly built images.

Docker containers are up and running
Docker Desktop shows all the running containers

Step 6: Access each service on the mapped port

On localhost:5000, we can access the NestJS server.

Swagger allows you to describe the structure of your APIs

Validating with Auth APIs:

Registering a user

Now we can find the user using pgAdmin, accessible at localhost:5050. Log in with the email and password defined in the docker-compose file.

User found on get query

For the Redis demo, I've implemented an endpoint in the user.controller.ts file that calls an external API, with caching in place to store and retrieve the response.

src/user/user.controller.ts

@Get('/:id')
async getPokemon(@Param('id') id: number) {
  try {
    const data = await this.userService.getPokemon(id);
    return { success: true, data: data };
  } catch (err) {
    return { success: false, message: err.message };
  }
}

src/user/user.service.ts

async getPokemon(id: number): Promise<string> {
  // Check whether the data is already in the cache
  const cachedData = await this.cacheService.get<{ name: string }>(
    id.toString(),
  );
  if (cachedData) {
    console.log(`Getting data from cache!`);
    return `${cachedData.name}`;
  }

  // If not, call the external API and populate the cache
  const { data } = await this.httpService.axiosRef.get(
    `https://pokeapi.co/api/v2/pokemon/${id}`,
  );
  await this.cacheService.set(id.toString(), data);
  return `${data.name}`;
}

Response without caching
Response with caching

With the Redis store in place, the API response time dropped from 653 ms to 14 ms.
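
To reproduce this kind of comparison yourself, timing each request is enough. A small helper sketch (illustrative only; the 653 ms and 14 ms figures above came from the API client tooling, not this code):

```typescript
// Measure how long an async operation takes, in milliseconds.
async function timeMs<T>(
  fn: () => Promise<T>,
): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await fn();
  return { result, ms: Date.now() - start };
}
```

Calling timeMs around the first (uncached) and second (cached) request to the same endpoint makes the difference directly visible.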

Conclusion

In this blog, we’ve explored how to set up a local development environment for a web application built with NestJS, Redis, and Postgres using Docker Compose. This approach simplifies the development workflow, improves consistency, and enhances portability. It’s a powerful tool for any developer looking to create a reliable, reproducible, and efficient local development environment.

By embracing Docker Compose, you can streamline your development process and focus on building high-quality applications.

Access the complete source code at https://github.com/Syed007Hassan/NestJs-Template.
