A comprehensive guide to dockerizing and deploying both React and Node.js applications on the same AWS EC2 instance, covering the complete workflow from local development to production deployment.


Recently, I had to deploy a MERN (MongoDB, Express.js, React, Node.js) application on an AWS EC2 instance. Despite putting in hours of research, I found a notable lack of comprehensive resources guiding the deployment of both the React application build and the Node.js server on the same instance with Dockerization.
In this blog, I will share the step-by-step process, along with the struggles and solutions I encountered, aiming to provide a thorough guide for fellow developers navigating the complexities of dockerizing and deploying a MERN stack application on AWS.
Let's dive in and explore the right way!
We need an Amazon EC2 instance running Docker, operating two containers: one serving the React build and one running the Node.js server.
— Architecture Overview
I'm assuming you already have a MERN application; if not, you can use this sample application: MERN-DEP.
This application is a simple MERN application that fetches the name of the user from a database and shows it on the screen. It was built just for demonstration purposes, but you can proceed with your own application.
Once your application is ready, test it on your localhost.
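If you're using the sample application, a rough way to run it locally (assuming the Backend/Frontend folder layout used throughout this guide, with a Vite-based frontend and index.js as the backend entry point) is:
# terminal 1: start the Node.js API
cd Backend
npm install
node index.js
# terminal 2: start the React dev server
cd Frontend
npm install
npm run dev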
— Sample Application HomePage
Let's get the backend ready for deployment.
If you're using the sample application, add the MongoDB URL in the dbPopulator.js file and run it to load sample data in your database.
— Database Populator
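Assuming dbPopulator.js lives in the Backend folder, running it is just a matter of executing it with Node once the MongoDB URL is in place:
cd Backend
node dbPopulator.js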
— Database Population Result
Add a Dockerfile and .dockerignore to your Backend folder:
FROM node:17-alpine
WORKDIR /app
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
— Backend Dockerfile
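The .dockerignore keeps local dependencies and secrets out of the build context and the image; a minimal version could look like this (adjust to your project):
node_modules
npm-debug.log
.env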
Build the Docker Image:
docker build -t dockerhub_username/backend_image_name .
— Docker Build
Login to your DockerHub account via CLI:
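If you're not logged in yet, run docker login with your own username (the same dockerhub_username placeholder used above); you'll be prompted for your password or an access token:
docker login -u dockerhub_username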
— Docker Login
Push the Image to DockerHub:
docker push dockerhub_username/backend_image_name
— Docker Push
Verify your Docker Image on DockerHub:
— DockerHub Verification
Let's get the Frontend ready for deployment.
Create a file named nginx.conf inside the Frontend folder:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    location /api {
        proxy_pass http://<InstanceIP>:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
This config serves the React build from /usr/share/nginx/html and forwards any request under /api to the Node.js server on port 3000.
Run the following command to create a build of your application:
npm run build
This will create a dist folder containing the built code.
— Build Output
Create a Dockerfile inside the Frontend folder:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY ./dist/ /usr/share/nginx/html
Add your .env file to .dockerignore:
— Frontend Dockerfile
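As with the backend, a minimal frontend .dockerignore just needs to exclude local clutter and environment files, for example:
node_modules
.env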
Build and push the Frontend Docker Image:
docker build -t dockerhub_username/frontend_image_name .
— Frontend Build
docker push dockerhub_username/frontend_image_name
— Frontend Push
Verify your Docker Image on DockerHub:
— DockerHub Frontend
Now, we are ready to create and configure the EC2 instance!
Login to your AWS Account.
Create a t2.micro instance using an Ubuntu image, and create a new key pair or choose an existing one.
— EC2 Instance Creation
Add inbound rules for ports 80, 443 & 3000 in the security groups:
Instance → Security → Security Group → Edit Inbound Rules
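If you prefer the AWS CLI over the console, the equivalent rules can be added like this (the security group ID below is a placeholder for your instance's group):
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 3000 --cidr 0.0.0.0/0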
— Security Group Rules
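The instance also needs Docker and Docker Compose before it can run the containers. A minimal setup on a fresh Ubuntu instance looks roughly like this (package names assume Ubuntu's default repositories):
# connect to the instance
ssh -i <KeyPair file path> ubuntu@<InstanceIP>
# install Docker and Docker Compose
sudo apt-get update
sudo apt-get install -y docker.io docker-compose
# let the ubuntu user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker ubuntu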
Create a docker-compose.yml file on the instance:
version: "3.8"
services:
  backend:
    image: <dockerhub_username>/<backend_image_name>
    container_name: mernbackend
    ports:
      - "3000:3000"
    environment:
      - MONGODB_URI=<MongoDB_URI>
      - PORT=3000
  frontend:
    image: <dockerhub_username>/<frontend_image_name>
    container_name: mernfrontend
    ports:
      - "80:80"
    environment:
      - VITE_SERVER_URL=http://<InstanceIP>
Once you have the docker-compose.yml set up, run your application:
docker-compose up -d
Enter your Instance's Public IP to see the hosted application:
— Running Application
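If the page doesn't load, a quick check from the instance confirms both containers are up:
docker ps   # should list mernfrontend and mernbackend with ports 80 and 3000 mapped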
To shut it down:
docker-compose down
You can create bash scripts to automate the complete build and deployment process, or use CI/CD tools. Here's a simple bash script to update your server.
For demonstration purposes, I'm changing the text on the home screen from "Hello Name" to "Greetings Name".
Create a bash script in the root directory of your project named updateServers.sh:
#!/bin/bash
# UPDATE FRONTEND
cd ./Frontend
npm run build
docker build -t dockerhub_username/frontend_image_name .
docker push dockerhub_username/frontend_image_name
# UPDATE BACKEND
cd ../Backend
docker build -t dockerhub_username/backend_image_name .
docker push dockerhub_username/backend_image_name
# UPDATE SERVERS
cd ..
ssh -i <KeyPair file path> ubuntu@<Ip Address> ./updateServer.sh
# SAMPLE COMMAND
# ssh -i ./ayroids.pem ubuntu@ec2-13-200-251-36.ap-south-1.compute.amazonaws.com ./updateServer.sh
On the EC2 Instance, create another bash script named updateServer.sh:
#!/bin/bash
docker pull dockerhub_username/frontend_image_name
docker pull dockerhub_username/backend_image_name
docker-compose down
docker-compose up -d
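If you drafted docker-compose.yml and updateServer.sh on your local machine instead of directly on the instance, one way to copy them over is scp (reusing the same key pair and the default ubuntu user):
scp -i <KeyPair file path> docker-compose.yml updateServer.sh ubuntu@<InstanceIP>:~/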
Change the permissions of both scripts to make them executable:
chmod +x updateServers.sh   # on your local machine
chmod +x updateServer.sh    # on the EC2 instance
Run the deployment trigger script on your local machine:
./updateServers.sh
— Script Execution
— Script Output
Visit the IP address of the Instance to verify updates:
— Updated Application
Boom! That's a simplified and efficient recipe for dockerizing and deploying MERN applications.
Thank you for joining me on this journey. I'd love to hear your thoughts and insights, so feel free to share your comments. Don't forget to hit the follow button to stay tuned for more lessons learned and experiences in the development world.
Happy learning!