Mastering AWS: Building Microservices and CI/CD Pipelines for Scalable Applications

Understanding Microservices and CI/CD: The Backbone of Modern Software Development

First, let's understand what Microservices and CI/CD are. Microservices are an architectural style that structures an application as a collection of small, autonomous services modelled around a business domain. Each microservice is self-contained and implements a single business capability. They communicate with each other through well-defined APIs, often using HTTP/REST or messaging queues. This approach allows for independent development, deployment, and scaling of each service, leading to greater flexibility and resilience.

CI/CD stands for Continuous Integration and Continuous Deployment (or Continuous Delivery). It is a set of practices and tools designed to improve the process of software development, integration, and deployment.

  • Continuous Integration (CI) involves automatically integrating code changes from multiple contributors into a shared repository several times a day. Each integration is verified by an automated build and automated tests to detect integration errors as quickly as possible.

  • Continuous Deployment (CD) extends CI by automatically deploying all code changes to a production environment after passing the automated tests. Continuous Delivery, on the other hand, ensures that code changes are automatically prepared for a release to production but may require manual approval to deploy.

Together, Microservices and CI/CD enable rapid, reliable, and scalable software development and deployment, allowing teams to deliver new features and updates more frequently and with higher quality.

This project is divided into eight phases, each containing multiple tasks. This method helps organize and structure the project for better documentation.

| Phase | Detail |
| --- | --- |
| 1 | Analyze the design of the monolithic application and test the application. |
| 2 | Create a development environment on AWS Cloud9, and check the monolithic source code into CodeCommit. |
| 3 | Break the monolithic design into microservices, and launch test Docker containers. |
| 4 | Create ECR repositories to store Docker images. Create an ECS cluster, ECS task definitions, and CodeDeploy application specification files. |
| 5 | Create target groups and an Application Load Balancer that routes web traffic to them. |
| 6 | Create ECS services. |
| 7 | Configure applications and deployment groups in CodeDeploy, and create two CI/CD pipelines by using CodePipeline. |
| 8 | Modify your microservices and scale capacity, and use the pipelines that you created to deploy iterative improvements to production by using a blue/green deployment strategy. |

I have already provisioned the required infrastructure: an Amazon EC2 instance hosting the monolithic Coffee Suppliers application, an Amazon RDS DB instance, and a VPC with its components (subnets, an internet gateway, and security groups). Let's start with the project!

Phase 1: Analyzing the infrastructure of the monolithic application

In this phase, we will analyze the current application infrastructure and then test the web application.

  • Verify that the monolithic web application is accessible from the internet.

  • Navigate to the Amazon EC2 console.

  • Copy the Public IPv4 address of the MonolithicAppServer instance, and load it in a new browser tab.

Test the monolithic web application

Now we'll add data to the web application, test the functionality, and observe the different URL paths that are used to display the different pages. These URL paths are important to understand for when we divide this application into microservices later.

  • Choose the List of suppliers and notice the URL path includes /suppliers.

  • Let's add a new supplier, notice that the URL path includes /supplier-add.

  • Fill in all of the fields to create a supplier entry.

    After adding supplier data, the /suppliers page lists the new entry.

  • Now let's also perform the Update operation to edit an entry. On the page where we edit a supplier entry, notice that the URL path now includes supplier-update/1.

  • Modify the record in some way and save the change. Notice that the change was saved in the record.

  • Updated supplier information

Analyze how the monolithic application runs

  • Use EC2 Instance Connect to connect to the MonolithicAppServer instance.

  • Analyze how the application is running. In the terminal session, run the following command:

      sudo lsof -i :80
    

As we can see, the node daemon is listening on port 80 and serving HTTP.

  • Next, run the following command:
ps -ef | head -1; ps -ef | grep node

We can also see that the root user on the EC2 instance is running the node process, and that PID 450 matches the PID from the previous command's output.

Let's also analyze the structure of the application

The index.js file exists. It contains the base application logic.

Connect a MySQL client to the RDS database that the node application stores data in. First, find and copy the endpoint of the RDS database. To verify that the database can be reached from the EC2 instance on the standard MySQL port, use the nmap -Pn command with the RDS database endpoint that we copied.
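For example, one way to run the check (a sketch, assuming the default MySQL port 3306; install nmap first if the instance doesn't have it):

# Check that the RDS endpoint answers on the standard MySQL port
nmap -Pn -p 3306 supplierdb.cduiwgaq89ef.us-east-1.rds.amazonaws.com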

Now using the MYSQL client we'll connect to the RDS database:

mysql -h supplierdb.cduiwgaq89ef.us-east-1.rds.amazonaws.com -u admin -p

Let's now observe the data in the database. From the mysql> prompt, run the appropriate SQL commands to see that a database named COFFEE contains a table named suppliers. This table contains the supplier entry or entries that we added earlier when we tested the web application.
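For example, from the mysql> prompt (a minimal sketch based on the database and table names above):

SHOW DATABASES;           -- confirm that the COFFEE database exists
USE COFFEE;
SHOW TABLES;              -- confirm that the suppliers table exists
SELECT * FROM suppliers;  -- view the supplier entries added earlier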

Phase 2: Creating a development environment and checking code into a Git repository

In this phase, we will create a development environment by using AWS Cloud9. We will also check our application code into AWS CodeCommit, which is a Git-compatible repository.

Create an AWS Cloud9 instance that is named MicroservicesIDE and then open the IDE. It should run as a new EC2 instance of size t2.micro and run Amazon Linux 2023.

Copy the application code to your IDE

With the help of the scp command, copy the application from the EC2 instance hosting the monolithic application to our Cloud9 development environment. Make sure to change the key pair name and the IP address to your specific ones. You also need to copy the key pair from your local machine to the Cloud9 instance using scp before running the command below.

scp -r -i ~/environment/key.pem ubuntu@10.16.10.241:/home/ubuntu/resources/codebase_partner/* ~/environment/temp/

Create working directories with starter code for the two microservices

In this task, we will create areas in our development environment to support separating the application logic into two different microservices.

Based on the solution requirements of this project, it makes sense to split the monolithic application into two microservices. We will name the microservices customer and employee.

The following table explains the functionality that is needed for each microservice.

| Primary User | Microservice Functionality |
| --- | --- |
| Customer | The customer microservice will provide the functionality that customers (the café franchise location managers who want to buy coffee beans) need. The customers need a read-only view of the contact information for the suppliers to be able to buy coffee beans from them. You can think of the café franchise location managers as customers of the application. |
| Employee | The employee microservice will provide the functionality that employees (the café corporate office employees) need. Employees need to add, modify, and delete suppliers who are listed in the application. Employees are responsible for keeping the listings accurate and up to date. |

The employee microservice will eventually be made available only to employees. We will accomplish this by first encapsulating the employee functionality as a separate microservice (in phases 3 and 4 of the project); later, in phase 8, we will limit who can access the employee microservice.

In the microservices directory, create two new directories that are named customer and employee, and verify that your directory structure matches the expected layout (see the sketch after this list).

  • Place a copy of the source code for the monolithic application in each new directory, and remove the files from the temp directory.

  • Delete the empty temp directory.
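A minimal shell sketch of the steps above, assuming the code was copied into ~/environment/temp in the previous task:

cd ~/environment
# Create a working directory for each microservice
mkdir -p microservices/customer microservices/employee
# Seed each with a copy of the monolithic application code
cp -r temp/* microservices/customer/
cp -r temp/* microservices/employee/
# Remove the temp directory and its remaining files
rm -r temp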

Create a Git repository for the microservices code and push the code to AWS CodeCommit

We now have directories named customer and employee, and each will contain a microservice. The two microservices will replicate the logic in our monolithic application. However, we will also be able to evolve the application functionality and to time the deployment of feature enhancements for these microservices separately.

We will benefit from checking this source code into a Git repository. In this project, we will use CodeCommit as our Git repository (repo).

Create a CodeCommit repository that is named microservices.

To check the unmodified application code into the microservices CodeCommit repository, run the following commands:

cd ~/environment/microservices
git init
git branch -m dev
git add .
git commit -m "two unmodified copies of the application code"
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/microservices
git push -u origin dev

Note that the microservices repository must already exist in AWS CodeCommit before the push will succeed.

Phase 3: Configuring the application as two microservices and test them in Docker containers

In this phase, we will modify the two copies of the monolithic application starter code so that the application functionality is implemented as two separate microservices. Then, for initial testing purposes, we will run the containers on the same EC2 instance that hosts the AWS Cloud9 IDE that we are using. We will use this IDE to build the Docker images and launch the Docker containers.

Adjust the AWS Cloud9 instance security group settings

In this phase of the project, we will use the AWS Cloud9 instance as our test environment. We will run Docker containers on the instance to test the microservices that we create. To be able to access the containers from a browser over the internet, we must open the ports that the containers will run on.

  1. Adjust the security group of the AWS Cloud9 EC2 instance to allow inbound network traffic on TCP ports 8080 and 8081.

Click the Security groups link to navigate to the security group associated with the Cloud9 instance. Select Edit inbound rules to allow internet traffic on ports 8080 and 8081.

Modify the source code of the customer microservice, create the customer microservice Dockerfile, and launch a test container

I have edited the required files in the customer folder so that customers can only have a read-only view. Now, with the application code base and a Dockerfile, which we will create, we will build a Docker image. A Docker image is a template with instructions to create and run a Docker container.

In the customer directory, create a new file named Dockerfile that contains the following code:

FROM node:11-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 8080
CMD ["npm", "run", "start"]

This Dockerfile code specifies that an Alpine Linux distribution with Node.js runtime requirements should be used to create a Docker image. The code also specifies that the container should allow network traffic on TCP port 8080 and that the application should be run and started when a container that was created from the image is launched.

Build an image from the customer Dockerfile. Change to the customer directory and run the following command:

docker build --tag customer .

Verify that the customer-labeled Docker image was created. Run the following Docker command to list the Docker images that our Docker client is aware of.

docker images

Launch a Docker container that runs the customer microservice on port 8080. As part of the command, pass an environment variable to tell the node application the correct location of the database. Replace the <YOUR_RDS_ENDPOINT> with your actual RDS endpoint which you can get from the RDS console.

dbEndpoint=<YOUR_RDS_ENDPOINT>

Using the below command you can launch the Docker container from the newly created Docker image of the customer microservice.

docker run -d --name customer_1 -p 8080:8080 -e APP_DB_HOST="$dbEndpoint" customer

Load the microservice web page in a new browser tab at http://<cloud9-public-ip-address>:8080

APP_DB_HOST is an environment variable that is defined in the customer microservice application code and is required. Instead of hardcoding the RDS database endpoint in the docker run command, for security purposes we generally inject environment variable values through temporary shell variables like dbEndpoint, or through a .env file that contains all of the environment variable values the application requires.
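For example, a hypothetical .env-based variant of the earlier docker run command (the file name and approach are illustrative, not part of the original lab):

# Write the required variable to a .env file; keep this file out of Git
echo "APP_DB_HOST=$dbEndpoint" > .env
# Remove the earlier container if the name is already taken
docker rm -f customer_1
docker run -d --name customer_1 -p 8080:8080 --env-file .env customer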

This is what our customer microservice looks like currently; it's running on the Cloud9 instance inside a Docker container.

Modify the source code of the employee microservice, create the employee microservice Dockerfile, and launch a test container

Create a Dockerfile for the employee microservice. Duplicate the Dockerfile from the customer microservice into the employee microservice area, and edit employee/Dockerfile to change the port number on the EXPOSE line to 8081.

Build the Docker image for the employee microservice, specifying employee as the tag. Run a container named employee_1 based on the employee image; run it on port 8081, and be sure to pass in the database endpoint (see the sketch below). Verify that the employee microservice is running in the container and that the microservice functions as intended. Load the microservice web page in a new browser tab at http://<cloud9-public-ip-address>:8081/admin/suppliers
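A minimal sketch of those steps, assuming dbEndpoint is still set from the customer microservice test:

cd ~/environment/microservices/employee
docker build --tag employee .
docker run -d --name employee_1 -p 8081:8081 -e APP_DB_HOST="$dbEndpoint" employee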

Verify that this view shows buttons to edit existing suppliers and to add a new supplier. Test adding a new supplier. Verify that the new supplier appears on the suppliers page. Test editing an existing supplier.

Verify that the edited supplier information appears on the suppliers page. Test deleting an existing supplier: On the row for a particular supplier entry, choose edit. At the bottom of the page, choose Delete this supplier, and then choose Delete this supplier again. Verify that the supplier no longer appears on the suppliers page.

Click on Add a new supplier

Now let's try to edit an entry and test whether it reflects or not on the website.

I changed the email ID of the supplier Alex, and the update works fine.

Now, let's try to delete an entry and see if it works.

Adjust the employee microservice port and rebuild the image

When we tested the employee microservice on the AWS Cloud9 instance, we ran it on port 8081. However, when we deploy it to Amazon ECS, we will want it to run on port 8080. Edit the employee/index.js and employee/Dockerfile files to change the port from 8081 to 8080. Rebuild the Docker image for the employee microservice.

If you build an image with the same tag as an existing image, the tag moves to the new image, and the old image is left untagged.
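A quick way to make the port change and rebuild, assuming 8081 appears in those two files only as the port number:

cd ~/environment/microservices/employee
# Swap the port in both files, then rebuild under the same tag
sed -i 's/8081/8080/g' index.js Dockerfile
docker build --tag employee .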

Check code into CodeCommit
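A minimal sketch of the check-in, assuming the upstream dev branch was set by the earlier git push -u:

cd ~/environment/microservices
git add .
git commit -m "employee microservice now listens on port 8080"
git push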

Phase 4: Creating ECR repositories, an ECS cluster, task definitions, and AppSpec files

At this point, we have successfully implemented numerous solution requirements. We split the monolithic application into two microservices that can run as Docker containers. We have also verified that the containers support the needed application actions, such as adding, editing, and deleting entries from the database. The microservices architecture still uses Amazon RDS to store the coffee supplier entries.

However, our work isn't finished. There are more solution requirements to implement. The containers are able to run on the AWS Cloud9 instance, but that isn't a scalable deployment architecture. We need the ability to scale the number of containers that run on each microservice up and down depending on need. Also, we need to have a load balancer to route traffic to the appropriate microservice. Finally, we need to be able to easily update each application microservice's codebase independently and roll those changes into production. In the remaining phases of the project, we will work to accomplish these solution requirements.

Create ECR repositories and upload the Docker images

Now, we have to upload the latest Docker images of the two microservices to separate Amazon ECR repositories.

To authorize our Docker client to connect to the Amazon ECR service, run the following commands:

account_id=$(aws sts get-caller-identity |grep Account|cut -d '"' -f4)
echo $account_id

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $account_id.dkr.ecr.us-east-1.amazonaws.com

A message in the command output indicates that the login succeeded. Create a separate private ECR repository for each microservice. Name the first repository customer. Name the second repository employee.

Set permissions on the customer and employee ECR repositories by editing each repository's permissions policy JSON. Replace the existing statements in the policy with the following:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "ecr:*"
    }
   ]
}

Repeat the same steps for the employee repository.

Tag the Docker images with your unique registryId (account ID) value to make it easier to manage and keep track of these images. In the AWS Cloud9 IDE, run the following commands:

account_id=$(aws sts get-caller-identity |grep Account|cut -d '"' -f4)

# Verify that the account_id value is assigned to the $account_id variable
echo $account_id

# Tag the customer image
docker tag customer:latest $account_id.dkr.ecr.us-east-1.amazonaws.com/customer:latest

# Tag the employee image
docker tag employee:latest $account_id.dkr.ecr.us-east-1.amazonaws.com/employee:latest

The output of the command should show that the latest tag was applied and that the image names now include the remote repository where we intend to store them.

Run the appropriate docker command to push each of the Docker images to Amazon ECR.

docker push $account_id.dkr.ecr.us-east-1.amazonaws.com/customer:latest
docker push $account_id.dkr.ecr.us-east-1.amazonaws.com/employee:latest

Now let's confirm that the two images are now stored in Amazon ECR and that each has the latest label applied.
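One way to confirm this from the CLI (checking the console's Repositories view works too):

aws ecr describe-images --repository-name customer --region us-east-1 --query 'imageDetails[].imageTags'
aws ecr describe-images --repository-name employee --region us-east-1 --query 'imageDetails[].imageTags'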

Create an ECS cluster and create a CodeCommit repository to store deployment files

Create a serverless AWS Fargate cluster that is named microservices-serverlesscluster.

In this task, we will create another CodeCommit repository. This repository will store the task configuration specification files that Amazon ECS will use for each microservice. The repository will also store AppSpec specification files that CodeDeploy will use for each microservice.

  1. Create a new CodeCommit repository that is named deployment to store deployment configuration files.

  2. In AWS Cloud9, in the environment directory, create a new directory that is named deployment. Initialize the directory as a Git repository with a branch called dev.
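A minimal sketch of step 2, mirroring the microservices repo setup from phase 2 (assumes the deployment CodeCommit repo from step 1 exists in us-east-1):

cd ~/environment
mkdir deployment && cd deployment
git init
git branch -m dev
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/deployment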

Create task definition files for each microservice and register them with Amazon ECS

In this task, we will create a task definition file for each microservice and then register the task definitions with Amazon ECS. In the new deployment directory, create an empty file named taskdef-customer.json, and then edit it.

Paste the following JSON code into the file. Replace the <RDS-ENDPOINT> and <ACCOUNT-ID> with yours.

{
    "containerDefinitions": [
        {
            "name": "customer",
            "image": "customer",
            "environment": [
                {
                    "name": "APP_DB_HOST",
                    "value": "<RDS-ENDPOINT>"
                }
            ],
            "essential": true,
            "portMappings": [
                {
                    "hostPort": 8080,
                    "protocol": "tcp",
                    "containerPort": 8080
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                   "awslogs-create-group": "true",
                    "awslogs-group": "awslogs-cafe-CICD",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "awslogs-cafe-CICD"
                }
            }
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    "executionRoleArn": "arn:aws:iam::<ACCOUNT-ID>:role/PipelineRole",
    "family": "customer-microservice"
}

To register the customer microservice task definition in Amazon ECS, run the following command:

aws ecs register-task-definition --cli-input-json "file:///home/ec2-user/environment/deployment/taskdef-customer.json"

Repeat the same steps for the employee microservice. Edit the taskdef-employee.json file and paste the following JSON code into it, replacing <RDS-ENDPOINT> and <ACCOUNT-ID> with your values.

{
    "containerDefinitions": [
        {
            "name": "employee",
            "image": "employee",
            "environment": [
                {
                    "name": "APP_DB_HOST",
                    "value": "<RDS-ENDPOINT>"
                }
            ],
            "essential": true,
            "portMappings": [
                {
                    "hostPort": 8080,
                    "protocol": "tcp",
                    "containerPort": 8080
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                   "awslogs-create-group": "true",
                    "awslogs-group": "awslogs-cafe-CICD",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "awslogs-cafe-CICD"
                }
            }
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    "executionRoleArn": "arn:aws:iam::<ACCOUNT-ID>:role/PipelineRole",
    "family": "employee-microservice"
}

To register the employee microservice task definition in Amazon ECS, run the following command:

aws ecs register-task-definition --cli-input-json "file:///home/ec2-user/environment/deployment/taskdef-employee.json"

In the Amazon ECS console, verify that the customer-microservice and employee-microservice task definitions now appear in the Task definitions pane.

Create AppSpec files for CodeDeploy for each microservice

In this task, you will continue to complete tasks to support deploying the microservices-based web application to run on an ECS cluster where the deployment is supported by a CI/CD pipeline. In this specific task, you will create two application specification (AppSpec) files, one for each microservice. These files will provide instructions for CodeDeploy to deploy the microservices to the Amazon ECS on Fargate infrastructure.

Create an AppSpec file for the customer microservice. In the deployment directory, create a new file named appspec-customer.yaml, and paste the following YAML code into the file:

Important: DON'T modify <TASK_DEFINITION>. This setting will be updated automatically when the pipeline runs.

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "customer"
          ContainerPort: 8080

In the same directory, create an AppSpec file for the employee microservice. Name the file appspec-employee.yaml. The contents should be the same as the appspec-customer.yaml file, except that the ContainerName is employee:

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "employee"
          ContainerPort: 8080

Update files and check them into CodeCommit

In this task, we will update the two task definition files. Then, we will push the four files that we created in the last two tasks to the deployment repository.

Edit the taskdef-customer.json and taskdef-employee.json files. In each file, modify line 5 to match the following line:

"image": "<IMAGE1_NAME>",

Note: <IMAGE1_NAME> is not a valid image name, which is why we originally set the image name to customer and employee before running the AWS CLI command to register the first revision of the file with Amazon ECS. However, at this point in the project, it's important to set the image value to a placeholder text value. Later in this project, when you configure a pipeline, you will identify IMAGE1_NAME as placeholder text that can be dynamically updated. In summary, CodePipeline will set the correct image name dynamically at runtime.

Push all four files to CodeCommit.

Note: Pushing the latest files to CodeCommit is essential. Later, when we create the CI/CD pipeline, the pipeline will pull these files from CodeCommit and use the details in them as instructions to deploy updates for our microservices to the Amazon ECS cluster.
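A minimal sketch of the push, assuming the deployment repo was initialized with a dev branch as shown earlier:

cd ~/environment/deployment
git add taskdef-customer.json taskdef-employee.json appspec-customer.yaml appspec-employee.yaml
git commit -m "task definitions and AppSpec files for both microservices"
git push -u origin dev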

Phase 5: Creating target groups and an Application Load Balancer

In this phase, we will create an Application Load Balancer, which provides an endpoint URL. This URL will act as the HTTP entry point for customers and employees to access our application through a web browser. The load balancer will have listeners with routing and access rules that determine which target group of running containers a user request should be directed to.

Create four target groups

In this task, we will create four target groups—two for each microservice. Because we will configure a blue/green deployment, CodeDeploy requires two target groups for each deployment group.

Blue/green is a deployment strategy where we create two separate but identical environments. One environment (blue) runs the current application version, and one environment (green) runs the new application version.

Let's create the first and second target groups for the customer microservice, named customer-tg-one and customer-tg-two.

For now, skip the Register Targets section

Repeat the same steps for the customer-tg-two target group.

Now let's also create the target groups for the employee microservice.

Repeat the same steps for the employee-tg-two target group. The only difference between the customer and employee target group configurations is that for the employee target groups, the health check path is /admin/suppliers.
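If you prefer the CLI, here is a hedged sketch for one of the four target groups; the VPC ID is a placeholder, and the target type must be ip because Fargate tasks in awsvpc mode register by IP address:

aws elbv2 create-target-group \
  --name customer-tg-one \
  --protocol HTTP --port 8080 \
  --vpc-id <PROJECT-VPC-ID> \
  --target-type ip \
  --health-check-path /

For the employee target groups, change the name and set --health-check-path to /admin/suppliers.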

Create a security group and an Application Load Balancer, and configure rules to route traffic

In this task, we will create an Application Load Balancer. We will also define two listeners for the load balancer: one on port 80 and another on port 8080. For each listener, we will then define path-based routing rules so that traffic is routed to the correct target group depending on the URL that a user attempts to load.

Create a new EC2 security group named microservices-sg to use in Project VPC. Add inbound rules that allow TCP traffic from any IPv4 address on ports 80 and 8080.

In the Amazon EC2 console, create an Application Load Balancer named microservicesLB. Make it internet-facing for IPv4 addresses. Use Project VPC, Public Subnet1, Public Subnet2, and the microservices-sg security group.

Configure two listeners on it. The first should listen on HTTP:80 and forward traffic to customer-tg-two by default. The second should listen on HTTP:8080 and forward traffic to customer-tg-one by default.

Add a second rule for the HTTP:80 listener. Define the following logic for this new rule: IF Path is /admin/* THEN forward to the employee-tg-two target group.

Add a second rule for the HTTP:8080 listener. Define the following logic for this new rule: IF Path is /admin/* THEN Forward to the employee-tg-one target group.
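An equivalent CLI sketch for one of these rules (the listener and target group ARNs are placeholders, and the priority just needs to be unused on that listener):

aws elbv2 create-rule \
  --listener-arn <HTTP-80-LISTENER-ARN> \
  --priority 10 \
  --conditions Field=path-pattern,Values='/admin/*' \
  --actions Type=forward,TargetGroupArn=<EMPLOYEE-TG-TWO-ARN>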

Phase 6: Creating two Amazon ECS services

In this phase, we will create a service in Amazon ECS for each microservice. Although we could deploy both microservices to a single ECS service, for this project, it will be easier to manage the microservices independently if each is deployed to its own ECS service.

Create the ECS service for the customer microservice

In AWS Cloud9, create a new file named customer-microservice-tg-two.json in the deployment directory. Paste the following JSON code into the file:

Replace REVISION-NUMBER, MICROSERVICE-TG-TWO-ARN, PUBLIC-SUBNET-1-ID, PUBLIC-SUBNET-2-ID, and SECURITY-GROUP-ID with your own values.

{
    "taskDefinition": "customer-microservice:REVISION-NUMBER",
    "cluster": "microservices-serverlesscluster",
    "loadBalancers": [
        {
            "targetGroupArn": "MICROSERVICE-TG-TWO-ARN",
            "containerName": "customer",
            "containerPort": 8080
        }
    ],
    "desiredCount": 1,
    "launchType": "FARGATE",
    "schedulingStrategy": "REPLICA",
    "deploymentController": {
        "type": "CODE_DEPLOY"
    },
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": [
                "PUBLIC-SUBNET-1-ID",
                "PUBLIC-SUBNET-2-ID"
            ],
            "securityGroups": [
                "SECURITY-GROUP-ID"
            ],
            "assignPublicIp": "ENABLED"
        }
    }
}

Then run the following commands:

cd ~/environment/deployment
aws ecs create-service --service-name customer-microservice --cli-input-json file://customer-microservice-tg-two.json

Remember to commit and push the changes to the CodeCommit deployment repository.

Create the Amazon ECS service for the employee microservice

Create an Amazon ECS service for the employee microservice. Copy the JSON file that you created for the customer microservice, name it employee-microservice-tg-two.json, and save it in the same directory.

Modify the employee-microservice-tg-two.json file. On line 2, change customer-microservice to employee-microservice and update the revision number. On line 6, enter the ARN of the employee-tg-two target group. (Tip: don't just change customer to employee on this line; the ARN is unique in other ways.) On line 7, change customer to employee. Save the changes.

Once the changes for the employee microservice are done, run the following command:

aws ecs create-service --service-name employee-microservice --cli-input-json file://employee-microservice-tg-two.json

Phase 7: Configuring CodeDeploy and CodePipeline

Now that we have defined the Application Load Balancer, target groups, and the Amazon ECS services that comprise the infrastructure to which we will deploy our microservices, the next step is to define the CI/CD pipeline to deploy the application.

The pipeline will be invoked by updates to CodeCommit, where we have stored the ECS task definition files and the CodeDeploy AppSpec files. The pipeline can also be invoked by updates to one of the Docker image files that we have stored in Amazon ECR. When invoked, the pipeline will call the CodeDeploy service to deploy the updates. CodeDeploy will take the necessary actions to deploy the updates to the green environment. Assuming that no errors occur, the new task set will replace the existing task set.

Create a CodeDeploy application and deployment groups

A CodeDeploy application is a collection of deployment groups and revisions. A deployment group specifies an Amazon ECS service, a load balancer, an optional test listener, and two target groups. It also specifies when to reroute traffic to the replacement task set, and when to terminate the original task set and Amazon ECS application after a successful deployment.

Use the CodeDeploy console to create a CodeDeploy application with the name microservices that uses Amazon ECS as the compute platform.

Now, create a CodeDeploy deployment group for the customer microservice.

Let's also create a CodeDeploy deployment group for the employee microservice

Create a pipeline for the customer microservice

In this task, we will create a pipeline to update the customer microservice. When we first define the pipeline, we will configure CodeCommit as the source and CodeDeploy as the service that is responsible for deployment. We will then edit the pipeline to add the Amazon ECR service as a second source.

With an Amazon ECS blue/green deployment, which we will specify in this task, we provision a new set of containers, which CodeDeploy installs the latest version of our application on. CodeDeploy then reroutes load balancer traffic from an existing set of containers, which run the previous version of our application, to the new set of containers, which run the latest version. After traffic is rerouted to the new containers, the existing containers can be terminated. With a blue/green deployment, we can test the new application version before sending production traffic to it.

In the CodePipeline console, create a customer pipeline

Skip the build step

Choose Next, review the configuration, and select Create pipeline. After we create the pipeline, it will immediately start to run and will eventually fail on the Deploy stage. Ignore that for now and continue to the next step.

Edit the update-customer-microservice pipeline to add another source

In the source stage, select the Add action button to add Amazon ECR as a second source.

Edit the deploy action of the update-customer-microservice pipeline

Select the pencil icon for the Deploy action

  • On the Deploy Amazon ECS (Blue/Green) card, choose the edit (pencil) icon.

  • Under Input artifacts, choose Add, select image-customer, and then choose Done.

  • Under Dynamically update task definition image, for Input artifact with image details, choose image-customer.

  • For Placeholder text in the task definition, enter IMAGE1_NAME

Recall that in a previous phase, we entered the IMAGE1_NAME placeholder text in the taskdef-customer.json file before we pushed it to CodeCommit. In this current task, you configured the logic that will replace the placeholder text with the actual image name that the source phase of the CodePipeline returns.

NOTE: Remember to save the pipeline changes.

Test the CI/CD pipeline for the customer microservice

In this task, we will test that the CI/CD pipeline for the customer microservice functions as intended.

Launch a deployment of the customer microservice on Amazon ECS on Fargate. Navigate to the CodePipeline console.

On the Pipelines page, choose the link for the pipeline that is named update-customer-microservice. To force a test of the current pipeline settings, choose Release change, and then choose Release.

You should be able to see an interface like this once the source stage has run successfully.

Observe the progress in CodeDeploy

Load the customer microservice in a browser tab and test it. Locate the DNS name value of the microservicesLB load balancer, and paste it into a new browser tab.

Recall that our load balancer has two listeners: one on port 80 and another on port 8080. Port 8080 is where the replacement task set runs for the first 5 minutes. Therefore, if we load the :80 URL within the first 5 minutes, the customer microservice page might not load, but you should already see the page on port 8080. After 5 minutes, the microservice is available on both ports.

When we click the Administrator link, we get a 503 error because the employee microservice hasn't been deployed yet.

Observe the load balancer and target group settings. In the Amazon EC2 console, choose Target Groups.

You might notice that the customer-tg-two target group is no longer associated with the load balancer. This is because CodeDeploy is managing the load balancer listener rules and might have determined that some of the target groups are no longer needed.

Observe the HTTP:80 listener rules.

The default rule has changed here. The default "if no other rule applies" rule previously pointed to customer-tg-two, but now it points to customer-tg-one. This is because CodeDeploy actively managed our Application Load Balancer.

Observe the HTTP:8080 listener rules.

The two rules still forward to the "one" target groups.

We have successfully deployed one of the two microservices to Amazon ECS on Fargate by using a CI/CD pipeline.

Create a pipeline for the employee microservice

In this task, we will create the pipeline for the employee microservice.

Follow the same steps that we followed to create the update-customer-microservice pipeline. Select taskdef-employee.json for the employee ECS task definition and appspec-employee.yaml for the employee CodeDeploy AppSpec file.

Test the CI/CD pipeline for the employee microservice

In this task, we will test the pipeline that you just defined for the employee microservice.

Launch a deployment of the employee microservice on Amazon ECS on Fargate.

Use the release change feature to force a test of the pipeline. Follow the progress in CodeDeploy. Within a few minutes, if everything was configured correctly, all of the Deployment lifecycle events should succeed.

Load the employee microservice in a browser tab. In the browser tab where the customer microservice is running, choose the Administrator link. Choose List of suppliers or Suppliers list.

The suppliers page should load. This version of the page should not have the edit or add supplier buttons. All links in the café web application should now work because you have now deployed both microservices. Observe the running tasks in the Amazon ECS console.

The Deployments and tasks status will change as the blue/green deployment advances through its lifecycle events. Return to the CodeDeploy page to confirm that all five steps of the deployment succeeded and the replacement task set is now serving traffic.

We have successfully deployed the employee microservice to Amazon ECS on Fargate by using a CI/CD pipeline.

Observe how CodeDeploy modified the load balancer listener rules

Observe the load balancer and target group settings. In the Amazon EC2 console, choose Target Groups. Notice that the customer-tg-two target group is no longer associated with the load balancer. This is because CodeDeploy is managing the load balancer listener rules.

Note: If you are repeating this step, the target groups that are currently attached and unattached might be different. Observe the HTTP:80 listener rules.

The default rule has changed here. For the default "If no other rule applies" rule, the "forward to target group" previously pointed to customer-tg-two, but now it points to customer-tg-one. Observe the HTTP:8080 listener rules. The two rules still forward to the "one" target groups.

Phase 8: Adjusting the microservice code to cause a pipeline to run again

In this phase, we will experience the benefits of the microservices architecture and the CI/CD pipeline that we built. We will begin by adjusting the load balancer listener rules that are related to the employee microservice. We will also update the source code of the employee microservice, generate a new Docker image, and push that image to Amazon ECR, which will cause the pipeline to run and update the production deployment. We will also scale up the number of containers that support the customer microservice.

Limit access to the employee microservice

In this task, we will limit access to the employee microservice to only people who try to connect to it from a specific IP address. By limiting the source IP to a specific IP address, only users who access the application from that IP can access the pages, and edit or delete supplier entries.

Confirm that all target groups are still associated with the Application Load Balancer. In the Amazon EC2 console, check that all four target groups are still associated with the load balancer. Reassociate target groups as needed before going to the next step.

Next, discover your public IPv4 address.

Tip: One resource you can use to do this is https://www.whatismyip.com

Edit the rules for the HTTP:80 listener.

For the rule that currently has "IF Path is /admin/*" in the details, add a second condition to route the user to the target groups only if the source IP of the request is your IP address.

Tip: For the source IP, paste in your public IPv4 address and then add /32.

Edit the rules for the HTTP:8080 listener. Edit the rules in the same way that we edited the rules for the HTTP:80 listener. We want access to the employee target groups to be limited to our IP address.

Adjust the UI for the employee microservice and push the updated image to Amazon ECR

In this task, we will adjust the deployed employee microservice; I modified the nav.html file to lighten the banner color. To generate a new Docker image from the modified employee microservice source files and to label the image, run the following commands:

docker rm -f employee_1 
cd ~/environment/microservices/employee
docker build --tag employee .
dbEndpoint=$(cat ~/environment/microservices/employee/app/config/config.js | grep 'APP_DB_HOST' | cut -d '"' -f2)
echo $dbEndpoint
account_id=$(aws sts get-caller-identity |grep Account|cut -d '"' -f4)
echo $account_id
docker tag employee:latest $account_id.dkr.ecr.us-east-1.amazonaws.com/employee:latest

Push an updated image to Amazon ECR so that the update-employee-microservice pipeline will be invoked.

In the CodePipeline console, navigate to the details page for the update-employee-microservice pipeline. Keep this page open.

To push the new employee microservice Docker image to Amazon ECR, run the following commands in your AWS Cloud9 IDE:

#refresh credentials in case needed
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $account_id.dkr.ecr.us-east-1.amazonaws.com

#push the image

docker push $account_id.dkr.ecr.us-east-1.amazonaws.com/employee:latest

At least one layer should indicate that it was pushed, which means the image was modified since it was last pushed to Amazon ECR. We could also look in the Amazon ECR repository to confirm the last-modified timestamp of the image tagged latest.

Confirm that the employee pipeline ran and the microservice was updated

Observe the update-employee-microservice pipeline details in the CodePipeline console.

Notice that when you uploaded a new Docker image to Amazon ECR, the pipeline was invoked and ran. Note that the pipeline might take a minute or two to notice that the Docker image was updated before the pipeline is invoked.

Observe the details in the CodeDeploy console.

Test access to the employee microservice

In this task, we will test access to the employee microservice.

Test access to the employee microservice pages at http://<alb-endpoint>/admin/suppliers and http://<alb-endpoint>:8080/admin/suppliers from the same device that you have used for this project so far. Replace <alb-endpoint> with the DNS name of the microservicesLB load balancer.

At least one of the two pages should load successfully.

Notice that the banner with the page title is a light color now because of the change that you made to the nav.html file. Pages that are hosted by the customer microservice still have the dark banner. This demonstrates that by using a microservices architecture, you could independently modify the UI or features of each microservice without affecting others.

Test access to the same employee microservice pages from a different device.

For example, you could use your phone to connect from the cellular network and not the same Wi-Fi network that your computer uses. We want the device to use a different IP address to connect to the internet than your computer.

We should get a 404 error on any page that loads, and the page should say "Coffee suppliers" instead of "Manage coffee suppliers." This is evidence that we cannot connect to the employee microservice from another IP address.

Tip: If you don't have another network available, run the following command in the AWS Cloud9 terminal: curl http://<alb-endpoint>/admin/suppliers. The source IP address of the AWS Cloud9 instance is different than your browser's source IP. The result should include <p class="lead">Sorry, we don't seem to have that page in stock</p>.

This proves that the updated rules on the load balancer listener are working as intended.

Scale the customer microservice

In this task, we will scale up the number of containers that run to support the customer microservice. We can make this change without causing the update-customer-microservice pipeline to run.

Update the customer service in Amazon ECS. Run the following command:

aws ecs update-service --cluster microservices-serverlesscluster --service customer-microservice --desired-count 3
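To verify that the service scaled out, something like the following should show the desired and running counts converging on 3:

aws ecs describe-services \
  --cluster microservices-serverlesscluster \
  --services customer-microservice \
  --query 'services[0].[desiredCount,runningCount]'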

Conclusion

In conclusion, this AWS tutorial has provided a comprehensive guide to building microservices and setting up a CI/CD pipeline. By breaking down a monolithic application into microservices, we have achieved greater flexibility, scalability, and resilience. The step-by-step phases, from analyzing the monolithic application to deploying microservices using Docker containers, Amazon ECS, and CodePipeline, have demonstrated the practical implementation of modern software development practices. This approach not only enhances the efficiency of development and deployment processes but also ensures that applications can be updated and scaled independently, leading to more robust and maintainable systems. By following this tutorial, developers can leverage AWS services to create a streamlined and automated workflow, ultimately delivering high-quality software more rapidly and reliably.

Comment your thoughts below. If you enjoyed this blog or learned something interesting, stay tuned for more awesome content!

Let's stay in touch:

Connect with me on LinkedIn to stay updated! 🔗 Follow me on Hashnode for more such content ✅
