Jenkins and GitOps: The Complete Guide to DevSecOps CI/CD


Discover how to set up a robust DevSecOps CI/CD pipeline using Jenkins and GitOps. This comprehensive guide covers everything from code commits to secure deployments.


Let's dive into a brief introduction about the project we are going to work on. Once a developer pushes code to an SCM (source code management) system like Git, hosted on a service such as GitHub, the code lands in the application repository. Jenkins is then notified of the new source code update and automatically builds and tests the application. In our case, we will use Maven to build a Java application. Then we will analyze the application code with SonarQube, a static code analysis tool. This helps us identify potential code smells and security issues, allowing developers to remediate problems before the code reaches the production environment.

Next, we'll build and push the Docker image of our application to the DockerHub registry. Simultaneously, we'll scan the Docker image with Trivy for known vulnerabilities. This serves as another security check before the image is pushed into our production Kubernetes cluster. Afterward, we'll update a set of Kubernetes manifest files. These manifests, a deployment and a service, will reside in a separate GitHub repository and will be updated with each successful pipeline build to reflect the latest image tag.

Once the tag is updated, ArgoCD, a continuous delivery or continuous deployment application, will monitor that repository for any changes. As soon as a new tag is detected, it will deploy the latest version of the tag into our Kubernetes cluster. We'll also send a Slack notification if the deployment and build are successful, and notifications will be sent if there are any issues during the build steps. We'll accomplish this using a declarative pipeline in Jenkins.

Set up Jenkins Nodes

The first step is to set up Jenkins. We'll be using individual virtual machines, although it's possible to install all the necessary tools on a single VM/node. To closely simulate a production environment, I'll use separate VMs.

We'll utilize AWS as our cloud platform, but other public cloud vendors like Microsoft Azure and Google Cloud Platform are also viable options. Specifically, we'll use the Amazon EC2 service.

As a best practice, we will not run any jobs on the Jenkins controller VM. Instead, we'll use a separate VM as a Jenkins agent node, which we'll configure using SSH. Let's launch two EC2 instances to host the Jenkins controller and Jenkins agent node on separate VMs with the following configurations:

As you may have noticed, the security group inbound rules are 'too open.' We will narrow them down in the next few steps. To avoid confusion later, name the VMs appropriately.

SSH into both VMs using any terminal of your choice.

Run the following commands on both VMs. First, update the package repository and upgrade the packages:

sudo -i
sudo apt -y update
sudo apt -y upgrade

In this tutorial, we'll be using Adoptium (Eclipse Temurin) Java 17, but you can also use OpenJDK if you prefer. Let's add the Adoptium repository:

mkdir -p /etc/apt/keyrings
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | tee /etc/apt/keyrings/adoptium.asc
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | tee /etc/apt/sources.list.d/adoptium.list

Now update the repository and install Java 17 on both the VMs:

apt update
apt install temurin-17-jdk
/usr/bin/java --version
exit

As you can see in the image below, Java 17 is installed successfully!

Let's also install the LTS version of Jenkins on both VMs. First, add the repository key to the system:

sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins

Let’s start Jenkins by using systemctl:

sudo systemctl start jenkins

Since systemctl doesn’t show the status output, we’ll use the status command to check if Jenkins started successfully:

sudo systemctl status jenkins

If everything went well, the beginning of the status output shows that the service is active and configured to start at boot:

Access Jenkins User Interface

To set up your installation, visit Jenkins on its default port, 8080, using your server domain name or IP address: http://your_server_ip_or_domain:8080

Now, we have to retrieve the initial default password using the below command

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Select the Install suggested plugins option.

Install Nginx and Set up a Reverse Proxy

Now we will install Nginx and set up a reverse proxy so we can access Jenkins through a domain name.

Nginx is one of the most popular web servers in the world and is responsible for hosting some of the largest and highest-traffic sites on the internet. It is a lightweight choice that can be used as either a web server or reverse proxy.

Update the Package Repository and Upgrade the Packages

sudo apt update
sudo apt upgrade

Installing Nginx

sudo apt install nginx

Once Nginx is installed, we'll be able to access our server using its IP address. Before installing Nginx, we were accessing our Jenkins application directly on port 8080.

Checking our Web Server

We can check with the systemd init system to make sure the service is running by typing:

systemctl status nginx

Check your Web Server is running

Access your web server by visiting http://your_server_ip; in our case, that is http://44.211.124.107/

Next, we have to configure Nginx to point to Jenkins through a reverse proxy. For Nginx to serve this content, it’s necessary to create a server block with the correct directives.

sudo vim /etc/nginx/sites-available/dev.ui.jenkins.iseasy.tw

Paste in the following configuration block, which is similar to the default but updated for our domain name. You will have to change the domain name; in my case, I'm using a domain name for which I've already created a DNS record.

upstream jenkins{
    server 127.0.0.1:8080;
}

server{
    listen      80;
    server_name dev.ui.jenkins.iseasy.tw;

    access_log  /var/log/nginx/jenkins.access.log;
    error_log   /var/log/nginx/jenkins.error.log;

    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    location / {
        proxy_pass  http://jenkins;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;

        proxy_set_header    Host            $host;
        proxy_set_header    X-Real-IP       $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto https;
    }

}

Next, let’s enable the file by creating a link from it to the sites-enabled directory, which Nginx reads from during startup:

sudo ln -s /etc/nginx/sites-available/dev.ui.jenkins.iseasy.tw /etc/nginx/sites-enabled/

Next, test to make sure that there are no syntax errors in any of your Nginx files:

sudo nginx -t

If there aren’t any problems, restart Nginx to enable your changes:

sudo systemctl restart nginx

Nginx should now be serving Jenkins from your domain name. You can test this by navigating to http://your_domain. Before that, let's first configure Jenkins using the original http://server_ip:8080 address and create our first Jenkins admin user.

Since we've already created a domain name that is mapped to our Jenkins instance, we can change this setting so that, by default, the domain name is used to access Jenkins.

Configure Jenkins with SSL Using an Nginx Reverse Proxy

Add a certificate to secure our installation. By default, Jenkins comes with its own built-in Winstone web server listening on port 8080, which is convenient for getting started. It’s also a good idea, however, to secure Jenkins with SSL to protect passwords and sensitive data transmitted through the web interface.

Prerequisites

  • Jenkins installed

  • An A record pointing to your server’s public IP address.

Installing Certbot

The first step to using Let’s Encrypt to obtain an SSL certificate is to install the Certbot software on our server.

sudo apt install certbot python3-certbot-nginx

Confirming Nginx’s Configuration

Certbot needs to be able to find the correct server block in our Nginx configuration for it to be able to automatically configure SSL. Specifically, it does this by looking for a server_name directive that matches the domain you request a certificate for.

sudo vim /etc/nginx/sites-available/dev.ui.jenkins.iseasy.tw

Find the existing server_name line. It should look like this:

...
server_name dev.ui.jenkins.iseasy.tw;
...

If it does, exit your editor and move on to the next step. If not, review the 'Installing Nginx' step.

Obtaining an SSL Certificate

Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary. To use this plugin, type the following:

sudo certbot --nginx -d dev.ui.jenkins.iseasy.tw

If that’s successful, certbot will ask how you’d like to configure your HTTPS settings. Select your choice then hit ENTER. The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored.

Now we should be able to hit our Jenkins installation on the secure URL. We have a security lock and we have a valid certificate.

Verifying Certbot Auto-Renewal

Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot package we installed takes care of this for us by adding a systemd timer that will run twice a day and automatically renew any certificate that’s within thirty days of expiration.

You can query the status of the timer with systemctl

sudo systemctl status certbot.timer

To test the renewal process, you can do a dry run with certbot:

sudo certbot renew --dry-run

If you see no errors, you’re all set. When necessary, Certbot will renew your certificates and reload Nginx to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.

Nginx should now be serving your domain name. You can test this by navigating to https://your_domain

Now we need to configure a Jenkins agent to execute our jobs. It's a best practice not to run any jobs on the machine that runs the user interface, which I refer to as the Jenkins controller node.

There are lots of different ways to do this: we can have a dedicated VM that runs jobs, we can use Docker, or the agent can even run inside a Kubernetes cluster. For this project, we'll use a separate VM as the agent node and configure it over SSH.


Configuring Jenkins Agent Node

Prerequisites

  • Virtual Machine running Ubuntu 22.04 or newer

Update Package Repository and Upgrade Packages

sudo apt update
sudo apt upgrade

Create Jenkins User

sudo adduser jenkins
sudo passwd jenkins

Grant Sudo Rights to Jenkins User

sudo usermod -aG sudo jenkins

Log out and SSH back in as the jenkins user.

Switch to the root user

sudo bash

Add Adoptium repository


mkdir -p /etc/apt/keyrings
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | tee /etc/apt/keyrings/adoptium.asc
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | tee /etc/apt/sources.list.d/adoptium.list

Install Java 17

apt update
apt install temurin-17-jdk
update-alternatives --config java
/usr/bin/java --version
exit

Since we are going to build and push Docker images, we have to make sure that Docker is installed on the Jenkins agent node.

Install using the repository

Update the apt package index and install packages to allow apt to use a repository over HTTPS:

sudo apt-get update

sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add Docker’s official GPG key:

sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

Use the following command to set up the repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Manage Docker as a non-root user

Create the docker group.

sudo groupadd docker

Add the jenkins user to the docker group, then switch to the jenkins user to run docker commands.

sudo usermod -aG docker jenkins
su - jenkins

Run the following command to activate the changes to groups:

newgrp docker

Verify that you can run docker commands without sudo.

docker run hello-world


Connect to Remote SSH Agent

From the Jenkins controller, verify that you can reach the agent node over SSH as the jenkins user (replace $AGENT_HOSTNAME with your agent's hostname or IP):

sudo -u jenkins ssh -v jenkins@$AGENT_HOSTNAME

Create the private and public SSH keys on the agent machine. The following command creates the private key jenkinsAgent_rsa and the public key jenkinsAgent_rsa.pub. It is recommended to store your keys under ~/.ssh/, so we move to that directory before creating the key pair.

 mkdir ~/.ssh; cd ~/.ssh/ && ssh-keygen -t rsa -m PEM -C "Jenkins agent key" -f "jenkinsAgent_rsa"

Add the public SSH key to the list of authorized keys on the agent machine

cat jenkinsAgent_rsa.pub >> ~/.ssh/authorized_keys

Ensure that the permissions of the ~/.ssh directory are secure, as most SSH daemons will refuse to use keys with file permissions that are considered insecure:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys ~/.ssh/jenkinsAgent_rsa

Copy the private SSH key (~/.ssh/jenkinsAgent_rsa) from the agent machine to your OS clipboard

cat jenkinsAgent_rsa

Now you can add the agent from the Jenkins UI (controller). Navigate to the Jenkins controller dashboard, click Manage Jenkins, then Nodes. Select the Built-in Node, open the Configure tab from the left-hand menu, change the Number of executors field from 2 to 0, and click Save.

We have disabled any executors from running on our Jenkins controller node.

Select the + New Node button in the top-right corner.

Create a credential for the Jenkins agent: navigate to Manage Jenkins and select Credentials, choose SSH Username with private key as the kind, paste the private key we generated on the Jenkins agent node, and give it an ID and a name.

Now, from the dashboard, select Set up an agent and fill in the following details. Replace the host IP with your own Jenkins agent VM's IP, then click Save.

Now let's create a test pipeline job to verify that the Jenkins agent node can execute jobs.

From Jenkins dashboard select Create a Job option.

As we have configured our Jenkins agent to execute all jobs, this pipeline should run on the agent node. Select Save and then Build Now.

Once tested, delete the test pipeline.

As the web application is developed using Java and the Spring Boot framework, we need certain plugins that will be helpful for us, such as Maven-related plugins to build the application.

Also, search for the Adoptium plugin and click the Install button to install it.

Building Jenkins Pipeline

A Jenkins pipeline is broken down into stages, and each stage contains steps, which makes it a multi-stage pipeline. In our pipeline, we have an agent block that specifies which agent the job should execute on; using a label, we can refer to that agent in the pipeline. We also have a tools block that specifies which tools the pipeline will use; the labels refer to the tools that are installed and configured on the Jenkins controller node.

As a best practice, we should always start with a clean workspace. The third block in the pipeline is the stages block, which contains multiple stages, and every stage can have multiple steps inside it. We can use the built-in cleanWs() step to clean the workspace.
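Putting those pieces together, the skeleton of our pipeline looks like this. The agent label (jenkins-agent) and the tool names (Java17, Maven3) must match what we configure on the Jenkins controller in the next step:

pipeline{
    agent{
        label "jenkins-agent"
    }
    tools{
        jdk 'Java17'
        maven 'Maven3'
    }
    stages{
        stage("Cleanup Workspace"){
            steps{
                // cleanWs() is provided by the Workspace Cleanup plugin and wipes the agent workspace
                cleanWs()
            }
        }
    }
}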

Configure Tools for Jenkins Pipeline

Head over to the Jenkins Controller node's dashboard and select Manage Jenkins then select Tools tab and configure JDK and Maven.

Create Credentials for GitHub

As our source repository is a private GitHub repository, we will require a credential to check out the source code from the Jenkins pipeline. Navigate to the Jenkins controller node's dashboard and go to Manage Jenkins, then select the Credentials tab.

Fill in the required details and enter a PAT (Personal Access Token), which you need to create on GitHub, in the password field.

Now your global credentials dashboard should look like this

Now let's start creating an actual pipeline.

We will build the pipeline in a file named Jenkinsfile from the local IDE and then commit the changes to the GitHub repository.

Let's test the pipeline by building it from the Jenkins dashboard.

Adding Build and Test Stages
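The build and test stages simply run Maven on the agent node; they appear like this in the full application Jenkinsfile shown later in this guide:

stage("Build Application"){
    steps{
        // Compile and package the Spring Boot application
        sh "mvn clean package"
    }
}
stage("Test Application"){
    steps{
        // Run the unit tests
        sh "mvn test"
    }
}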

Install Sonarqube in Ubuntu Linux

SonarQube is an open-source platform developed by SonarSource for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs and code smells in 29 programming languages.

Prerequisites

  • Virtual Machine running Ubuntu 22.04 or newer

Update Package Repository and Upgrade Packages

sudo apt update
sudo apt upgrade

PostgreSQL

Add the PostgreSQL repository

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo tee /etc/apt/trusted.gpg.d/pgdg.asc &>/dev/null

Install PostgreSQL

sudo apt update
sudo apt-get -y install postgresql postgresql-contrib
sudo systemctl enable postgresql

Create a Database for Sonarqube

Set password for Postgres user

sudo passwd postgres

Change to the Postgres user

su - postgres

Create a database user named sonar

createuser sonar

Set password and grant privileges

psql 
ALTER USER sonar WITH ENCRYPTED password 'sonar';
CREATE DATABASE sonarqube OWNER sonar;
grant all privileges on DATABASE sonarqube to sonar;
\q
exit

Adoptium Java 17

Switch to the root user

sudo bash

Add Adoptium repository

mkdir -p /etc/apt/keyrings
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | tee /etc/apt/keyrings/adoptium.asc
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | tee /etc/apt/sources.list.d/adoptium.list

Install Java 17

apt update
apt install temurin-17-jdk
update-alternatives --config java
/usr/bin/java --version
exit

Linux Kernel Tuning for SonarQube Node

Increase Limits

sudo vim /etc/security/limits.conf

Paste the below values at the bottom of the file

sonarqube   -   nofile   65536
sonarqube   -   nproc    4096

Increase Mapped Memory Regions

sudo vim /etc/sysctl.conf

Paste the below values at the bottom of the file

vm.max_map_count = 262144
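Optionally, you can load the new kernel parameter immediately without waiting for the reboot below (the reboot still ensures the limits.conf changes apply to new sessions):

sudo sysctl -p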

Reboot System

sudo reboot

Install Sonarqube

Download and Extract

sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.9.0.65466.zip
sudo apt install unzip
sudo unzip sonarqube-9.9.0.65466.zip -d /opt
sudo mv /opt/sonarqube-9.9.0.65466 /opt/sonarqube

Create a user and set permissions

sudo groupadd sonar
sudo useradd -c "user to run SonarQube" -d /opt/sonarqube -g sonar sonar
sudo chown sonar:sonar /opt/sonarqube -R

Update Sonarqube properties with DB credentials

sudo vim /opt/sonarqube/conf/sonar.properties

Find and replace the below values; you might need to add the sonar.jdbc.url line.

sonar.jdbc.username=sonar
sonar.jdbc.password=sonar
sonar.jdbc.url=jdbc:postgresql://localhost:5432/sonarqube

Create service for Sonarqube

sudo vim /etc/systemd/system/sonar.service

Paste the below into the file

[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking

ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop

User=sonar
Group=sonar
Restart=always

LimitNOFILE=65536
LimitNPROC=4096

[Install]
WantedBy=multi-user.target

Start Sonarqube and Enable service

sudo systemctl start sonar
sudo systemctl enable sonar
sudo systemctl status sonar

Watch log files and monitor for startup

sudo tail -f /opt/sonarqube/logs/sonar.log

Access the Sonarqube UI

http://<IP>:9000

Optional Reverse Proxy and TLS Configuration for SonarQube Node

Installing Nginx

sudo apt install nginx
sudo vim /etc/nginx/sites-available/sonarqube.conf

Paste the contents below and be sure to update the domain name

server {

    listen 80;
    server_name dev.sonarqube.iseasy.tw;
    access_log /var/log/nginx/sonar.access.log;
    error_log /var/log/nginx/sonar.error.log;
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }
}

Next, activate the server block configuration 'sonarqube.conf' by creating a symlink of that file to the '/etc/nginx/sites-enabled' directory. Then, verify your Nginx configuration files.

sudo ln -s /etc/nginx/sites-available/sonarqube.conf /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx

Next, you can follow these steps to install Certbot and obtain an SSL certificate, which you will want if you plan to use the webhook integration.

Installing Certbot

The first step to using Let’s Encrypt to obtain an SSL certificate is to install the Certbot software on your server.

sudo apt install certbot python3-certbot-nginx

Obtaining an SSL Certificate

Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will reconfigure Nginx and reload the configuration whenever necessary. To use this plugin, type the following:

sudo certbot --nginx -d dev.sonarqube.iseasy.tw

If that’s successful, certbot will ask how you’d like to configure your HTTPS settings.

Select your choice then hit ENTER. The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored.

Nginx should now be serving your domain name. You can test this by navigating to https://your_domain

Reset the password for SonarQube. The default username and password are both admin.

There are a couple of things we need to set up for Jenkins to be able to send the application code to SonarQube for analysis. First, we need an API token on SonarQube: navigate to the Administration page, select the Security tab, and generate a token. This token will be used by the Jenkins pipeline to analyze the code.

Now we need to create a credential in Jenkins so that Jenkins can communicate with the SonarQube server securely over the network.

Next, we need to install a couple of plugins on Jenkins so that the application code can be sent over for analysis. We will install the SonarQube Scanner, Sonar Quality Gates, and Quality Gates plugins. The quality gate plugins allow us to pause or fail the build if the code doesn't pass the analysis.

Now we need to configure Jenkins to communicate with the SonarQube server.

Head over to Configure System and search for "sonar" to find "SonarQube servers", then add the following configuration. If you don't have DNS configured for the SonarQube server, you can use http://<Server_IP>:9000 as the server URL.

Now go to the Tools section under Manage Jenkins and configure SonarQube.

SonarQube Stage in Jenkins Pipeline

Add a Quality Gate

Set up a webhook on SonarQube to notify Jenkins when analysis is complete. Go to the Administration section, select the Configuration tab, and select the Webhooks option from the drop-down menu.
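For reference, here is how the analysis and quality gate stages look in the final Jenkinsfile (shown in full later). The webhook above is what allows waitForQualityGate to receive SonarQube's verdict instead of timing out:

stage("SonarQube Analysis"){
    steps{
        script{
            // 'jenkins-sonarqube-token' is the Jenkins credential holding the SonarQube API token
            withSonarQubeEnv(credentialsId: 'jenkins-sonarqube-token'){
                sh "mvn sonar:sonar"
            }
        }
    }
}
stage("Quality Gate"){
    steps{
        script{
            // Wait for the SonarQube webhook; set abortPipeline: true to block builds that fail the gate
            waitForQualityGate abortPipeline: false, credentialsId: 'jenkins-sonarqube-token'
        }
    }
}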

Docker Build and Push Stage

To implement this stage we need to install some docker plugins on Jenkins.

We need to define some environment variables in our Jenkins pipeline that will help us build and tag the Docker image. This follows a common practice that most organizations use to version and tag their applications.

We also need to create a credential in Jenkins to access DockerHub, so that the image can be pushed to DockerHub from Jenkins automatically after it is built.

Before that, let's also create an access token on Docker Hub with read and write access.

The value of the DOCKER_PASSWORD environment variable and the ID of the Docker Hub credential on Jenkins must match.
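For reference, here are the relevant environment variables and the build-and-push stage from the final Jenkinsfile shown later; note that DOCKER_PASSWORD holds the Jenkins credential ID ('dockerhub-pass'), not the actual password:

// Environment block at the top of the Jenkinsfile
environment{
    APP_NAME = "gitops-cicd-pipeline"
    RELEASE = "1.0.0"
    DOCKER_USERNAME = "jayeshrajput"
    DOCKER_PASSWORD = 'dockerhub-pass'                      // Jenkins credential ID for Docker Hub
    IMAGE_NAME = "${DOCKER_USERNAME}" + "/" + "${APP_NAME}" // jayeshrajput/gitops-cicd-pipeline
    IMAGE_TAG = "${RELEASE}-${BUILD_NUMBER}"                // e.g. 1.0.0-24
}

// Build the image, then push both the versioned tag and 'latest' to Docker Hub
stage("Docker Image Build and Push"){
    steps{
        script{
            docker.withRegistry('', DOCKER_PASSWORD){
                docker_image = docker.build "${IMAGE_NAME}"
            }
            docker.withRegistry('', DOCKER_PASSWORD){
                docker_image.push("${IMAGE_TAG}")
                docker_image.push('latest')
            }
        }
    }
}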

Add Trivy Image Scan Stage

As there is no official Trivy plugin available for Jenkins, we have to manually install Trivy on the Jenkins agent node. Run the commands below on the Jenkins agent node to install Trivy.

sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y

On the Jenkins agent node, a trivy-report.txt file is created.
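That report is produced by the following stage from the final Jenkinsfile, which runs Trivy in a container against the freshly built image:

stage("Trivy Artifact Scan"){
    steps {
        script{
            // Scan the image for HIGH/CRITICAL vulnerabilities and save the table output to trivy-report.txt
            sh ('docker run -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image ${IMAGE_NAME}:${IMAGE_TAG} --no-progress --scanners vuln --exit-code 0 --severity HIGH,CRITICAL --format table > trivy-report.txt')
        }
    }
}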

ArgoCD Installation

Argo CD is a declarative continuous delivery tool for Kubernetes applications. It uses the GitOps style to manage applications running on Kubernetes clusters. When any changes are made to the application configuration in Git, Argo CD compares them with the configuration of the running application and notifies users so the desired and live states can be brought into sync.

Prerequisites

  • Virtual Machine running Ubuntu 22.04 or newer

Launch two new Amazon EC2 instances, one for the Argo CD node and another for the Kubernetes node. We will have a single-node K8s cluster since this is a small application and for demonstration purposes only. Run the commands below on both VMs.

Update Package Repository and Upgrade Packages

sudo apt update
sudo apt -y upgrade

Create Kubernetes Cluster

We will be using k3s, a lightweight Kubernetes distribution. We are also disabling Traefik, its bundled ingress controller, because we will use Nginx for our setup.

sudo bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - --disable traefik
exit 
mkdir .kube
cd .kube
sudo cp /etc/rancher/k3s/k3s.yaml ./config
sudo chown ubuntu:ubuntu config
chmod 400 config
export KUBECONFIG=~/.kube/config

ArgoCD Node details

Kubernetes Node details

Install ArgoCD

Run the below commands on only the Argo CD node.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Change Service to NodePort

Patch the service to change the service type from ClusterIP to NodePort:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
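To confirm the change and see which NodePort was assigned (a quick check; the ArgoCD UI is then reachable at http://<ArgoCD_Node_IP>:<NodePort>):

kubectl get svc argocd-server -n argocd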

Fetch Password

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Copy the password and save it somewhere; it will be used in the next few steps.

Optional (Enable TLS w/Ingress)

If you want to enable access from the internet or a private network, you can follow the instructions below to install and configure an ingress controller with Let's Encrypt.

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.11.0 \
  --set installCRDs=true

Create a ClusterIssuer for Let's Encrypt: run vim letsencrypt-product.yaml, paste the contents below, and adjust the email address.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: jayeshrajput.in@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

Apply the manifest:

kubectl apply -f letsencrypt-product.yaml

Deploy nginx-ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/cloud/deploy.yaml

Create an Ingress for ArgoCD: run vim ingress.yaml, paste the contents below, and adjust the domain name.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx  # Updated to use the new field
  rules:
  - host: dev.argocd.iseasy.tw
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - dev.argocd.iseasy.tw
    secretName: argocd-secret # do not change, this is provided by Argo CD

Apply the manifest:

kubectl apply -f ingress.yaml

Now we can access the ArgoCD at dev.argocd.iseasy.tw

Now we have a blank ArgoCD instance without any applications or Git repositories set up yet. The first thing we need to do is add the K8s cluster: we need to copy the kubeconfig file and use its context to register the cluster with ArgoCD.

Copy the config file from the .kube directory of the K8s cluster node, create a new file named app-cluster.yaml in ~/.kube on the ArgoCD node, and paste the content into it. Change the server IP from 127.0.0.1 to the private IP address of the Kubernetes cluster node.
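A minimal sketch of that copy-and-edit step, assuming you can SSH from the ArgoCD node to the Kubernetes node as the ubuntu user and using <K8S_PRIVATE_IP> as a placeholder for the node's private IP:

scp ubuntu@<K8S_PRIVATE_IP>:~/.kube/config ~/.kube/app-cluster.yaml
sed -i 's/127.0.0.1/<K8S_PRIVATE_IP>/' ~/.kube/app-cluster.yaml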

Then execute the following command:

export KUBECONFIG=~/.kube/app-cluster.yaml

To verify the change run the below command:

kubectl get nodes

app-cluster is the name of our Kubernetes cluster.

Install ArgoCD command line tool

curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64

Login to ArgoCD from the CLI

argocd login dev.argocd.iseasy.tw

Now add the cluster to ArgoCD using:

argocd cluster add default --name app-cluster

Now app-cluster has been added

We need to create a new GitHub repository to host the Kubernetes manifests for the Deployment and Service objects.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitops-cd-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gitops-cd-pipeline
  template:
    metadata:
      labels:
        app: gitops-cd-pipeline
    spec:
      containers:
        - name: java-web-application
          image: jayeshrajput/gitops-cicd-pipeline:1.0.0-11
          resources:
            limits:
              memory: "256Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: gitops-cd-service
spec:
  type: NodePort
  selector:
    app: gitops-cd-pipeline
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Add CD Repository on ArgoCD

Create an Application on ArgoCD

Add Stage in Jenkins Pipeline to Automatically Update the Image Tag

Jenkins can trigger another build for a pipeline by passing parameters. We need to add a stage in the first Jenkins pipeline script that will trigger the gitops-cd-pipeline job.
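The stage passes the freshly built image tag to the CD job as a string parameter; it appears like this in the full application Jenkinsfile shown later:

stage("Trigger the CD Pipeline"){
    steps{
        // wait: false lets the CI pipeline finish without waiting for the CD job to complete
        build job: 'gitops-cd-pipeline', parameters: [string(name: 'IMAGE_TAG', value: IMAGE_TAG)], wait: false
    }
}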

Now create a new Jenkins pipeline/job named "gitops-cd-pipeline" on Jenkins that will update the docker image tag in the deployment.yaml file of the newly created GitHub repository.
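The gitops-cd-pipeline job has to accept the IMAGE_TAG value passed from the CI pipeline. You can add a string parameter in the job configuration; alternatively, as an optional sketch (not part of the Jenkinsfile shown below), the parameter can be declared in the pipeline itself:

parameters {
    // IMAGE_TAG is supplied by the CI pipeline's 'Trigger the CD Pipeline' stage
    string(name: 'IMAGE_TAG', defaultValue: '', description: 'Docker image tag built by the CI pipeline')
}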

Now let's create a Jenkinsfile for the gitops-cd-pipeline job which we just configured on Jenkins.

Also, add the "Trigger the CD Pipeline" stage to the first Jenkinsfile for the "GitOps-CICD-Pipeline" pipeline.

Also, create a new Jenkinsfile in the "gitops-cd-pipeline" repository. This pipeline will modify the deployment.yaml file; that modification will be detected by the ArgoCD operator, which is in sync with this CD repository, and the new image will be deployed to the Kubernetes cluster automatically by ArgoCD.

pipeline{
    agent{
        label "jenkins-agent"
    }
    environment{
        APP_NAME="gitops-cicd-pipeline"
    }
    stages{
        stage("Cleanup Workspace"){
            steps{
                cleanWs()
            }
        }

        stage("Checkout From SCM"){
            steps{
                git branch: 'main', credentialsId: 'github', url: 'https://github.com/jayeshrajputtech/gitops-cd-pipeline'
            }
        }

        stage("Update the Deployment Tags"){
            steps{
                sh """
                    cat deployment.yaml
                    sed -i 's/${APP_NAME}.*/${APP_NAME}:${IMAGE_TAG}/g' deployment.yaml
                    cat deployment.yaml
                """
            }
        }
        stage("Push the Modified Deployment file to GitHub"){
            steps{
                sh """
                git config --global user.name "Jayesh"
                git config --global user.email "jayeshrajput.tech@gmail.com"
                git add deployment.yaml
                git commit -m "Updated Deployment Manifest"
                """
                withCredentials([gitUsernamePassword(credentialsId:'github', gitToolName:'Default')]){
                    sh "git push https://github.com/jayeshrajputtech/gitops-cd-pipeline main"
                }
            }
        }
    }
    post{
        always{
            script{
                def status = currentBuild.result ?: 'UNKNOWN'
                def color
                switch (status){
                    case 'SUCCESS':
                        color = 'good'
                        break
                    case 'FAILURE':
                        color = 'danger'
                        break
                    default:
                        color = 'warning'    
                }
                slackSend(channel: '#operations', message: "Update Deployment ${status.toLowerCase()} for ${env.JOB_NAME} (${env.BUILD_NUMBER}) - ${env.BUILD_URL}", iconEmoji: ':jenkins:', color:color)
            }
        }
    }
}

Below is the final Jenkinsfile for the application repository.

pipeline{
    agent{
        label "jenkins-agent"
    }
    tools{
        jdk 'Java17'
        maven 'Maven3'
    }
    environment{
        APP_NAME = "gitops-cicd-pipeline"
        RELEASE = "1.0.0"
        DOCKER_USERNAME = "jayeshrajput"
        DOCKER_PASSWORD = 'dockerhub-pass'
        IMAGE_NAME = "${DOCKER_USERNAME}" + "/" + "${APP_NAME}"
        IMAGE_TAG = "${RELEASE}-${BUILD_NUMBER}"
    }
    stages{
        stage("Cleanup Workspace"){
            steps{
                cleanWs()
            }
        }
        stage("Checkout From SCM"){
            steps{
                git branch: 'main', credentialsId: 'github', url: 'https://github.com/jayeshrajputtech/GitOps-CICD-Pipeline'
            }
        }
        stage("Build Application"){
            steps{
                sh "mvn clean package"
            }
        }
        stage("Test Application"){
            steps{
                sh "mvn test"
            }
        }
        stage("SonarQube Analysis"){
            steps{
                script{
                    withSonarQubeEnv(credentialsId: 'jenkins-sonarqube-token'){
                    sh "mvn sonar:sonar"
                    }
                }    
            }
        }
        stage("Quality Gate"){
            steps{
                script{
                    waitForQualityGate abortPipeline: false, credentialsId: 'jenkins-sonarqube-token'
                }
            }
        }
        stage("Docker Image Build and Push"){
            steps{
               script{
                docker.withRegistry('',DOCKER_PASSWORD){
                    docker_image = docker.build "${IMAGE_NAME}"
                }
                docker.withRegistry('',DOCKER_PASSWORD){
                    docker_image.push("${IMAGE_TAG}")
                    docker_image.push('latest')
                }
               } 
            }
        }
        stage("Trivy Artifact Scan"){
            steps {
                script{
                    sh ('docker run -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image ${IMAGE_NAME}:${IMAGE_TAG} --no-progress --scanners vuln  --exit-code 0 --severity HIGH,CRITICAL --format table > trivy-report.txt')
                }
            }
        }
        stage("Trigger the CD Pipeline"){
            steps{
              build job: 'gitops-cd-pipeline', parameters: [string(name: 'IMAGE_TAG', value: IMAGE_TAG)], wait: false
            }
        }
    }
    post{
        always{
            script{
                def status = currentBuild.result ?: 'UNKNOWN'
                def color
                switch (status){
                    case 'SUCCESS':
                        color = 'good'
                        break
                    case 'FAILURE':
                        color = 'danger'
                        break
                    default:
                        color = 'warning'    
                }
                slackSend(channel: '#operations', message: "Update Deployment ${status.toLowerCase()} for ${env.JOB_NAME} (${env.BUILD_NUMBER}) - ${env.BUILD_URL}", iconEmoji: ':jenkins:', color:color)
            }
        }
    }
}

Let's test the pipelines by building the first Jenkins pipeline which will trigger the CD pipeline.

As we can see "Scheduling project: gitops-cd-pipeline" in the console output, the last stage of our first Jenkins pipeline has triggered the second Jenkins pipeline, which performs the CD.

CD Pipeline Stages

The image tag is updated by the sed command.

The changes are pushed to the "gitops-cd-pipeline" named GitHub repository.

The changes in the GitHub repository are automatically detected by the ArgoCD operator, and the deployment and pods are updated with the new Docker image tag.

To test the CI and CD Jenkins pipelines, I will make some visible changes in the application so that we can understand the overall flow better.

This is the current version of the application, accessible from the Kubernetes Node's public IP on port number 32221.

I will update the heading from "GitOps CI/CD Pipeline" to "GitOps CI/CD Pipeline Updated" and push the changes to the application GitHub repository. Then I have to manually trigger the CI pipeline on Jenkins. Our current CI pipeline build number is 23, so the image tag will be updated with build number 24.

The image is updated and pushed to the Docker Hub.

CD pipeline is successfully triggered from the CI pipeline

The image tag is updated and pushed to the CD GitHub repository.

ArgoCD detected the changes and deployed the changes

Now let's visit the updated application version. We should be able to see the word "Updated" in the heading.

GitHub hook trigger for GITScm polling

Let's also implement the GitHub webhook trigger to automatically start the build on the CI pipeline whenever a push event occurs on the CI GitHub repository.

Navigate to the GitHub repository that we want Jenkins to poll or watch. Go to the Settings tab and select the Webhooks option, then select the Add webhook option in the top-right corner.

In the Payload URL field, insert the Jenkins controller node URL and append github-webhook/ to the base Jenkins URL. In our case it is "https://dev.ui.jenkins.iseasy.tw/github-webhook/"
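On the Jenkins side, enable the "GitHub hook trigger for GITScm polling" option in the pipeline job's Build Triggers section. Assuming the GitHub plugin is installed, the equivalent trigger can also be declared in the Jenkinsfile:

triggers {
    // Start a build whenever GitHub delivers a push event to /github-webhook/
    githubPush()
}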

I changed the heading and added "v1" to test the GitHub webhook trigger, and the build was triggered on Jenkins by the push event.

CD pipeline is also triggered from the CI pipeline

ArgoCD successfully updated the deployment and pods

Changes are successfully deployed and visible to the user.

Jenkins Slack Integration

Let's now set up Slack notifications whenever a Jenkins job is executed. Install the plugin named "Slack Notification" on Jenkins. Make sure you have a Slack workspace and channel created.

Create a credential on Jenkins to access the Slack workspace and channel securely. You can find the Slack token in the Slack App Directory under the Jenkins CI app.

Navigate to the Manage Jenkins section, select the System tab, go to the Slack section, and fill in the required details.

You should be able to see a test Jenkins message on the channel.

Conclusion

In this comprehensive guide, we have successfully set up a robust CI/CD pipeline using Jenkins and GitOps principles. By integrating Jenkins with various tools like Maven, SonarQube, Docker, Trivy, and ArgoCD, we have ensured secure builds, high-quality code, and seamless deployments. This setup not only automates the entire process from code commit to deployment but also incorporates essential security checks and quality gates, making it a reliable and efficient solution for modern software development. With this pipeline, developers can focus more on coding and less on the intricacies of deployment, knowing that their applications will be built, tested, and deployed in a consistent and secure manner.

About the Author

Jayesh is a senior college student passionate about guiding newcomers in Cloud and DevOps. He is determined to help others learn new cloud-native tools and technologies while advocating for inclusivity in the tech industry. Connect with Jayesh on LinkedIn
