PROJECT - DevOps.AI
- Nishant Nath
- Sep 27, 2023
- 20 min read
Updated: Aug 3, 2024
A CI/CD pipeline for deploying services to both a development (DEV) environment and a production (PROD) environment in a Kubernetes cluster.

Project Profile: Project #1
Project Name: DevOps.AI
Role: DevOps Engineer
Date: 25th July 2023 - Till Date
Environment: GitLab, BitBucket, Docker, Jenkins, Terraform, Kubernetes, AWS
Roles and Responsibilities:
Provisioned the K8s cluster t-cluster-1 on AWS using Terraform.
Dockerized the services (Frontend / Backend / Algo).
The Helm chart lives in GitLab; Jenkins checks out that repo and applies the chart to deploy the frontend and backend services to the K8s cluster t-cluster-1.
The service repositories (Frontend / Backend / Algo) are hosted in BitBucket.
The Helm chart performs the deployment for each respective service.
Jenkins needs to check out two repos in the end-to-end pipeline (the app repo and the Helm chart repo), so we used a multibranch pipeline to handle this.
For deployment accuracy, I added a status stage to the Jenkinsfile after both the DEV and PROD deployments; its task is to monitor the Kubernetes deployment rollout.
Integrated Jenkins Slack notifications to send a message on build start for both frontend and backend, including the service name and a link to the build.
Created a Jenkins job (for both manual and automated triggers) to perform cluster node auto-scaling via a CronJob.
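The status stage mentioned above is only referenced later (as `commonMethods.monitorK8Deployment` in the Jenkinsfile) and its body is not shown in the post. As a hedged sketch, it boils down to assembling and running a `kubectl rollout status` check per environment; the deployment name and timeout here are assumptions.

```shell
# Hypothetical sketch of the command a deployment-status stage might run.
# Deployment naming (my-app-<service>) and the timeout are assumptions.
monitor_cmd() {
  env="$1"; service="$2"
  echo "kubectl rollout status deployment/my-app-${service} -n ${env} --timeout=300s"
}

monitor_cmd dev frontend
# → kubectl rollout status deployment/my-app-frontend -n dev --timeout=300s
```

`kubectl rollout status` exits non-zero if the rollout does not complete within the timeout, which is what lets a Jenkins stage fail the build on a bad deployment.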
Project Overview:

In this diagram:
Terraform is used to provision the infrastructure on AWS.
AWS represents the cloud infrastructure where our Kubernetes cluster is deployed.
Kubernetes is where our applications and services will run.
Docker is used to containerize our services, making them portable and easy to deploy.
Services Repositories and Helm Chart represent the source code repositories for our services and the Helm charts used to deploy those services onto Kubernetes.
GitLab and Bitbucket are the platforms where we manage our source code repositories.
Jenkins is used for automation. It can pull code from our repositories, build Docker containers, deploy them to Kubernetes using Helm charts, and test the deployment end-to-end.
Provisioning Cluster via Terraform:
1. main.tf:
This Terraform configuration creates an EKS cluster and configures a managed node group along with the cluster autoscaler for dynamic node scaling based on load.
It's important to note that the success of this Terraform configuration depends on having the necessary AWS credentials and IAM roles configured.
# AWS EKS cluster provision
module "eks" {
  // module "eks": defines a Terraform module named "eks". Modules are reusable configurations that can be used to provision infrastructure resources.
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"
  // source: the module comes from the "terraform-aws-modules" organization on the Terraform Registry, pinned to version "19.16.0". This module simplifies provisioning an EKS cluster on AWS.

  cluster_name    = var.cluster_name
  cluster_version = "1.27"
  // cluster_name: sets the name for the EKS cluster; the value is taken from the variable var.cluster_name.
  // cluster_version: the Kubernetes version for the EKS cluster. Here it's set to "1.27".

  cluster_endpoint_public_access = true
  // cluster_endpoint_public_access: this boolean determines whether the Kubernetes API server is reachable from the public internet. It's set to true, so the API server is publicly accessible.

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets
  // vpc_id: the ID of the VPC (Virtual Private Cloud) where the EKS cluster is deployed, taken from an output of another module, module.vpc.
  // subnet_ids: the subnets where the EKS worker nodes are placed; it uses the private subnets from the same module.vpc.

  eks_managed_node_group_defaults = {
    disk_size = 10
  }
  // eks_managed_node_group_defaults: default configuration for the managed node groups created for the cluster; here, a default disk size of 10 GB.

  eks_managed_node_groups = {
    general = {
      name           = "${var.cluster_name}-node-group"
      desired_size   = 1
      min_size       = 1
      max_size       = 2
      instance_types = ["t2.medium"]
      capacity_type  = "ON_DEMAND"
    }
  }
}
// eks_managed_node_groups: the managed node group configuration, named "general". It defines the desired number of nodes (1), minimum and maximum size (1 and 2), instance type (t2.medium), and capacity type (ON_DEMAND).

# To deploy the cluster autoscaler, which scales the nodes based on load using the ASG.
module "eks-cluster-autoscaler" {
  source  = "lablabs/eks-cluster-autoscaler/aws"
  version = "2.1.0"

  cluster_identity_oidc_issuer     = module.eks.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
  cluster_name                     = var.cluster_name
}
// module "eks-cluster-autoscaler": similar to the first module, this defines a second module named "eks-cluster-autoscaler", used to deploy the cluster autoscaler for scaling nodes based on load.
// source: the cluster autoscaler module comes from "lablabs/eks-cluster-autoscaler/aws", version "2.1.0".
// cluster_identity_oidc_issuer and cluster_identity_oidc_issuer_arn: configure the OIDC (OpenID Connect) issuer URL and provider ARN (Amazon Resource Name) for the EKS cluster, referencing outputs from the first EKS module (module.eks).
2. variables.tf:
The variables.tf file defines variables that can be used throughout our Terraform configuration to parameterize and customize the provisioning process.
These variables allow us to customize our Terraform configuration with values specific to our AWS environment, such as the region, availability zones, AWS user profile name, and the name we want to assign to our EKS cluster.
By declaring these variables, we can make our Terraform configuration more flexible and reusable across different environments or projects.
When we run terraform apply, we would provide values for these variables either directly in the command line or through a separate variable file.
# AWS variables
variable "aws_region" {
  description = "AWS region"
  type        = string
}

variable "availability_zone" {
  description = "AWS availability zones"
  type        = list(string)
}

variable "profile_name" {
  description = "AWS user profile name"
  type        = string
}

variable "cluster_name" {
  description = "AWS kubernetes cluster name"
  type        = string
}

3. provider.tf:
The provider.tf file in Terraform is used to configure the cloud provider and specify various settings related to that provider. In this case, it's configuring the AWS provider.
Overall, this file sets up the AWS provider for our Terraform configuration, allowing us to provision AWS resources in the specified region and using the specified AWS profile for authentication and authorization.
terraform {
  // terraform: this block specifies Terraform-level settings.
  required_providers {
    // required_providers: the providers required by this configuration. In this case, it's the AWS provider.
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.20.0"
    }
  }
  // aws: the name of the provider.
  // source: where to download the provider from. Here, it's the official HashiCorp AWS provider.
  // version: the minimum version of the AWS provider required.
  required_version = ">= 0.14"
}

provider "aws" {
  region = var.aws_region
  #shared_credentials_file = "/home/ubuntu/.aws/credentials"
  profile = var.profile_name
}

4. vpc.tf:
The vpc.tf file is using a Terraform module to create an AWS Virtual Private Cloud (VPC) and related resources.
Terraform configuration file is responsible for creating an AWS VPC with public and private subnets across multiple availability zones, setting up NAT gateways, and enabling DNS support.
The VPC is given a unique name based on the var.cluster_name variable, and it's tagged for identification. This VPC will likely serve as the network infrastructure for our EKS cluster.
# AWS VPC
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"
  // source: the module comes from the Terraform Registry; the terraform-aws-modules/vpc/aws module creates the VPC and related networking resources.

  name = "vpc-${var.cluster_name}"
  cidr = "10.0.0.0/16"
  // cidr: the IP address range for the VPC. Here, it's set to 10.0.0.0/16.

  azs             = var.availability_zone
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false
  enable_dns_hostnames   = true
  enable_dns_support     = true

  tags = {
    Environment = "staging"
  }
}

5. outputs.tf:
The outputs.tf file is used to define outputs that can be queried after running our Terraform configuration.
This output variable allows us to access the name of our EKS cluster after the Terraform configuration is applied.
We can use this output in subsequent Terraform configurations or scripts, or simply query it using the terraform output command to get the cluster name.
output "kubernetes_cluster_name" {
  value       = var.cluster_name
  description = "EKS Cluster Name"
}
// output "kubernetes_cluster_name" - This block defines an output variable named kubernetes_cluster_name.
// value: Specifies the value that this output variable will hold. In this case, it's set to var.cluster_name.

6. terraform.tfvars:
The terraform.tfvars file is used to define values for variables used in our Terraform configuration.
# AWS secret variables
profile_name = "your profile name"
// profile_name - This variable is set to a placeholder ("your profile name"). It represents the AWS IAM user profile name that Terraform will use to authenticate with AWS when creating resources. The IAM user associated with this profile should have the necessary permissions to create and manage the AWS resources defined in our Terraform configuration.
availability_zone = ["ap-south-1a","ap-south-1b"]
// availability_zone - This variable is set to a list of two availability zone identifiers: ["ap-south-1a", "ap-south-1b"]. Availability zones are distinct data centers within an AWS region. This list specifies the availability zones that our resources, such as EC2 instances or EKS nodes, can be distributed across. It's a good practice to distribute resources across multiple availability zones for high availability.
aws_region = "ap-south-1"
// aws_region - This variable is set to "ap-south-1", which represents the AWS region where our resources will be provisioned. In this case, it's the Asia Pacific (Mumbai) region.

7. DevOps.AI_infra_commission.sh:
This script is a convenient way to automate the provisioning of an AWS EKS cluster using Terraform.
It initializes Terraform, validates our configuration, generates a plan based on the cluster name provided as an argument, and then applies that plan to create the EKS cluster and associated resources.
#!/bin/bash
# This script takes 1 parameter: Cluster_name
cd $BASE_PROJECT_PATH
rm -rf .terraform/terraform.tfstate
terraform init
terraform validate
terraform plan -var "cluster_name=$1" -out=".AWS.plan"
terraform apply ".AWS.plan"
// #!/bin/bash: This is called a shebang and indicates that the script should be interpreted and executed using the Bash shell.
// cd $BASE_PROJECT_PATH: changes the current working directory to the value stored in the environment variable $BASE_PROJECT_PATH, which should be defined in the project's env file (/keys/env.key) at the repo root. Changing the directory ensures Terraform operates within the correct project directory.
// rm -rf .terraform/terraform.tfstate: removes the Terraform state file kept under the .terraform directory. Removing stale state is often done before initializing Terraform to start with a clean slate.
// terraform init: This command initializes the Terraform project in the current directory. It downloads any necessary plugins or modules defined in our Terraform configuration files.
// terraform validate: This command checks the validity of our Terraform configuration files, ensuring that they follow the correct syntax and have no obvious errors.
// terraform plan -var "cluster_name=$1" -out=".AWS.plan": generates an execution plan for Terraform to apply. It calculates what changes need to be made to reach the desired state described in our Terraform configuration files.
// -var "cluster_name=$1": sets a variable named cluster_name with the value passed as the first argument to the script.
// -out=".AWS.plan": specifies the filename where the plan should be saved. It's written as .AWS.plan in the current directory.
// terraform apply ".AWS.plan": applies the previously generated execution plan to create or modify AWS resources, using the plan file (.AWS.plan) to perform the actual infrastructure provisioning.

After running the above script, it should bring up a cluster via Terraform.
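The script assumes its single argument is always provided; if $1 is empty, terraform plan would be given an empty cluster_name. A hedged hardening sketch (the guard function name is hypothetical) that such a wrapper could run before touching Terraform:

```shell
# Hypothetical pre-flight check: fail fast when the cluster name is missing.
require_cluster_name() {
  if [ -z "$1" ]; then
    echo "usage: DevOps.AI_infra_commission.sh <cluster_name>" >&2
    return 1
  fi
  echo "provisioning cluster: $1"
}

require_cluster_name t-cluster-1
# → provisioning cluster: t-cluster-1
```

Calling it with no argument prints a usage message to stderr and returns non-zero, so the script can `exit 1` before running any Terraform commands.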

8. DevOps.AI_infra_decommission.sh:
This Bash script deprovisions and destroys an AWS EKS (Amazon Elastic Kubernetes Service) cluster and its associated resources created using Terraform.
It runs the terraform destroy command with the --auto-approve flag to initiate the destruction, and cleans up afterwards by removing the .terraform directory.
#!/bin/bash
# This script takes 1 parameter Cluster_name
cd $BASE_PROJECT_PATH
terraform destroy -var "cluster_name=$1" --auto-approve
rm -rf .terraform
// terraform destroy: This Terraform command initiates the destruction of the resources defined in your Terraform configuration files. It essentially reverses the infrastructure provisioning.
// -var "cluster_name=$1": This flag sets a variable named cluster_name with the value passed as the first argument to the script. This variable allows Terraform to identify and destroy the correct resources.
// --auto-approve: This flag automates the approval process for destroying resources, so we don't have to manually confirm the destruction.
// rm -rf .terraform: This line removes the .terraform directory, which stores Terraform-related data for our project. Removing this directory is typically done after destroying resources to clean up Terraform-related files.

Helm Charts in GitLab
Helm is a package manager for Kubernetes that simplifies the deployment and management of applications.
Helm charts are a collection of pre-configured Kubernetes resources that can be easily installed and managed on a cluster.
DevOps.AI-helm-chart/
├── Chart.yaml
├── common_values.yaml
├── values-dev.yaml
├── values-prod.yaml
└── templates/
    ├── frontend/
    │   ├── _helpers-frontend.tpl
    │   ├── deployment.yaml
    │   ├── ingress-controller.yaml
    │   └── service.yaml
    ├── backend/
    └── algo/
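With this layout, a deployment to a given environment boils down to a single Helm invocation layering the common values under the environment-specific ones. The release name and exact flags below are assumptions (the post wraps the real call inside commonMethods.groovy, which is not shown):

```shell
# Hypothetical sketch: assemble the per-environment helm command.
# Release name "my-app" and flag choices are assumptions.
helm_deploy_cmd() {
  env="$1"
  echo "helm upgrade --install my-app ./DevOps.AI-helm-chart -f common_values.yaml -f values-${env}.yaml --namespace ${env}"
}

helm_deploy_cmd dev
# → helm upgrade --install my-app ./DevOps.AI-helm-chart -f common_values.yaml -f values-dev.yaml --namespace dev
```

Because later values files override earlier ones, `values-dev.yaml` and `values-prod.yaml` only need to carry what differs per environment.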
1. Chart.yaml:
It provides metadata and configuration information about our helm chart, including its name, description, type, and version information.
It helps users understand the purpose and compatibility of the chart.
apiVersion: v2
name: my-app
description: A Helm chart for deploying my-app application to Kubernetes
// apiVersion: v2: Specifies the version of the Helm chart API being used. In this case, it's version 2 of the Helm chart API.
type: application
// type: application: Defines the type of chart. Here, it's an "application" chart, which typically includes templates for deploying applications.
version: 0.1.0
// version: 0.1.0: Specifies the version of the Helm chart. It's important to increment this version whenever you make changes to your chart or its templates.
appVersion: "1.1.0"
appVersion: "1.1.0"
// appVersion: "1.1.0": Specifies the version of the application being deployed by this chart.

2. common_values.yaml:
It defines common configuration settings for our Helm chart, covering the frontend, backend, and algo services.
It lets us define image information, probes (readiness and liveness), Ingress settings, and service details in one place for consistency across services.
frontend:
  image:
    repository: 927491280662.dkr.ecr.ap-south-1.amazonaws.com/my-app-frontend
    pullPolicy: IfNotPresent
  readinessProbe:
    path: /
    port: 80
    initialDelaySeconds: 10
    periodSeconds: 10
  livenessProbe:
    path: /
    port: 80
    initialDelaySeconds: 15
    periodSeconds: 20
  ingress:
    ingressClassName: nginx
    path: /
    pathType: Prefix
  service:
    type: ClusterIP
    port: 80
    targetPort: 80
backend:
  image:
    repository: 927491280662.dkr.ecr.ap-south-1.amazonaws.com/my-app-backend
    pullPolicy: IfNotPresent
  readinessProbe:
    path: /status
    port: 3001
    initialDelaySeconds: 10
    periodSeconds: 10
  livenessProbe:
    path: /status
    port: 3001
    initialDelaySeconds: 15
    periodSeconds: 20
  ingress:
    ingressClassName: nginx
    path: /
    pathType: Prefix
  service:
    type: ClusterIP
    port: 3001
    targetPort: 3001
algo:
  image:
    repository: 927491280662.dkr.ecr.ap-south-1.amazonaws.com/my-app-algo
    pullPolicy: IfNotPresent
  readinessProbe:
    path: /status
    port: 5300
    initialDelaySeconds: 500
    periodSeconds: 10
  livenessProbe:
    path: /status
    port: 5300
    initialDelaySeconds: 500
    periodSeconds: 20
  ingress:
    ingressClassName: nginx
    path: /
    pathType: Prefix
  service:
    type: ClusterIP
    port: 5300
    targetPort: 5300

3. values-dev.yaml:
The values-dev.yaml file customizes the configuration for the development environment, including the number of replicas, image tags, Ingress settings, and configuration file names for each service.
namespace: dev
frontend:
  replicaCount: 1
  image:
    tag: "1"
  ingress:
    hosts:
      - my-app.domain.in
    tls:
      - secretName: my-app-tls-dev
        hosts:
          - my-app.domain.in
backend:
  replicaCount: 1
  image:
    tag: "1"
  ingress:
    hosts:
      - my-app.domain.in
    tls:
      - secretName: my-app-tls-dev
        hosts:
          - my-app.domain.in
  configFileName: "files/backend.env.dev.json"
// configFileName: Specifies the name of the configuration file for the backend service, "backend.env.dev.json", located in the "files" directory.
algo:
  replicaCount: 1
  image:
    tag: "1"
  ingress:
    hosts:
      - my-app.domain.in
    tls:
      - secretName: my-app-tls-dev
        hosts:
          - my-app.domain.in
  configFileName: "files/algo.env.dev.json"

4. values-prod.yaml:
This file specifies configuration settings for the production (prod) environment. Similar to the previous file, it includes settings for frontend, backend, and algo services.
namespace: prod
frontend:
  replicaCount: 1
  image:
    tag: "1"
  ingress:
    hosts:
      - my-app.domain.in
    tls:
      - secretName: my-app-tls-prod
        hosts:
          - my-app.domain.in
backend:
  replicaCount: 1
  image:
    tag: "1"
  ingress:
    hosts:
      - my-app.domain.in
    tls:
      - secretName: my-app-tls-prod
        hosts:
          - my-app.domain.in
  configFileName: "files/backend.env.prod.json"
algo:
  replicaCount: 1
  image:
    tag: "1"
  ingress:
    hosts:
      - my-app.domain.in
    tls:
      - secretName: my-app-tls-prod
        hosts:
          - my-app.domain.in
  configFileName: "files/algo.env.prod.json"

5. templates:
The template files below are for the frontend service, but the same templates can be used for any service by changing the variable name from frontend to backend or algo.
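As a hedged illustration of that point, retargeting the frontend templates for another service is essentially a name substitution throughout the file; the paths and helper name here are hypothetical:

```shell
# Hypothetical sketch: clone a frontend template for another service by
# swapping the service name on stdin.
retarget() {
  service="$1"
  sed "s/frontend/${service}/g"
}

echo 'app: my-app-frontend' | retarget backend
# → app: my-app-backend
```

In practice one would also rename `_helpers-frontend.tpl` and the `app.frontendFullname` template accordingly, so each service gets its own helper.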
a. _helpers-frontend.tpl:
{{/*
frontend fullname
*/}}
// {{/* ... */}} is Helm's comment syntax, used to add comments within template files.
{{- define "app.frontendFullname" -}}
{{ .Chart.Name }}-frontend
{{- end }}
// This defines a named template, app.frontendFullname, that returns the full name of the frontend component: the chart name with -frontend appended. This full name is used to name the Kubernetes resources created for this component.

b. deployment.yaml:
The deployment.yaml file describes how the frontend component should be deployed and managed within a Kubernetes cluster.
It specifies the container image, resource labels, replicas, and probes for health checks.
apiVersion: apps/v1
kind: Deployment
// apiVersion and kind: specify the Kubernetes API version and resource kind. In this case, it's a Deployment resource in the apps/v1 API version.
metadata:
  name: {{ include "app.frontendFullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    app: my-app-frontend
// metadata: provides metadata for the Deployment resource. The name is set to the full name of the frontend component obtained from the app.frontendFullname template, and the namespace comes from the Helm values file (.Values.namespace).
// labels: key-value pairs used to identify and organize resources. The app: my-app-frontend label is applied to this Deployment.
spec:
  replicas: {{ .Values.frontend.replicaCount }}
  selector:
    matchLabels:
      app: my-app-frontend
// spec: defines the desired state for the Deployment.
// replicas: the number of desired replicas for the frontend Deployment, taken from .Values.frontend.replicaCount. This value lets us control how many instances of the frontend service run.
// selector: specifies how the Deployment selects which Pods to manage. It uses matchLabels to select Pods with the label app: my-app-frontend.
  template:
    metadata:
      labels:
        app: my-app-frontend
// template: the pod template used to create new Pods when scaling. It specifies the labels for Pods created by this Deployment, including app: my-app-frontend.
    spec:
      containers:
        - name: {{ .Release.Name }}-frontend
          image: "{{ .Values.frontend.image.repository }}:{{ .Values.frontend.image.tag }}"
          imagePullPolicy: {{ .Values.frontend.image.pullPolicy }}
// containers: the containers to run within the Pods.
// name: set to {{ .Release.Name }}-frontend. The {{ .Release.Name }} variable is replaced with the Helm release name, so each release gets a unique container name.
// image: the Docker image for the container, combining the repository and tag from the Helm values file (.Values.frontend.image.repository and .Values.frontend.image.tag).
// imagePullPolicy: when to pull the container image, taken from the Helm values file (.Values.frontend.image.pullPolicy).
          ports:
            - name: http
              containerPort: {{ .Values.frontend.service.targetPort }}
              protocol: TCP
// ports: the container ports. A single port named "http" is defined, whose containerPort is set to the targetPort specified in the Helm values (.Values.frontend.service.targetPort).
          readinessProbe:
            httpGet:
              path: {{ .Values.frontend.readinessProbe.path }}
              port: {{ .Values.frontend.readinessProbe.port }}
            initialDelaySeconds: {{ .Values.frontend.readinessProbe.initialDelaySeconds | default "10" }}
            periodSeconds: {{ .Values.frontend.readinessProbe.periodSeconds | default "10" }}
          livenessProbe:
            httpGet:
              path: {{ .Values.frontend.livenessProbe.path }}
              port: {{ .Values.frontend.livenessProbe.port }}
            initialDelaySeconds: {{ .Values.frontend.livenessProbe.initialDelaySeconds | default "15" }}
            periodSeconds: {{ .Values.frontend.livenessProbe.periodSeconds | default "20" }}
// readinessProbe and livenessProbe: define the readiness and liveness probes for the container. Kubernetes uses these to determine whether the container is ready and healthy. The httpGet field specifies an HTTP GET request to a path and port; initialDelaySeconds sets the delay before the first probe and periodSeconds the frequency of subsequent probes.

c. ingress-controller.yaml:
The ingress-controller.yaml file defines the Ingress resource for the frontend component, including routing rules, TLS settings, and the class of Ingress controller to use.
Its configuration is dynamic, based on the values provided in the Helm chart, which allows flexibility in how external traffic is routed to the frontend service.
{{- $frontend_fullName := include "app.frontendFullname" . -}}
apiVersion: networking.k8s.io/v1
// {{- $frontend_fullName := include "app.frontendFullname" . -}}: This is a Helm template expression. It defines a variable $frontend_fullName using the include function to obtain the full name of the frontend component as defined in the _helpers-frontend.tpl template.
kind: Ingress
metadata:
name: {{ include "app.frontendFullname" . }}
namespace: {{ .Values.namespace }}
// apiVersion and kind: These fields specify the Kubernetes API version and resource kind. In this case, it's an Ingress resource in the networking.k8s.io/v1 API version.
// metadata: This section provides metadata for the Ingress resource, including the name and namespace. The name is set to the full name of the frontend component obtained from the $frontend_fullName variable, and the namespace is set to the namespace specified in the Helm values file (.Values.namespace).
spec:
  ingressClassName: {{ .Values.frontend.ingress.ingressClassName }}
  {{- with .Values.frontend.ingress.tls }}
  tls:
    {{- tpl (toYaml .) $ | nindent 4 }}
  {{- end }}
// ingressClassName: the class of the Ingress controller to use, set from the Helm values file (.Values.frontend.ingress.ingressClassName).
// tls: this section is conditional and only rendered if TLS settings are defined in the Helm values file (.Values.frontend.ingress.tls). TLS (Transport Layer Security) is used to secure HTTP traffic.
// The tls block is built with Helm template functions: toYaml converts the TLS settings to YAML, tpl renders any template expressions inside those values against the root context ($), and nindent 4 indents the result by four spaces on a new line. This generates the tls section dynamically from the values provided in the chart.
  rules:
    {{- range .Values.frontend.ingress.hosts }}
    - host: {{ tpl . $ }}
      http:
        paths:
          - path: {{ $.Values.frontend.ingress.path }}
            pathType: {{ $.Values.frontend.ingress.pathType }}
            backend:
              service:
                name: {{ $frontend_fullName }}
                port:
                  number: {{ $.Values.frontend.service.port }}
    {{- end }}
// rules: the routing rules for incoming requests.
// range .Values.frontend.ingress.hosts: a loop that iterates over the list of hostnames specified in the Helm values file (.Values.frontend.ingress.hosts).
// host: set to the current hostname being iterated over.
// http: the HTTP routing for the given hostname.
// paths: the paths for incoming requests. A single path and pathType are taken from the chart values (.Values.frontend.ingress.path and .Values.frontend.ingress.pathType).
// backend: the backend service to route requests to.
// service: the name of the service to route requests to, set to the $frontend_fullName variable, the full name of the frontend component.
// port: the port number, set from the Helm values file (.Values.frontend.service.port).
d. service.yaml:
The service.yaml file defines a Kubernetes Service resource for the frontend component, specifying how traffic is routed to the Pods based on their labels, the type of exposure (e.g., ClusterIP), and the ports to open for incoming traffic.
The configuration is dynamic, driven by the values provided in the Helm chart, which allows flexibility in how the frontend service is exposed and accessed within the cluster.
apiVersion: v1
kind: Service
metadata:
  name: {{ include "app.frontendFullname" . }}
  namespace: {{ .Values.namespace }}
spec:
  selector:
    app: my-app-frontend
  type: {{ .Values.frontend.service.type }}
// type: determines how the Service is exposed. The value is obtained from the Helm values file (.Values.frontend.service.type) and can be one of:
// a. ClusterIP: the Service is only accessible from within the cluster.
// b. NodePort: the Service is accessible via a static port on each node in the cluster.
// c. LoadBalancer: the Service is exposed externally using a cloud provider's load balancer.
  ports:
    - protocol: {{ .Values.frontend.service.protocol | default "TCP" }}
// protocol: the protocol used for the port, such as TCP or UDP. The value comes from the Helm values file (.Values.frontend.service.protocol), defaulting to "TCP" if not provided.
      port: {{ .Values.frontend.service.port }}
      targetPort: {{ .Values.frontend.service.targetPort }}
// targetPort: the port on the Pods that receives the incoming traffic, typically the same as the container's port in the Pod spec. The value comes from (.Values.frontend.service.targetPort).

Jenkinsfile
The Jenkinsfile is written in Groovy and defines a Jenkins Pipeline that automates the deployment of the frontend service (a web application) to both a development (DEV) and a production (PROD) environment.
This Jenkinsfile defines a complete CI/CD pipeline for deploying a frontend service with Docker containers, Kubernetes, and associated environment configurations.
It incorporates best practices such as environment separation (DEV and PROD) and manual approval for production deployments.
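The pipeline's deploy stages call a convert_env_to_json.sh helper to turn .env files into the JSON config files the chart mounts. That script is not included in the post, so the following is a minimal pure-shell sketch of what such a helper might do; its arguments and exact behavior are assumptions.

```shell
# Hypothetical sketch of an env-to-JSON conversion: read KEY=VALUE pairs
# from an env file and emit a flat JSON object on stdout.
env_to_json() {
  env_file="$1"
  first=1
  printf '{'
  while IFS='=' read -r key value; do
    # Skip blank lines and comment lines.
    case "$key" in ''|'#'*) continue ;; esac
    [ "$first" -eq 0 ] && printf ','
    printf '"%s":"%s"' "$key" "$value"
    first=0
  done < "$env_file"
  printf '}\n'
}

# Example (paths hypothetical): env_to_json .env.dev > files/backend.env.dev.json
```

A production version would also need to escape quotes and backslashes in values; this sketch only shows the overall shape of the transformation.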
/* groovylint-disable DuplicateStringLiteral, LineLength, VariableName, VariableTypeRequired */
/* groovylint-disable-next-line CompileStatic */
DEV_ENV_NAME = 'dev'
PROD_ENV_NAME = 'prod'
CURRENT_HELM_SERVICE_NAME = 'frontend'
BITBUCKET_CRED_ID = 'bitbucket_cred'
GITLAB_CRED_ID = 'devops-toolchain-cred'
// Environment variables DEV_ENV_NAME, PROD_ENV_NAME, and others are defined at the beginning to store configuration values and credentials.
pipeline {
// pipeline block defines the entire Jenkins Pipeline and contains multiple stages.
agent any
environment {
ECR_REPO_NAME = '927491280662.dkr.ecr.ap-south-1.amazonaws.com'
SCRIPT_PATH = './devops-toolchain/my-app_app_config'
HELM_PATH = './devops-toolchain/my-app_config/DevOps.AI_helm_chart'
BACKEND_ENV_FILE_PATH = './my-app-api'
ALGO_ENV_FILE_PATH = './my-app-algo/app'
AWS_CRED = 'Profile'
AWS_REGION = 'ap-south-1'
}
// Environment block: defines various environment variables used throughout the pipeline, such as AWS credentials, paths, and repository URLs.
stages {
// stages block contains individual stages representing different steps in the deployment process.
stage('SCM checkout') {
steps {
script {
slackSend message:"Build started - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|>)"
// checkout gitlab repository
checkout([
$class: 'GitSCM',
branches: [[name: '*/master' ]],
extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'devops-toolchain']],
userRemoteConfigs: [[
url: ' < Gitlab repo URL >',
credentialsId: GITLAB_CRED_ID
]]
])
// checkout DevOps.AI-api repository to get backend_configmaps
checkout([$class: 'GitSCM',
branches: [[name: '*/main' ]],
extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'DevOps.AI-api']],
userRemoteConfigs: [[
url: 'https://services@bitbucket.org/devops.ai/devops-api.git',
credentialsId: BITBUCKET_CRED_ID
]]
])
// checkout devops.api-algo repository to get backend_configmaps
checkout([$class: 'GitSCM',
branches: [[name: '*/main' ]],
extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'devops-algo']],
userRemoteConfigs: [[
url: 'https://services@bitbucket.org/devopsai/devops-algo.git',
credentialsId: BITBUCKET_CRED_ID
]]
])
// It also loads a Groovy script called commonMethods.groovy
commonMethodsCI = load "${WORKSPACE}/devops-toolchain/devops_app_config/commonMethods.groovy"
}
}
}
stage('DEV: Docker Build & Push') {
steps {
script {
// This stage builds a Docker image for the service using a custom method commonMethodsCI.buildImage()
// It tags the image with a version (IMAGE_TAG) and pushes it to an Amazon ECR repository.
IMAGE_TAG = "${BUILD_TIMESTAMP}.${BUILD_NUMBER}.${DEV_ENV_NAME}"
commonMethodsCI.buildImage(CURRENT_HELM_SERVICE_NAME, IMAGE_TAG, DEV_ENV_NAME)
commonMethodsCI.pushtoECR(CURRENT_HELM_SERVICE_NAME, IMAGE_TAG)
}
}
}
stage('DEV: Deploy to cluster') {
steps {
script {
// shell script reads the env file, converts it to JSON, and places the result in the Helm chart directory
sh(returnStdout: true, script: "${SCRIPT_PATH}/convert_env_to_json.sh ${BACKEND_ENV_FILE_PATH} .env.dev backend")
sh(returnStdout: true, script: "${SCRIPT_PATH}/convert_env_to_json.sh ${ALGO_ENV_FILE_PATH} .env.dev algo")
commonMethodsCI.deployToCluster(DEV_ENV_NAME, CURRENT_HELM_SERVICE_NAME, IMAGE_TAG)
}
}
}
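The convert_env_to_json.sh script invoked above is not listed in this write-up. A minimal sketch of what the conversion might look like, assuming plain KEY=VALUE lines in the .env files (the function name and simplified argument handling are illustrative, not the actual script):

```shell
#!/bin/sh
# Hypothetical sketch of convert_env_to_json.sh: turn KEY=VALUE lines from an
# env file into a flat JSON object, suitable for injection into Helm values.
convert_env_to_json() {
    envfile="$1"
    printf '{'
    first=1
    while IFS='=' read -r key value; do
        # Skip blank lines and comments
        case "$key" in ''|\#*) continue ;; esac
        [ "$first" -eq 1 ] || printf ','
        printf '"%s":"%s"' "$key" "$value"
        first=0
    done < "$envfile"
    printf '}\n'
}
```

The real script also takes a service name and writes the resulting JSON into the Helm chart directory; this sketch covers only the conversion itself, and assumes values contain no characters needing JSON escaping.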
stage('DEV: Monitor Deployment Status') {
steps {
script {
commonMethodsCI.monitorK8Deployment(DEV_ENV_NAME, CURRENT_HELM_SERVICE_NAME)
}
}
}
stage('PROD: Docker Build & Push') {
steps {
script {
// Create an approval gate with a timeout of 15 minutes.
timeout(time: 15, unit: 'MINUTES') {
input message: 'Do you want to approve the Production deployment?', ok: 'Yes'
}
IMAGE_TAG = "${BUILD_TIMESTAMP}.${BUILD_NUMBER}.${PROD_ENV_NAME}"
commonMethodsCI.buildImage(CURRENT_HELM_SERVICE_NAME, IMAGE_TAG, PROD_ENV_NAME)
commonMethodsCI.pushtoECR(CURRENT_HELM_SERVICE_NAME, IMAGE_TAG)
}
}
}
stage('PROD: Deploy to cluster') {
steps {
script {
echo 'Initiating the PROD deployment'
// shell script reads the env file, converts it to JSON, and places the result in the Helm chart directory
sh(returnStdout: true, script: "${SCRIPT_PATH}/convert_env_to_json.sh ${BACKEND_ENV_FILE_PATH} .env.prod backend")
sh(returnStdout: true, script: "${SCRIPT_PATH}/convert_env_to_json.sh ${ALGO_ENV_FILE_PATH} .env.prod algo")
commonMethodsCI.deployToCluster(PROD_ENV_NAME, CURRENT_HELM_SERVICE_NAME, IMAGE_TAG)
}
}
}
stage('PROD: Monitor Deployment Status') {
steps {
script {
commonMethodsCI.monitorK8Deployment(PROD_ENV_NAME, CURRENT_HELM_SERVICE_NAME)
}
}
}
}
post {
success {
slackSend color: '#36A64F', message: "Deployment of ${env.JOB_NAME} succeeded! (<${env.BUILD_URL}|Open>)"
}
failure {
slackSend color: '#FF0000', message: "Deployment of ${env.JOB_NAME} failed! (<${env.BUILD_URL}|Open>)"
}
}
}
commonMethods.groovy:
The commonMethods.groovy file is a Groovy script loaded by the Jenkins pipeline.
It contains the reusable functions the pipeline stages call to build and push Docker images, interact with Amazon Elastic Container Registry (ECR), deploy via Helm, and monitor Kubernetes deployments.
By abstracting this logic out of the Jenkinsfile, it keeps the pipeline more maintainable and readable.
/* groovylint-disable CompileStatic, FactoryMethodName, LineLength, MethodParameterTypeRequired, NoDef, ParameterName, UnusedVariable, VariableTypeRequired */
HELM_CHART_ALIAS = 'devops-app'
ECR_IMAGE_ALIAS = 'devops.ai'
// HELM_CHART_ALIAS & ECR_IMAGE_ALIAS: constants for the Helm release name prefix and the ECR image name prefix.
ALL_SERVICES = [
'frontend',
'backend',
'algo',
]
// ALL_SERVICES: A list of service names.
def buildImage(serviceName, TAG, envName) {
ECR_IMAGE_NAME = "${ECR_IMAGE_ALIAS}-${serviceName}"
sh "npm run build:${envName}"
sh "docker build -t ${ECR_IMAGE_NAME}:${TAG} ."
}
// buildImage Function:
// * Parameters: serviceName, TAG, envName
// * Builds a Docker image for a given service.
// * Constructs the image name from ECR_IMAGE_ALIAS and serviceName.
// * Runs npm run build for the specified environment (envName).
// * Builds a Docker image with the specified tag (TAG).
def buildImage2(serviceName, TAG) {
ECR_IMAGE_NAME = "${ECR_IMAGE_ALIAS}-${serviceName}"
sh "docker build -t ${ECR_IMAGE_NAME}:${TAG} ."
}
// buildImage2: same as buildImage, but skips the npm build step.
def pushtoECR(serviceName, TAG) {
ECR_IMAGE_NAME = "${ECR_IMAGE_ALIAS}-${serviceName}"
withAWS(credentials: "${env.AWS_CRED}", region: "${env.AWS_REGION}") {
sh 'aws ecr get-login-password --region ap-south-1 | docker login --username AWS \
--password-stdin 927491280662.dkr.ecr.ap-south-1.amazonaws.com'
sh "docker tag ${ECR_IMAGE_NAME}:${TAG} ${env.ECR_REPO_NAME}/${ECR_IMAGE_NAME}:${TAG}"
sh "docker push ${env.ECR_REPO_NAME}/${ECR_IMAGE_NAME}:${TAG}"
}
}
// pushtoECR Function:
// * Parameters: serviceName, TAG
// * Pushes a Docker image to Amazon ECR.
// * Constructs the image name from ECR_IMAGE_ALIAS and serviceName.
// * Uses the AWS CLI to log in to ECR, then tags and pushes the image.
def getExistingServiceTags(currentServiceName, envName) {
def otherServices = []
ALL_SERVICES.eachWithIndex { serviceName, i ->
if (serviceName != currentServiceName) {
otherServices.push(serviceName)
}
}
def otherServiceTagMap = [:]
otherServices.eachWithIndex { serviceName, i ->
def currentServiceFullName = "${HELM_CHART_ALIAS}-${serviceName}"
// trim() strips the trailing newline from the shell output
existingTag = sh(returnStdout: true, script: "${env.SCRIPT_PATH}/deployment_version.sh ${currentServiceFullName} ${envName}").trim()
otherServiceTagMap[serviceName] = existingTag
}
return otherServiceTagMap
}
// getExistingServiceTags Function:
// * Parameters: currentServiceName, envName
// * Retrieves the currently deployed tags for all services other than the current one.
// * Iterates through ALL_SERVICES and uses the deployment_version.sh script to read each service's current tag.
// * Returns a map of service names to their respective tags.
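The deployment_version.sh helper is referenced but not listed in this write-up. One plausible sketch, assuming it reads the image tag off the live Deployment object (the kubectl query and function names here are assumptions, not the actual script):

```shell
#!/bin/sh
# Hypothetical sketch of deployment_version.sh: print the image tag currently
# deployed for a service in a given namespace.
# Usage: deployment_version <deployment-name> <namespace>

# Strip everything up to the last ':' of an image reference, leaving the tag.
extract_tag() {
    printf '%s' "${1##*:}"
}

deployment_version() {
    deployment="$1"
    namespace="$2"
    # Read the container image from the running Deployment (assumed layout:
    # the service container is the first one in the pod spec).
    image=$(kubectl get deployment "$deployment" -n "$namespace" \
        -o jsonpath='{.spec.template.spec.containers[0].image}')
    extract_tag "$image"
}
```

Whatever the real script does, its contract is clear from the caller: it must print exactly one tag for a given deployment and namespace.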
/**
Example:
deployToCluster(dev, frontend, dev.111)
*/
def deployToCluster(envName, serviceName, serviceImageTag) {
withAWS(credentials: "${env.AWS_CRED}", region: "${env.AWS_REGION}") {
def releaseName = "${HELM_CHART_ALIAS}-${envName}"
def otherServiceTagMap = getExistingServiceTags(serviceName, envName)
println(otherServiceTagMap)
def helmCommandString = "helm upgrade --install ${releaseName} ${env.HELM_PATH} \
--values ${env.HELM_PATH}/common_values.yaml --values ${env.HELM_PATH}/values-${envName}.yaml --set \
${serviceName}.image.tag=${serviceImageTag}"
otherServiceTagMap.eachWithIndex { tagMap, i ->
helmCommandString = "${helmCommandString} --set ${tagMap.key}.image.tag=${tagMap.value}"
}
println(helmCommandString)
def helmCommand = helmCommandString.replaceAll('[\\t\\n\\r]+', ' ')
script {
sh "$helmCommand"
echo "Completed the ${envName} deployment"
}
}
}
// deployToCluster Function:
// * Parameters: envName, serviceName, serviceImageTag
// * Deploys a service to the Kubernetes cluster using Helm.
// * Constructs the Helm release name from HELM_CHART_ALIAS and envName.
// * Retrieves the existing tags of the other services via getExistingServiceTags, so deploying one service does not roll its siblings back to the chart defaults.
// * Builds a helm upgrade --install command with the chart values files and per-service image tags, executes it, and logs completion.
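Outside Jenkins, the command that deployToCluster assembles can be reproduced in plain shell. This sketch builds (but does not run) an equivalent helm upgrade invocation; ./helm-chart stands in for the HELM_PATH variable, and the extra name=tag arguments are illustrative:

```shell
#!/bin/sh
# Build the helm upgrade command deployToCluster would run, without executing it.
# Usage: build_helm_command <env> <service> <tag> [other-service=tag ...]
build_helm_command() {
    env_name="$1"; service="$2"; tag="$3"
    shift 3
    release="devops-app-${env_name}"   # HELM_CHART_ALIAS-envName
    chart="./helm-chart"               # placeholder for $HELM_PATH
    cmd="helm upgrade --install ${release} ${chart}"
    cmd="${cmd} --values ${chart}/common_values.yaml --values ${chart}/values-${env_name}.yaml"
    cmd="${cmd} --set ${service}.image.tag=${tag}"
    # Pin every other service to its currently deployed tag so this deploy
    # does not disturb them (mirrors getExistingServiceTags).
    for pair in "$@"; do
        cmd="${cmd} --set ${pair%%=*}.image.tag=${pair#*=}"
    done
    printf '%s\n' "$cmd"
}
```

Printing the command before running it (as the Groovy code does with println) makes failed deploys much easier to reproduce by hand.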
/**
Example:
monitorK8Deployment(dev, frontend)
*/
def monitorK8Deployment(envName, serviceName) {
timeout = 5 * 60 * 1000 // 5 minutes, in milliseconds to match System.currentTimeMillis()
startTime = System.currentTimeMillis() // measure from the start of this stage, not the start of the build
elapsedTime = 0
def currentServiceFullName = "${HELM_CHART_ALIAS}-${serviceName}"
while (elapsedTime < timeout) {
deploymentStatus = sh(script: "kubectl rollout status deployment/${currentServiceFullName} -n \
${envName}", returnStatus: true)
if (deploymentStatus == 0) {
echo "${currentServiceFullName} succeeded"
break
} else {
echo 'Deployment is still in progress. Waiting for 30 seconds...'
sleep(time: 30, unit: 'SECONDS')
elapsedTime = System.currentTimeMillis() - startTime
}
}
if (elapsedTime >= timeout) {
error "Deployment didn't succeed within the specified timeout. Terminating pipeline."
}
}
// monitorK8Deployment Function:
// * Parameters: envName, serviceName
// * Monitors the rollout status of a service's deployment in the Kubernetes cluster.
// * Uses a timeout of 5 minutes and re-checks the rollout status every 30 seconds.
// * If the rollout succeeds within the timeout, it prints a success message and stops polling.
// * If the rollout does not complete within the timeout, it terminates the pipeline with an error.
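The same polling logic can be expressed as a plain shell loop. In this sketch the readiness check is injected as an argument so the loop can be exercised without a cluster; in real use it would be a kubectl rollout status command, and the function name and parameters are illustrative:

```shell
#!/bin/sh
# Poll a readiness check until it passes or a timeout expires.
# Usage: monitor_deployment "<check command>" [timeout-seconds] [interval-seconds]
# Real use (assumption):
#   monitor_deployment "kubectl rollout status deployment/devops-app-frontend -n dev" 300 30
monitor_deployment() {
    check_cmd="$1"
    timeout_s="${2:-300}"
    interval_s="${3:-30}"
    elapsed=0
    while [ "$elapsed" -lt "$timeout_s" ]; do
        if $check_cmd; then
            echo "rollout succeeded"
            return 0
        fi
        echo "Deployment is still in progress. Waiting for ${interval_s} seconds..."
        sleep "$interval_s"
        elapsed=$((elapsed + interval_s))
    done
    echo "rollout timed out" >&2
    return 1
}
```

Note that kubectl rollout status blocks until the rollout finishes or fails, so the interval mainly governs how quickly a transient failure is retried.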
return this
// Return Statement:
// The script ends with return this, which hands the loaded script object (and its functions) back to the Jenkinsfile's load() call; beyond that it does not affect pipeline execution.

NOTE:
As this is an ongoing LIVE project, we continue to enhance the pipeline as new features are deployed to both the DEV and PROD environments, and I will keep this write-up updated accordingly.
THANK YOU
Nishant