How Bash Saved the Day: Deploying Helm Charts with a Bash CLI Program
In the world of software development, encountering roadblocks is part of the journey. Recently, I faced a significant challenge while trying to deploy a Helm chart to a Kubernetes cluster using Go. Despite my best efforts and countless hours of troubleshooting, a critical bug in the github.com/mittwald/go-helm-client
package thwarted my attempts. I wrote about that here: Painful Experience with Go and Kubernetes. However, Bash came to the rescue, allowing me to complete the deployment in a time-sensitive project at work. Here’s the story of how a simple Bash CLI Program saved the day.
The Challenge
The task was to automate the deployment of several Helm charts to a Kubernetes cluster. Initially, I opted to use Go for this purpose, leveraging the github.com/mittwald/go-helm-client
package. Unfortunately, a bug in the package prevented the successful deployment of the Helm charts, as detailed in my bug report here. With the project deadline looming, I needed a quick and reliable alternative.
The Solution: Bash CLI Program
Bash scripts are often underestimated, but their simplicity and power can be a lifesaver in critical situations. I wrote a Bash CLI program to handle the Helm deployments and integrated it into my Terraform script using a null_resource with a local-exec provisioner. Here’s the Bash script that made it all possible:
#!/bin/bash
### Pass Flags
# Define the help function
function help() {
  echo "Options:"
  echo "-r Resource Group Name"
  echo "-l Location of Resource Group"
  echo "-c Cluster Name"
  echo "-e Email for LetsEncrypt"
  echo "-d Domain name for Nginx"
  exit 1
}
# Initialize the default values for the variables the script actually uses.
rg="rg"
location="location"
cluster="cluster"
email="email"
domainname="domainname"
# Define the getopts option string
options=":r:l:c:e:d:h"
# Start the getopts code
while getopts ${options} opt; do
  case $opt in
    r) # Get the resource group name
      rg=${OPTARG}
      ;;
    l) # Get the location
      location=${OPTARG}
      ;;
    c) # Get the cluster name
      cluster=${OPTARG}
      ;;
    e) # Get the email
      email=${OPTARG}
      ;;
    d) # Get the domain name
      domainname=${OPTARG}
      ;;
    h) # Execute the help function
      help
      ;;
    \?) # Unrecognized option - show help
      echo "Invalid option."
      help
      ;;
  esac
done
# Remove the parsed options from the positional parameters.
shift $((OPTIND-1))
# End getopts code
## Connect to Cluster
echo "Connecting to AKS Cluster"
az aks get-credentials --resource-group "$rg" --name "$cluster" --overwrite-existing
## Wait for the cluster to come fully online.
echo "Waiting for the cluster to fully wake up so that Helm can install properly and not error. It's a 2-minute wait, go make a tea or coffee."
sleep 2m
## Add the Repos
helm repo add --force-update ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add --force-update cert-manager https://charts.jetstack.io
helm repo add --force-update argo https://argoproj.github.io/argo-helm
helm repo update
## Install nginx-ingress
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.replicaCount=2 \
  --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux
## Install Cert Manager
helm install cert-manager cert-manager/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.14.5 \
  --set nodeSelector."kubernetes\.io/os"=linux \
  --set installCRDs=true \
  --set 'extraArgs={--dns01-recursive-nameservers=1.1.1.1:53}'
## Install ArgoCD
helm install argocd argo/argo-cd \
  --namespace argocd --create-namespace \
  --version "6.9.2" \
  --set installCRDs=true \
  -f argocd-values.yaml \
  -f deployenvcongifmap.yaml
# Optional: port-forward locally to reach the Argo CD UI (this blocks the terminal, so it is left commented out).
# kubectl port-forward svc/argocd-server -n argocd 8080:443
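## Install the Cloudflare Origin CA Issuer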
git clone https://github.com/cloudflare/origin-ca-issuer.git
cd origin-ca-issuer
kubectl apply -f deploy/crds
kubectl apply -f deploy/rbac
kubectl apply -f deploy/manifests
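## Store the Cloudflare API token as a secret for cert-manager (API-TOKEN is a placeholder)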
kubectl create secret generic jasons-api-key \
  -n cert-manager \
  --from-literal api-token='API-TOKEN'
sleep 60
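## Create the Let's Encrypt ClusterIssuer (DNS-01 challenges are solved through Cloudflare)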
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: lets-encrypt-jasons-cert
  namespace: cert-manager
spec:
  acme:
    email: "$email"
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: lets-encrypt-jasons-cert
    solvers:
    - dns01:
        cloudflare:
          email: "$email"
          apiTokenSecretRef:
            name: jasons-api-key
            key: api-token
      selector:
        dnsZones:
        - 'companydomain.com'
EOF
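## Request a certificate from the new ClusterIssuer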
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-jasons-cert
spec:
  secretName: letsencrypt-jasons-cert
  issuerRef:
    name: lets-encrypt-jasons-cert
    kind: ClusterIssuer
  commonName: 'testing-jc2.companydomain.com'
  dnsNames:
  - 'testing-jc2.companydomain.com'
EOF
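## Replace the argocd-server Service so its http and https ports both target 8080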
cat << EOF | kubectl replace -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
  name: argocd-server
  namespace: argocd
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
EOF
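## Create the Ingress that exposes the Argo CD server on the supplied domain name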
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: lets-encrypt-jasons-cert
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
    nginx.ingress.kubernetes.io/backend-protocol: 'HTTPS'
spec:
  rules:
  - host: "$domainname"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - "$domainname"
    secretName: argocdingress-cert
EOF
echo "Deployment Complete"
Integrating with Terraform
To automate the execution of this Bash CLI program within my Terraform configuration, I used a null_resource with a local-exec provisioner. This allowed me to seamlessly call the Bash script and handle the Helm deployments as part of the Terraform workflow. Here’s a snippet of how I integrated the script with Terraform; the variable values are passed in from a locals file:
resource "null_resource" "aks_configure" {
provisioner "local-exec" {
command = "./setupcluster.sh -r ${local.rg_name} -l ${local.location_for_bash_script} -c ${local.kubernetesclustername} -e ${local.letsencryptemail} -d ${local.fulldnsname}"
interpreter = ["bash", "-c" ]
}
depends_on = [azurerm_kubernetes_cluster.new_prod_kubernetes, azurerm_resource_group.new_prod_rg]
}
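The locals referenced in that command live in a separate locals file. A minimal sketch, with purely placeholder values (only the names match the references above), looks like this:
# Hypothetical locals file - the names match the references above, the values are placeholders.
locals {
  rg_name                  = "my-resource-group"
  location_for_bash_script = "uksouth"
  kubernetesclustername    = "my-aks-cluster"
  letsencryptemail         = "you@example.com"
  fulldnsname              = "argocd.example.com"
}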
The Benefits of Bash
Simplicity: Bash scripts are easy to write and understand. They don’t require complex setup or dependencies, making them ideal for quick automation tasks.
Flexibility: With Bash, I could easily combine Helm commands, Kubernetes manifests, and even Git clones in a single script, and it simply uses whichever versions of the CLIs are installed rather than depending on a pinned client library.
Reliability: The script provided a reliable way to deploy the Helm charts, overcoming the limitations I faced with Go.
Conclusion
While encountering challenges is inevitable in software development, finding alternative solutions is crucial. In this case, Bash proved to be a reliable and efficient tool, allowing me to complete the Helm deployments on time. This experience reinforced the importance of having multiple tools in your arsenal and being adaptable in the face of obstacles.
If you ever find yourself in a similar situation, don’t underestimate yourself and the knowledge you have.