Building a CI/CD on GCP with Kubernetes

Last year, I gave a talk at the Nexus User Conference 2018 on how to build a CI/CD pipeline from scratch on AWS to deploy Dockerized microservices and serverless functions. You can read my previous Medium post for a step-by-step guide:



At the 2019 edition of the Nexus User Conference, I presented how to build a CI/CD workflow on GCP with GKE, Cloud Build, and Infrastructure as Code tools such as Terraform and Packer. This post will walk you through how to create an automated end-to-end process to package a Go-based web application in a Docker container image and deploy that container image to a Google Kubernetes Engine cluster.



Google Cloud Build allows you to define your pipeline as code in a template file called cloudbuild.yaml (this definition file must be committed to the application’s code repository). The continuous integration pipeline is divided into multiple stages or steps:

  • Quality Test: check whether our code is well formatted and follows Go best practices.
  • Unit Test: launch the unit tests. You could also output your coverage and validate that you’re meeting your code coverage requirements.
  • Security Test: inspect the source code for common security vulnerabilities.
  • Build: build a Docker image using Docker’s multi-stage build feature.
  • Push: tag and push the artifact (Docker image) to a private Docker registry.

steps:
- id: 'run quality test'
  name: 'golangci/golangci-lint'
  args: ['golangci-lint', 'run']

- id: 'run unit test'
  name: 'gcr.io/cloud-builders/go'
  args: ['test', 'app']
  env: ['GOPATH=.']

- id: 'run security checks'
  name: 'securego/gosec'
  args: ['app']
  env: ['GOPATH=.']

- id: 'build image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'registry.serverlessmovies.com/mlabouardy/app:$SHORT_SHA', '.']

- id: 'login to nexus'
  name: 'gcr.io/cloud-builders/docker'
  args: ['login', 'registry.serverlessmovies.com', '-u', '${_NEXUS_USERNAME}', '-p', '${_NEXUS_PASSWORD}']

- id: 'tag image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['tag', 'registry.serverlessmovies.com/mlabouardy/app:$SHORT_SHA', 'registry.serverlessmovies.com/mlabouardy/app:latest']

- id: 'push image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'registry.serverlessmovies.com/mlabouardy/app:$SHORT_SHA']

- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'registry.serverlessmovies.com/mlabouardy/app:latest']

Now we have to connect the dots. We are going to add a build trigger to initiate our pipeline. To do this, navigate to the Cloud Build console and create a new trigger. Fill in the details as shown in the screenshot below and create the trigger.



Notice the use of substitution variables instead of hardcoded Nexus registry credentials, for security purposes.
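
These underscore-prefixed keys are user-defined substitutions. In a trigger they are set in the trigger’s configuration, but if you ever run the pipeline by hand you can pass them on the command line (the values below are placeholders):

gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=_NEXUS_USERNAME=demo-user,_NEXUS_PASSWORD=demo-password .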

A new Webhook will be created automatically in your GitHub repository to watch for changes:



All good! Now that everything is configured, you can push your features to your repository and the pipeline will jump into action.



Once the CI stages finish, the Docker image will be pushed to the hosted Docker registry. If we jump back to Nexus Repository Manager, the image should be available:



Now that the Docker image is stored in a registry, we will deploy it to a Kubernetes cluster. Similarly, we will create the Kubernetes cluster on GKE using Terraform:

resource "google_container_cluster" "cluster" {
name = "${var.environment}"
location = "${var.zone}"

remove_default_node_pool = true

initial_node_count = "${var.k8s_nodes}"

master_auth {
username = ""
password = ""

client_certificate_config {
issue_client_certificate = false
}
}
}

resource "google_container_node_pool" "pool" {
name = "k8s-node-pool-${var.environment}"
location = "${var.zone}"
cluster = "${google_container_cluster.cluster.name}"
node_count = "${var.k8s_nodes}"

node_config {
preemptible = true
machine_type = "${var.instance_type}"

metadata {
disable-legacy-endpoints = "true"
}

oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
}
}
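
For reference, here is a minimal sketch of the input variables this configuration expects. The variable names come from the snippet above, but the descriptions and default values are my assumptions:

variable "environment" {
  description = "Environment name, also used as the cluster name"
  default     = "sandbox"
}

variable "zone" {
  description = "GCP zone to deploy the cluster into"
  default     = "europe-west1-b"
}

variable "k8s_nodes" {
  description = "Number of worker nodes in the node pool"
  default     = 3
}

variable "instance_type" {
  description = "Machine type for the worker nodes"
  default     = "n1-standard-1"
}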

Once the cluster is created, we will provision a new shell machine and issue the command below to configure the kubectl command-line tool to communicate with the cluster:

gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE
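
You can then verify that kubectl is talking to the right cluster:

kubectl get nodes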


Our image is stored in a private Docker repository. Hence, we need to generate credentials so that the K8s nodes can pull the image from the private registry. Authenticate with the registry using the docker login command, then create a Secret based on the Docker credentials stored in the config.json file (this file holds the authorization token):

docker login REGISTRY_URL -u USER -p PASSWORD

kubectl create secret generic nexus \
  --from-file=.dockerconfigjson=/home/$USER/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
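
Alternatively, kubectl can create a Secret of the same type directly from the registry credentials, skipping the intermediate config.json file (the credentials below are placeholders):

kubectl create secret docker-registry nexus \
  --docker-server=registry.serverlessmovies.com \
  --docker-username=USER \
  --docker-password=PASSWORD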

Now we are ready to deploy our container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: registry.serverlessmovies.com/mlabouardy/app:latest
        ports:
        - containerPort: 3000
        imagePullPolicy: Always
      imagePullSecrets:
      - name: nexus

To pull the image from the private registry, Kubernetes needs credentials. The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named nexus.

Run the following command to deploy your application, listening on port 3000:

kubectl apply -f deployment.yml
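
To confirm the rollout went through, you can check the Deployment status and the running pods:

kubectl rollout status deployment/app
kubectl get pods -l app=app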

By default, the containers you run on GKE are not accessible from the Internet because they do not have external IP addresses. You must explicitly expose your application to traffic from the Internet. I’m going to use a Service of type LoadBalancer for this demo, but you are free to use whatever you like.

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: app
  type: LoadBalancer
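
Assuming you saved the manifest above as service.yml (the filename is up to you), apply it and watch until GCP assigns an external IP:

kubectl apply -f service.yml
kubectl get service app --watch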

Once GCP has assigned an external IP address to your application, copy it.



Point your browser to that URL to check if your application is accessible:



Finally, to automatically deploy our changes to the K8s cluster, we need to update the cloudbuild.yaml file with continuous deployment steps. We will apply a rolling update to the existing Deployment by updating its image:

- id: 'configure kubectl'
  name: 'gcr.io/cloud-builders/gcloud'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
  - 'KUBECONFIG=/kube/config'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    gcloud container clusters get-credentials "$${CLOUDSDK_CONTAINER_CLUSTER}" --zone "$${CLOUDSDK_COMPUTE_ZONE}"
  volumes:
  - name: 'kube'
    path: /kube

- id: 'deploy to k8s'
  name: 'gcr.io/cloud-builders/gcloud'
  env:
  - 'KUBECONFIG=/kube/config'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    kubectl set image deployment/app app=registry.serverlessmovies.com/mlabouardy/app:$SHORT_SHA
  volumes:
  - name: 'kube'
    path: /kube
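
Note that these steps assume the Cloud Build service account is authorized to manage your GKE cluster. If the deployment step fails with a permissions error, you can grant the account the Kubernetes Engine Developer role (the project ID and number below are placeholders):

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
  --role=roles/container.developer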

Test it out by pushing some changes to your repository; within a minute or two, they should be deployed to your live infrastructure.



That’s it! You’ve just managed to build a solid CI/CD pipeline in GCP for whatever your application code may be.

You can take this workflow further and use the GitFlow branching model to separate your deployment environments, so you can test new changes and features without breaking your production:



Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy.
