      CI/CD of Microservices in Kubernetes

      • Posted by Damian Igbe
      • Categories Public Cloud
      • Date August 9, 2020

This is part 5 of the series on Managing Microservices with Kubernetes. You can read part 1 here, part 2 here, part 3 here, and part 4 here.

In part 1, we saw how Kubernetes is used to deploy a microservice. In that blog, I mentioned that several Kubernetes objects are used to deploy the voting application – Namespaces, Labels and Selectors, Pods, ReplicaSets, Deployments, and Service objects. In part 2 we explored some of the Kubernetes objects used in building the microservice application, and we also explored scaling a microservices application using the ReplicaSet controller object. In part 3 we explored using the Deployment object to scale the microservice application. In part 4 we explored the Kubernetes Service object.

In this blog, I will explore Rolling Update, Blue/Green, and Canary deployments. These are strategies used to keep web applications available 24/7. To achieve that, DevOps teams need to make frequent software updates, such as new features and security patches, without any disruption to the availability of the software. These practices are generally referred to as Continuous Integration and Continuous Deployment (CI/CD).

Kubernetes makes it easy to do CI/CD out of the box, without any additional tools. For some scenarios, however, external tools may be required to implement CI/CD. Below, I will discuss how you can implement CI/CD using Kubernetes features.

      Rolling Update:

With the rolling update feature, developers can easily update the application from one version to another while the application is live. Developers package the new version of the application into a Docker image and, once the image is ready, they can perform a rolling update. As the name indicates, when you have several replicas of the application running, a rolling update strategy replaces the replicas' images one after another. At any point during the rolling update, some pods will be running the new version of the application while others are still running the old image. Gradually, all the pods with the old image are replaced and only pods with the new image are left running. You can control how the rolling update is carried out, for example by specifying how many new pods can be added at a time.
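In the Deployment spec, this behaviour is controlled by the strategy field. The snippet below is a minimal sketch of what that section can look like (the values shown are illustrative, not taken from the voting app manifests):

spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 extra pod above the desired count during the update
      maxUnavailable: 1   # at most 1 pod may be unavailable at any time

With settings like these, Kubernetes adds a new pod, waits for it to become ready, removes an old pod, and repeats until all replicas run the new image.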

Here, I will perform a rolling update on the vote-app application. While the application is up and running, I will update the image to the latest version on the fly. The command to do that is below:

      kubectl set image deployment/vote vote=dockersamples/examplevotingapp_vote:latest -n vote

Note that in vote=dockersamples/examplevotingapp_vote:latest, vote is the container name in the Deployment YAML file.
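For reference, the relevant part of the vote Deployment manifest looks roughly like this (a sketch inferred from the commands and output in this post, not the exact file from the repository):

spec:
  template:
    spec:
      containers:
        - name: vote    # this is the container name referenced in "kubectl set image"
          image: dockersamples/examplevotingapp_vote:before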

      [email protected]:~/example-voting-app/k8s-specifications$ kubectl get pods -n vote
      NAME                      READY   STATUS              RESTARTS   AGE
      db-6789fcc76c-kkfnk       1/1     Running             0          17h
      redis-554668f9bf-7qt2x    1/1     Running             0          17h
      result-79bf6bc748-rhrv8   1/1     Running             60         17h
      vote-7478984bfb-9cd48     0/1     Terminating         0          37m
      vote-7478984bfb-wwcv5     0/1     Terminating         0          37m
      vote-7478984bfb-xps5h     1/1     Terminating         0          38m
      vote-75959b44c6-4bjg4     1/1     Running             0          20s
      vote-75959b44c6-4wgtq     1/1     Running             0          20s
      vote-75959b44c6-94ml7     0/1     ContainerCreating   0          10s
      vote-75959b44c6-g6g6s     1/1     Running             0          20s
      vote-75959b44c6-jsqms     1/1     Running             0          9s
      worker-dd46d7584-4dzzx    1/1     Running             1          17h

Here, we had 5 replicas of the vote microservice, and after updating the image to the latest version, the pods were replaced one after another. This is done in such a way that the application stays online and responsive while the upgrade happens live. Rolling updates are simple and easy to implement; the disadvantage is that you are changing the live system, so things can go wrong, but if they do you can roll back to the previous version.
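If you want to follow the progress of the update rather than repeatedly listing pods, you can watch it with:

kubectl rollout status deployment/vote -n vote

This command blocks until the rollout completes or fails.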

Rolling Back a Rolling Update

With a rolling update, you are performing the update on a live system, so it is important to be able to restore the system to its original version if things go wrong. Rollback enables you to restore the system to the previous version. Here I will perform a rollback, but first, let us confirm the current image.

      [email protected]:~/example-voting-app/k8s-specifications$ kubectl get pods -n vote
      NAME                      READY   STATUS    RESTARTS   AGE
      db-6789fcc76c-kkfnk       1/1     Running   0          17h
      redis-554668f9bf-7qt2x    1/1     Running   0          17h
      result-79bf6bc748-rhrv8   1/1     Running   61         17h
      vote-75959b44c6-4bjg4     1/1     Running   0          26m
      vote-75959b44c6-4wgtq     1/1     Running   0          26m
      vote-75959b44c6-94ml7     1/1     Running   0          26m
      vote-75959b44c6-g6g6s     1/1     Running   0          26m
      vote-75959b44c6-jsqms     1/1     Running   0          26m
      
      [email protected]:~/example-voting-app/k8s-specifications$ kubectl describe pod vote-75959b44c6-4bjg4 -n vote
      …
      Containers:
        vote:
          Container ID:   docker://68b2e1e7b9579877039f34199ce47f777a7a7a71ed5af88e66dc05ff9452ff86
          Image:          dockersamples/examplevotingapp_vote:latest
    Image ID:       docker-pullable://dockersamples/examplevotingapp_vote@sha256:b4e60557febfed6d345a09e5dce52aeeff997b7c
      …

The current image, dockersamples/examplevotingapp_vote:latest, is shown in the output above. Now that we have confirmed the image used by the containers, let's do a rollback by performing the following steps.

      [email protected]:~/example-voting-app/k8s-specifications$ kubectl rollout history deployment vote -n vote
      deployment.apps/vote
      REVISION  CHANGE-CAUSE
      1         <none>
      2         <none>
      
      [email protected]:~$kubectl rollout undo deployment/vote --to-revision=1 -n vote
      deployment.apps/vote rolled back
      
      [email protected]:~/example-voting-app/k8s-specifications$ kubectl get pods -n vote
      NAME                      READY   STATUS              RESTARTS   AGE
      db-6789fcc76c-kkfnk       1/1     Running             0          17h
      redis-554668f9bf-7qt2x    1/1     Running             0          17h
      result-79bf6bc748-rhrv8   1/1     Running             62         17h
      vote-7478984bfb-5scxb     0/1     ContainerCreating   0          7s
      vote-7478984bfb-6jwhq     0/1     ContainerCreating   0          1s
      vote-7478984bfb-csb2b     0/1     ContainerCreating   0          7s
      vote-7478984bfb-ctx4r     1/1     Running             0          7s
      vote-75959b44c6-4bjg4     1/1     Running             0          34m
      vote-75959b44c6-4wgtq     1/1     Running             0          34m
      vote-75959b44c6-94ml7     1/1     Terminating         0          34m
      vote-75959b44c6-g6g6s     1/1     Running             0          34m
      vote-75959b44c6-jsqms     0/1     Terminating         0          34m
      worker-dd46d7584-4dzzx    1/1     Running             1          17h
      
      [email protected]:~$kubectl describe pod vote-7478984bfb-5scxb -n vote
      Containers:
        vote:
          Container ID:   docker://0a0ddafd3e99fce866aac39fac93e939b2cea2f2e735cf2507f76821485cac35
          Image:          dockersamples/examplevotingapp_vote:before
    Image ID:       docker-pullable://dockersamples/examplevotingapp_vote@sha256:8e64b18b2c87de902f2b72321c89b4af4e2b942d76d0b772532ff27ec4c6ebf6
      

Above, I viewed the revision history and discovered that there were only 2 revisions. I then reverted to revision 1, which was the version using the :before image tag. After that, I described the pod, and it shows that the rollback has taken place.
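A quicker way to confirm which image the Deployment is now rolling out, without describing an individual pod, is to read it straight from the Deployment spec, for example:

kubectl get deployment vote -n vote -o jsonpath='{.spec.template.spec.containers[0].image}'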

To make the output of kubectl rollout history deployment vote -n vote more meaningful, you can add --record to the kubectl create command:

[email protected]:~$kubectl create -f vote-deployment.yaml --record
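The --record flag stores the command that caused the change in the kubernetes.io/change-cause annotation on the Deployment, which is what kubectl rollout history shows under CHANGE-CAUSE. You can also set that annotation yourself after each change, for example:

kubectl annotate deployment vote kubernetes.io/change-cause="update vote image to latest" -n vote --overwrite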
      
      

      Summary Features of Rolling Update Deployment

      • 1 active deployment
      • 1 service object (but not used in the Rolling update)
• Rollback available if things go wrong

      Blue/Green Deployment:

Earlier we performed a rolling update with just one Deployment object, and we did not have to use the Service object. The live production application was upgraded to a newer image version. Rolling update capability is built into Kubernetes and is quite simple to implement. Now we will look at another CI/CD strategy called Blue/Green deployment. Here we use:

      • 2 parallel deployments (Blue and Green deployments)
      • 1 Service object

[Figure: Blue/Green Deployment diagram]

In Blue/Green deployment, one deployment is live (Blue) while the other is offline (Green). New changes and testing are done on the offline Green deployment. The advantage of Blue/Green deployment compared to a rolling update is that we don't touch the live production system until we are really sure that the offline application is fully tested and ready for production. At that point, we edit the Service object and change the labels from Blue to Green. As explained in part 4, the Service object uses labels to select the endpoints/pods to direct traffic to, so by changing the labels from Blue to Green we redirect traffic from Blue to Green. By doing this, the Deployment objects have switched roles: the previous Green is now the Blue, and the previous Blue is now the Green. We can then start further development work on the new Green, and the cycle repeats. This is safer but more expensive than a rolling update because you have to maintain 2 parallel deployments.
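As a rough sketch of what the switch looks like (the label key/values such as version: blue and the port numbers are illustrative, not taken from the voting app manifests), the Service might initially select the Blue deployment like this:

apiVersion: v1
kind: Service
metadata:
  name: vote
  namespace: vote
spec:
  selector:
    app: vote
    version: blue    # change to "green" to redirect traffic to the Green deployment
  ports:
    - port: 80
      targetPort: 80

The switch itself can then be a single patch of the selector, which takes effect almost immediately:

kubectl patch service vote -n vote -p '{"spec":{"selector":{"version":"green"}}}'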

      Summary Features of Blue/Green Deployments

      • 2 parallel deployments (Blue and Green)
      • 1 service object
      • Only one deployment is active at a time
      • By changing the labels in the service object, the active deployment switches to the other deployment

       

      Canary Deployment:

      Canary deployment is very similar to Blue/Green deployment but with a little difference. In Canary deployment, we also need:

      • 2 parallel deployments (Blue and Green)
      • 1 Service object

[Figure: Canary Deployment diagram]

The main difference between Canary and Blue/Green deployment is that in Canary both the Blue and Green deployments are active at the same time, but the proportion of traffic going to each of them differs. You can decide, for example, to route 70% of the traffic to Blue and 30% to Green.
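With plain Kubernetes objects, the simplest way to approximate this split is with replica counts: both Deployments carry a common label that the Service selects, and the ratio of replicas determines the rough ratio of traffic. The sketch below uses illustrative names and labels (vote-blue, vote-green, version: blue/green) rather than the actual voting app manifests; the Service would select only app: vote, so it load-balances across both Deployments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote-blue
  namespace: vote
spec:
  replicas: 7                     # roughly 70% of traffic
  selector:
    matchLabels:
      app: vote
      version: blue
  template:
    metadata:
      labels:
        app: vote
        version: blue
    spec:
      containers:
        - name: vote
          image: dockersamples/examplevotingapp_vote:before
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote-green
  namespace: vote
spec:
  replicas: 3                     # roughly 30% of traffic
  selector:
    matchLabels:
      app: vote
      version: green
  template:
    metadata:
      labels:
        app: vote
        version: green
    spec:
      containers:
        - name: vote
          image: dockersamples/examplevotingapp_vote:latest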

But why have both deployments active at the same time? This is so that the Green deployment (which receives the smaller share of traffic) can be used for testing new features of the application before a full rollout to production. It is often a way to let certain customers try the new features and confirm that they meet expectations. Once the features have been confirmed functional:

• The Green deployment (smaller traffic) is scaled out to meet the production workload while the Blue deployment is taken offline

      Or

• The Blue deployment (larger traffic) is upgraded to the new image and scaled out to the production workload while the Green deployment is switched off

      At this point, only one Deployment is active and running the newer version of the application.
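Using the illustrative names from the sketch above, the first option would look something like this:

kubectl scale deployment vote-green -n vote --replicas=10
kubectl scale deployment vote-blue -n vote --replicas=0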

      Summary Features of Canary Deployment

      • 2 parallel deployments (Blue and Green)
      • 1 service object
      • Both Deployments active at the same time
• Traffic to the 2 deployments is split into different percentages (e.g. 70/30)
• Once the newer image is confirmed to meet requirements, only one deployment is left running

       

      Conclusion

Here I have described several strategies that you can use to keep your microservice application running 24/7 while continuously delivering new features using CI/CD principles. With a rolling update, you can update your application on the fly, and if things go wrong you can roll back to the previous version.

      With Blue/Green deployment, you use 2 different deployments but only one is active at any point. Updates are done on the offline deployment and when it is stable, the active deployment is switched to the new deployment.

Canary deployment is similar to Blue/Green deployment, but traffic is split between the 2 deployments in a chosen proportion. The active/stable deployment receives more traffic, while the smaller deployment is used to test certain features of the application.

Which of the strategies should you use? It depends on the team and which one they feel more comfortable with, in terms of both their skill set and suitability for the application they are working on. Some teams may even use multiple strategies at the same time. The rolling update is simpler and cheaper to implement but can be riskier if it fails, though you can always roll back to a working version. Blue/Green and Canary are safer because you are not working on the live version, but they can be more expensive since you need more resources to run the parallel deployments. Note that the use of blue/green is arbitrary; you can use red/black or any other colors of your choice, but blue/green is the most common. In this blog, Blue is active and Green is offline.

      At the end of the day, all strategies help to keep an application running 24/7 so choose which one better meets your needs.
