      Accessing Microservices with the Kubernetes Service Object

      • Posted by Damian Igbe
      • Categories Public Cloud
      • Date July 22, 2020

This is part 4 of the series on Managing Microservices with Kubernetes. You can read part 1 here, part 2 here, and part 3 here.

In part 1, we saw how Kubernetes is used to deploy a microservice. In that blog, I mentioned that a number of Kubernetes objects are used to deploy the voting application – Namespaces, Labels and Selectors, Pods, ReplicaSets, Deployments, and Service objects. In part 2 we explored some of the Kubernetes objects used in building the microservice application, and we also explored scaling a microservices application using the ReplicaSet controller object. In part 3 we explored using the Deployment object to scale the microservice application.

In this blog, I will explore the Service object. The Service object is important because it is what gives clients access to your microservice application.

      The Service Object:

The voting app is now up and running, having been deployed with the Deployment object. The next question is: how do we access the application? As we scale the application, we end up with several replicas, but we need to address all the replicas as a single application and not individually. Moreover, Kubernetes pods are ephemeral, and as you scale your microservice in and out, you cannot predict which pods will remain after each scaling activity. Without a persistent object in front of the pods, it would be impossible to access the microservice reliably. The traditional solution is to place a load balancer in front of all the replicas, and in Kubernetes this load balancer role is played by the Service object. This is illustrated below.

As you access the load balancer, it directs traffic to the individual pods. If any of the pods goes down, traffic is diverted to the pods that are up and running. This increases both the availability and the scalability of the application.

The Service object complements the Deployment object. The Deployment object answers the question: how do I get my application deployed and manage it on a day-to-day basis? The Service object answers the question: how do I get my customers to access and use my application? The two objects are therefore both essential, and when you look at any microservice application, it will mostly consist of one Deployment and one Service object per microservice. For example, if a microservice application consists of 5 microservices, you will mostly find 5 Deployment and 5 Service objects managing the application. This is the case with the voting-app microservice that we are looking at in this series of blogs.
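You can verify this pairing in the voting app by listing both object types with a single command:

kubectl get deployments,services -n vote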

There are different types of Service objects, and the main differentiator between them is their scope of reachability. The 3 types are:

      • ClusterIP – reachable only within the Kubernetes cluster
• NodePort – reachable wherever the IP addresses of the cluster nodes are reachable. A node IP could be reachable only from within the data center, or from the internet if the cluster is deployed on a public cloud.
• LoadBalancer – typically reachable from anywhere in the world, though it can also be restricted to the data center.

Each service type has its own use cases. For example, in a 2-tiered application the database can be reachable through a ClusterIP (local), whereas the web front end can be reachable through a LoadBalancer (global/internet reach).
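To make the ClusterIP case concrete, here is a minimal sketch of what a Service for the db microservice could look like. The selector and port here are assumptions based on the voting app (db pods labeled app: db, PostgreSQL on 5432); since ClusterIP is the default type, the type field could even be omitted:

apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: vote
spec:
  type: ClusterIP       # the default; reachable only from inside the cluster
  ports:
  - port: 5432          # port exposed on the service's cluster IP
    targetPort: 5432    # port the PostgreSQL container listens on
  selector:
    app: db             # assumed label on the db pods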

NodePort is used mostly for development and testing purposes and is not recommended for production workloads. Its scope of reachability depends on the IP addresses of the cluster nodes. For example, if you deployed your cluster on a public cloud platform like AWS, you can use the public IP addresses of your EC2 instances to access your application from anywhere in the world. However, if an IP address changes, you will have to deal with that yourself. It can also be a security risk to expose the IPs of your cluster nodes to the world.

The LoadBalancer service type is designed for production use cases. If your cluster runs on a public cloud like AWS, you can create a load balancer (an AWS Elastic Load Balancer) and attach it to the voting-app application. This assumes that a cloud-provider addon was configured when the cluster was deployed; deployment tools like kops can help you set this up. For more information on how to deploy with kops, take a look at this blog. For on-premise deployments, and for public-cloud deployments that do not have a cloud-provider addon, you need to deploy a load-balancer backend before you can use a Service of type LoadBalancer. One popular load-balancer backend is MetalLB: https://metallb.universe.tf/
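As a sketch of the production path, exposing the vote front end through a cloud load balancer only requires changing the Service type; everything else stays the same. This assumes a working cloud-provider addon (or a backend like MetalLB), and the ports mirror the vote service used throughout this series:

apiVersion: v1
kind: Service
metadata:
  name: vote
  namespace: vote
spec:
  type: LoadBalancer    # provisions an external load balancer via the cloud provider or MetalLB
  ports:
  - port: 5000          # port exposed by the service / load balancer
    targetPort: 80      # port the vote container listens on
  selector:
    app: vote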

The service types are built on each other. Every Service object must have a ClusterIP, even if it is of type NodePort or LoadBalancer, and every LoadBalancer Service also has both a ClusterIP and a NodePort.
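For example, if the vote service were switched to type LoadBalancer, kubectl get svc would show all three layers at once: the cluster IP, the external IP, and the node port inside the PORT(S) column (illustrative output only; the external IP is a placeholder, since this cluster has no cloud-provider addon):

NAME   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
vote   LoadBalancer   10.105.184.16   203.0.113.10   5000:31000/TCP   18h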

      Let us see the service types that voting-app uses.

      kubectl get svc -n vote
      NAME     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
      db       ClusterIP   10.107.229.158   <none>        5432/TCP         18h
      redis    ClusterIP   10.96.46.127     <none>        6379/TCP         18h
      result   NodePort    10.109.207.1     <none>        5001:31001/TCP   18h
      vote     NodePort    10.105.184.16    <none>        5000:31000/TCP   18h

Here we see that the db and redis services are of type ClusterIP while the result and vote services are of type NodePort. This makes sense because users don’t need direct access to the db and redis services, which are both backend services. However, users need to access the front-end services: they cast votes through the vote service and view the results through the result service, hence both are of type NodePort. Note that we cannot use type LoadBalancer here because the cluster where the microservice application is running has no cloud-provider addon. Also note that the 5th microservice, worker, does not have a Service object. This means that everything it needs to communicate with the other microservices is hardcoded.

      Service Discovery

Service discovery is an important feature of any microservices architecture. When services are created, they need to register themselves so that other components can find them without needing to know the services’ IP addresses. Kubernetes uses a DNS service (kube-dns or CoreDNS) for service discovery. Whenever a pod is created, the service IP of the DNS service is inserted into the pod’s /etc/resolv.conf.

Let us check the DNS service that this cluster is using. We start by looking at the Deployment, the pods, the Service, and the service endpoints.

$ kubectl get deploy -n kube-system -o wide | grep dns
coredns          2/2     2            2           67d   coredns          k8s.gcr.io/coredns:1.6.7     k8s-app=kube-dns

$ kubectl get pods -n kube-system -o wide | grep dns
coredns-66bff467f8-6n6lk          1/1     Running   6          24d     10.32.0.4      master1   <none>     <none>
coredns-66bff467f8-7l6cz          1/1     Running   6          24d     10.32.0.2      master1   <none>     <none>

$ kubectl get svc -n kube-system | grep dns
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
kube-dns                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP         66d

$ kubectl get endpoints -n kube-system | grep dns
kube-dns                             10.32.0.2:53,10.32.0.4:53,10.32.0.2:9153 + 3 more...                 67d

Here we see that the cluster is using CoreDNS, running 2 pods exposed on 3 ports each, making 6 endpoints in total, as can be seen from the output of kubectl get endpoints. The service name is kube-dns. Let us log in to some of the pods and check their DNS configuration; we will see that the service IP 10.96.0.10 has been inserted into each pod.

$ kubectl get pods -n vote
NAME                      READY   STATUS    RESTARTS   AGE
db-6789fcc76c-zdrxf       1/1     Running   0          78s
redis-554668f9bf-wfs6n    1/1     Running   0          78s
result-79bf6bc748-gzhsj   1/1     Running   0          78s
vote-f4snc                1/1     Running   0          77s
worker-dd46d7584-rwrnl    1/1     Running   1          76s

$ kubectl exec -it vote-f4snc cat /etc/resolv.conf -n vote
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
nameserver 10.96.0.10
search vote.svc.cluster.local svc.cluster.local cluster.local tx.rr.com
options ndots:5

$ kubectl exec -it result-79bf6bc748-gzhsj cat /etc/resolv.conf -n vote
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
nameserver 10.96.0.10
search vote.svc.cluster.local svc.cluster.local cluster.local tx.rr.com
options ndots:5

Above, we checked /etc/resolv.conf in each of the pods and confirmed that they are all using the service IP 10.96.0.10 as their DNS server.

Let us confirm that, inside a pod, the DNS service is used to resolve the service IPs of the services. Note that you may need to install dnsutils (on Ubuntu: sudo apt install dnsutils -y) to be able to use nslookup.

$ kubectl exec -it result-79bf6bc748-gzhsj /bin/bash -n vote

# nslookup db
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: db.vote.svc.cluster.local
Address: 10.111.242.86

$ kubectl get svc -n vote
NAME     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
db       ClusterIP   10.111.242.86    <none>        5432/TCP         15m
redis    ClusterIP   10.105.95.8      <none>        6379/TCP         15m
result   NodePort    10.109.186.188   <none>        5001:31001/TCP   15m
vote     NodePort    10.110.240.140   <none>        5000:31000/TCP   15m

Above, we confirmed that the DNS service is used to resolve the IP address of the db service, which is 10.111.242.86.
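The short name db resolved only because of the search domains listed in /etc/resolv.conf above; from a pod in a different namespace you would use the fully qualified name, which follows the pattern <service>.<namespace>.svc.cluster.local. It should return the same answer:

# nslookup db.vote.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: db.vote.svc.cluster.local
Address: 10.111.242.86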

The diagram below shows the architecture of how the vote-app microservices communicate using the different Service objects, and illustrates how service discovery is used throughout the topology of the microservice application.

      This is the traffic flow that occurs when a user accesses the vote app to vote and also when a user accesses the result service to obtain the vote result.

Step 1: A user accesses the vote service endpoint. The Service object randomly selects one of its endpoint vote pods to respond to the user.

Step 2: The vote pod uses service discovery to find the redis service and stores the user’s vote on a randomly selected redis pod.

Step 3: The worker pod uses service discovery to find the redis service and the db service. It continuously polls the redis service for new votes.

Step 4: If a new vote is found, the worker connects to the db service and stores the vote in the db pod.

Step 5: When a user accesses the result service, the result pod uses service discovery to find the db service and retrieves the vote results from the db pod. The results are displayed to the user.

Kubernetes can also use environment variables for service discovery, but DNS is the preferred method and certainly the one used in the vote-app microservice.
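For completeness, here is what the environment-variable mechanism looks like. For every Service that already exists when a pod starts, the kubelet injects variables of the form <SVCNAME>_SERVICE_HOST and <SVCNAME>_SERVICE_PORT into the pod. A quick way to inspect them (illustrative values, matching the service IPs listed earlier):

$ kubectl exec -n vote result-79bf6bc748-gzhsj -- env | grep SERVICE
DB_SERVICE_HOST=10.111.242.86
DB_SERVICE_PORT=5432
REDIS_SERVICE_HOST=10.105.95.8
REDIS_SERVICE_PORT=6379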

      How the Service Object works:

The Service object uses labels to identify the pods to route traffic to; these pods are called the endpoints. In the YAML file below, the Service object will identify any pod with the label app: vote, as indicated in the selector field:

      apiVersion: v1
      kind: Service
      metadata:
        labels:
          app: vote
        name: vote
        namespace: vote
      spec:
        type: NodePort
        ports:
        - name: "vote-service"
          port: 5000
          targetPort: 80
          nodePort: 31000
        selector:
          app: vote
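To read this spec: port 5000 is what the Service exposes on its cluster IP, targetPort 80 is where the vote container actually listens, and nodePort 31000 is opened on every node in the cluster. With this in place, you can reach the vote app from outside the cluster through any node (sketch; substitute a reachable node IP):

$ curl http://<node-ip>:31000/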

The Service will continue to track Pod objects as they scale out or scale in. For example, in the two listings below, listing 1 shows that each of the 5 microservices has one pod running, and the corresponding Services have therefore each identified their one matching pod as an endpoint. The Service object uses labels to do this.

$ kubectl get pods -n vote
NAME                      READY   STATUS    RESTARTS   AGE
db-6789fcc76c-scvxx       1/1     Running   0          14h
redis-554668f9bf-jbx92    1/1     Running   0          14h
result-79bf6bc748-tl5ll   1/1     Running   50         14h
vote-786ddfdc65-nlxx6     1/1     Running   0          14h
worker-dd46d7584-t2g5k    1/1     Running   2          14h

$ kubectl get endpoints -n vote
NAME     ENDPOINTS        AGE
db       10.39.0.3:5432   14h
redis    10.39.0.4:6379   14h
result   10.47.0.3:80     14h
vote     10.36.0.1:80     14h

As we scale the vote deployment, you will observe that the vote service uses the label app=vote to identify all the corresponding pods.

$ kubectl scale deployment vote --replicas=5 -n vote
deployment.apps/vote scaled

$ kubectl get pods -n vote --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
db-6789fcc76c-scvxx       1/1     Running   0          14h     app=db,pod-template-hash=6789fcc76c
redis-554668f9bf-jbx92    1/1     Running   0          14h     app=redis,pod-template-hash=554668f9bf
result-79bf6bc748-tl5ll   1/1     Running   50         14h     app=result,pod-template-hash=79bf6bc748
vote-786ddfdc65-4gzbb     1/1     Running   0          2m54s   app=vote,pod-template-hash=786ddfdc65
vote-786ddfdc65-nlxx6     1/1     Running   0          14h     app=vote,pod-template-hash=786ddfdc65
vote-786ddfdc65-pgzsx     1/1     Running   0          2m54s   app=vote,pod-template-hash=786ddfdc65
vote-786ddfdc65-pszxj     1/1     Running   0          2m54s   app=vote,pod-template-hash=786ddfdc65
vote-786ddfdc65-rvrqt     1/1     Running   0          2m54s   app=vote,pod-template-hash=786ddfdc65
worker-dd46d7584-t2g5k    1/1     Running   2          14h     app=worker,pod-template-hash=dd46d7584

$ kubectl get endpoints -n vote
NAME     ENDPOINTS                                            AGE
db       10.39.0.3:5432                                       14h
redis    10.39.0.4:6379                                       14h
result   10.47.0.3:80                                         14h
vote     10.36.0.1:80,10.39.0.1:80,10.44.0.1:80 + 2 more...   14h

The kubectl get endpoints output indicates that we now have 5 endpoints for the vote service after scaling the application. The service uses labels to identify the vote pods.
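The tracking works in both directions: scaling the Deployment back down shrinks the endpoint list again, without any change to the Service itself. A quick check you can run:

$ kubectl scale deployment vote --replicas=1 -n vote
$ kubectl get endpoints vote -n vote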

      Conclusion

In summary, while the Deployment object is used to create and maintain the microservices, the Service object exposes the microservices for both local and global access. The type of Service object (ClusterIP, NodePort, LoadBalancer) determines the scope of reachability of the microservice. Type LoadBalancer is used mainly for exposure to the internet; NodePort is best for testing purposes, especially when you don’t have access to type LoadBalancer; and ClusterIP is used mainly for communication within the cluster. In the next blog, I will explore how to use the Deployment and Service objects to implement basic CI/CD using the Blue-Green deployment strategy.

      Damian Igbe
      Damian holds a PhD in Computer Science and has decades of experience in Information Technology and Cloud services. Damian holds a couple of certifications including AWS Certified Solutions Architect- Associate, AWS Certified Developer-Associate and AWS Certified SysOp-Associate. He is the founder and CTO of Cloud Technology Experts. When not writing or teaching or consulting, Damian likes running and spending time with the family.
