      How to Create a Kubernetes Service

      • Posted by Damian Igbe
      • Categories Kubernetes
      • Date September 27, 2017

      A Kubernetes Service is a very important abstraction for realising cloud-native microservice applications. It is used as a frontend to a set of pods. Pods are ephemeral and can die at any time, and that is OK: recall from here that we used a Deployment to monitor our pods and create new ones whenever pods die or disappear. To cope with the ephemeral nature of pods, we need something that is not ephemeral, something that is always there (permanent) and can be used until it is deleted. This is similar to the concept of a load balancer that acts as the frontend to a set of backend services by exposing a virtual IP (see the types of Kubernetes services below). A Kubernetes Service is used by humans and by other services to connect to the pods behind it.

      Creating a Kubernetes Service

      A Kubernetes Service uses Kubernetes labels to identify the pods to connect to. It is always good practice to create a service before creating the pods it selects. The order matters especially when environment variables are used for name resolution of services; with the other method of name resolution, the DNS service, the order doesn’t really matter. Let’s use the manifest files below to create a service and a deployment. For the nginx pods we’ll use the deployment manifest that was created here, posted below:

      apiVersion: apps/v1beta1
      kind: Deployment
      metadata:
        name: deployment-example
      spec:
        replicas: 5
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.9.2
              ports:
              - containerPort: 80
              resources:
                requests:
                  cpu: 100m
                  memory: 100Mi

      In this deployment manifest, 5 pods will be created, each running an nginx container and identified by the label app: nginx. Note that you have the option to combine the two manifest files into one, delimited by ---, but here we are creating them separately.
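      For reference, a combined file would look roughly like this (a sketch; the file name nginx-combined.yaml is just an example):

      # nginx-combined.yaml: the Service and the Deployment in one file, separated by ---
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
        labels:
          app: nginx
      spec:
        ports:
          - name: web
            port: 80
        selector:
          app: nginx
      ---
      apiVersion: apps/v1beta1
      kind: Deployment
      metadata:
        name: deployment-example
      spec:
        replicas: 5
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.9.2
              ports:
              - containerPort: 80

      A single kubectl create -f nginx-combined.yaml would then create both objects at once.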

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
        labels:
          app: nginx
      spec:
        ports:
          - name: web
            port: 80
        selector:
          app: nginx
      

      Now that we have the two manifest files, let’s create the service, followed by the deployment.

      $ kubectl create -f service.yaml

      $ kubectl create -f deployment.yaml

      $ kubectl get svc
      NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
      kubernetes   10.0.0.1     <none>        443/TCP   5d
      nginx        10.0.0.89    <none>        80/TCP    16m

      $ kubectl describe svc nginx
      Name:              nginx
      Namespace:         default
      Labels:            app=nginx
      Annotations:       <none>
      Selector:          app=nginx
      Type:              ClusterIP
      IP:                10.0.0.89
      Port:              web  80/TCP
      Endpoints:         172.17.0.10:80,172.17.0.3:80,172.17.0.5:80 + 2 more...
      Session Affinity:  None
      Events:            <none>

      Here we see that the service has been created and mapped to the 5 pods that were created by the deployment. To check that the IP addresses under Endpoints match the IP addresses of the pods selected by the service, let’s run this command:

      kubectl get pods -l app=nginx -o yaml | grep podIP
          podIP: 172.17.0.5
          podIP: 172.17.0.7
          podIP: 172.17.0.10
          podIP: 172.17.0.3
          podIP: 172.17.0.9

      In this case, the service selects all the Pods matching the app=nginx label, and their IP addresses match the IP addresses listed in the Endpoints section of the service.
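      You can also inspect the Endpoints object that Kubernetes maintains for the service; the output below is illustrative of this cluster:

      $ kubectl get endpoints nginx
      NAME      ENDPOINTS                                                  AGE
      nginx     172.17.0.10:80,172.17.0.3:80,172.17.0.5:80 + 2 more...    16m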

      Discovering services

      Environment variables and DNS are the two primary ways of discovering a Service.

      Environment variables

      When a Pod is run on a Node, the kubelet adds a set of environment variables to the container for each active Service. For example, the Service “nginx”, which exposes TCP port 80 and has been allocated the cluster IP address 10.0.0.89, produces the following environment variables:

      $ kubectl exec deployment-example-1421084195-0smn4 env
      NGINX_PORT=tcp://10.0.0.89:80
      NGINX_PORT_80_TCP_PROTO=tcp
      NGINX_SERVICE_HOST=10.0.0.89
      NGINX_PORT_80_TCP=tcp://10.0.0.89:80
      NGINX_PORT_80_TCP_PORT=80
      NGINX_PORT_80_TCP_ADDR=10.0.0.89
      NGINX_SERVICE_PORT=80
      NGINX_SERVICE_PORT_WEB=80

      As stated above, any Service that a Pod wants to access via environment variables must be created before the Pod itself; otherwise the environment variables will not be populated. DNS does not have this restriction.
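      To see these variables in use, here is a minimal sketch of reaching the service from inside one of the pods via the injected variables. It assumes a shell is available in the container image, and the curl step only works if curl happens to be installed (it may not be in a stock nginx image):

      $ kubectl exec -it deployment-example-1421084195-0smn4 -- sh
      # echo "$NGINX_SERVICE_HOST:$NGINX_SERVICE_PORT"            # the service's cluster IP and port
      10.0.0.89:80
      # curl http://$NGINX_SERVICE_HOST:$NGINX_SERVICE_PORT/      # only if curl is present in the image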

      DNS

      DNS is an optional cluster add-on, though it is strongly recommended. When installed, the DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster, then all Pods should be able to resolve Service names automatically.

      For example, for the “nginx” service created above in the Kubernetes “default” namespace, a DNS record for “nginx.default” is created. Pods which exist in the “default” Namespace should be able to find it simply by doing a name lookup for “nginx”. Pods which exist in other Namespaces must qualify the name as “nginx.default”. The result of these name lookups is the cluster IP.
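      You can verify this from inside the cluster with a one-off pod. The sketch below assumes an image that ships nslookup, such as busybox, and the addresses shown are only illustrative of this cluster:

      $ kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup nginx.default
      Server:    10.0.0.10
      Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

      Name:      nginx.default
      Address 1: 10.0.0.89 nginx.default.svc.cluster.local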

       Types of Kubernetes Services

      There are currently four types of Kubernetes services, as described in the Kubernetes documentation.

      • ClusterIP: Exposes the service on a cluster-internal IP, which makes the service reachable only from within the cluster. This is the default ServiceType.
      • NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). The Kubernetes master will allocate a port from a flag-configured range (default: 30000-32767). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>; a minimal example is sketched below.
      • LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
      • ExternalName: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. The Kubernetes DNS server is the only way to access services of type ExternalName.

      Kubernetes ServiceTypes let you specify which of these kinds of service you want by setting the type field of the Service spec; if you leave it out, the default is ClusterIP.
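      For illustration, here is a minimal NodePort manifest for the same nginx pods (a sketch; the service name nginx-nodeport and the port 30080 are arbitrary choices, with 30080 falling inside the default NodePort range):

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-nodeport
      spec:
        type: NodePort
        selector:
          app: nginx
        ports:
        - name: web
          port: 80          # the service's cluster-internal port
          targetPort: 80    # the containerPort on the pods
          nodePort: 30080   # must fall within the configured range (default 30000-32767)

      Once created, the nginx welcome page should be reachable from outside the cluster at http://<NodeIP>:30080.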

      Type LoadBalancer

      If the cloud provider supports external load balancers, a load balancer can be provisioned by setting the type field to “LoadBalancer”. Here is an example:

      kind: Service
      apiVersion: v1
      metadata:
        name: nginx-service
      spec:
        selector:
          app: nginx
        ports:
        - protocol: TCP
          port: 80
          targetPort: 8000
          nodePort: 32500
        clusterIP: 10.0.0.200
        loadBalancerIP: 78.11.13.17
        type: LoadBalancer
      

      Note the following:

      • Traffic from the external load balancer will be directed at the backend Pods, though exactly how that works depends on the cloud provider.
      • Some cloud providers allow the loadBalancerIP to be specified. If that is the case, the load balancer is created with the user-specified loadBalancerIP. When the loadBalancerIP field is not specified, an ephemeral IP is assigned to the load balancer. If the loadBalancerIP is specified but the cloud provider does not support the feature, the field is ignored. For more details, refer to the Kubernetes documentation.
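      Once the cloud provider has provisioned the load balancer, its address shows up in the EXTERNAL-IP column of kubectl get svc. The output below is illustrative; the external IP depends entirely on your provider:

      $ kubectl get svc nginx-service
      NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
      nginx-service   10.0.0.200   <pending>     80:32500/TCP   15s
      $ kubectl get svc nginx-service
      NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
      nginx-service   10.0.0.200   78.11.13.17   80:32500/TCP   1m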

      Conclusion

      In this tutorial, you learnt how to create a Kubernetes Service object. A service presents a stable, permanent frontend to ephemeral pods. There are several types of services that can be created, with ClusterIP being the default.

      Damian Igbe
      Damian holds a PhD in Computer Science and has decades of experience in information technology and cloud services. He holds several certifications, including AWS Certified Solutions Architect - Associate, AWS Certified Developer - Associate, and AWS Certified SysOps Administrator - Associate. He is the founder and CTO of Cloud Technology Experts. When not writing, teaching, or consulting, Damian likes running and spending time with his family.


        1 Comment

      1. SANJAY GARG
        September 17, 2018

        Hi Damian,
        Do you have a link to a demo of a LoadBalancer service that I can look at closely? Also, where did the IP in “loadBalancerIP: 78.11.13.17” come from in the LoadBalancer definition above?
