Kubernetes – Quick Guide

Kubernetes – Overview

Kubernetes is an open source container management tool hosted by the Cloud Native Computing Foundation (CNCF). It is also known as an enhanced version of Borg, which was developed at Google to manage both long-running processes and batch jobs, previously handled by separate systems.

Kubernetes automates deployment, application scaling, and the operation of application containers across clusters. It is capable of creating container-centric infrastructure.

Features of Kubernetes

Following are some of the important features of Kubernetes.

Continuous development, integration, and deployment
Containerized infrastructure
Application-centric management
Auto-scalable infrastructure
Environment consistency across development, testing, and production
Loosely coupled infrastructure, where each component can act as a separate unit
Higher density of resource utilization
Predictable infrastructure

One of the key strengths of Kubernetes is that it can run applications on clusters of physical and virtual machine infrastructure, as well as on the cloud. It helps in moving from host-centric infrastructure to container-centric infrastructure.

Kubernetes – Architecture

In this chapter, we will discuss the basic architecture of Kubernetes.

Kubernetes – Cluster Architecture

As seen in the following diagram, Kubernetes follows a client-server architecture, wherein the master is installed on one machine and the nodes on separate Linux machines. The key components of master and node are defined in the following sections.

Kubernetes – Master Machine Components

Following are the components of the Kubernetes master machine.

etcd

It stores configuration information which can be used by each node in the cluster. It is a highly available key-value store that can be distributed among multiple nodes. Because it may contain sensitive information, it is accessible only by the Kubernetes API server.

API Server

The Kubernetes API server provides all operations on the cluster through the API. The API server implements an interface, which means different tools and libraries can readily communicate with it. kubeconfig is a package, along with the server-side tools, that can be used for communication. It exposes the Kubernetes API.

Controller Manager

This component is responsible for most of the controllers that regulate the state of the cluster and perform tasks. In general, it can be considered a daemon which runs in a non-terminating loop and is responsible for collecting and sending information to the API server. It works toward getting the shared state of the cluster and then makes changes to bring the current state of the server to the desired state. The key controllers are the replication controller, endpoints controller, namespace controller, and service account controller. The controller manager runs different kinds of controllers to handle nodes, endpoints, etc.

Scheduler

This is one of the key components of the Kubernetes master. It is a service in the master responsible for distributing the workload. It tracks the utilization of the working load on cluster nodes, and places the workload on nodes whose resources are available and can accept it. In other words, this is the mechanism responsible for allocating pods to available nodes.
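Once a cluster is running, the health of the master components described above can be checked from any workstation where kubectl is configured. The following is a minimal sketch; it assumes kubectl is already pointed at your cluster.

$ kubectl cluster-info              # addresses of the master and core cluster services
$ kubectl get componentstatuses     # health of the scheduler, controller-manager and etcd
$ kubectl get nodes                 # nodes registered with the master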
Kubernetes – Node Components

Following are the key components of the node server which are necessary to communicate with the Kubernetes master.

Docker

The first requirement of each node is Docker, which helps in running the encapsulated application containers in a relatively isolated but lightweight operating environment.

Kubelet Service

This is a small service on each node responsible for relaying information to and from the control plane service. It interacts with the etcd store to read configuration details and write values. It communicates with the master component to receive commands and work. The kubelet process then assumes responsibility for maintaining the state of work and of the node server. It manages pods on the node, their volumes and secrets, the creation of new containers, health checkups, etc.

Kubernetes Proxy Service

This is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding requests to the correct containers and is capable of performing primitive load balancing. It makes sure that the networking environment is predictable and accessible, and at the same time isolated as well. It manages network rules on the node, port forwarding, etc.

Kubernetes – Master and Node Structure

The following illustrations show the structure of the Kubernetes Master and Node.

Kubernetes – Setup

It is important to set up the Virtual Datacenter (vDC) before setting up Kubernetes. This can be considered as a set of machines which can communicate with each other via the network. For a hands-on approach, you can set up a vDC on PROFITBRICKS if you do not have a physical or cloud infrastructure set up.

Once the IaaS setup on any cloud is complete, you need to configure the Master and the Node.

Note − The setup is shown for Ubuntu machines. The same can be set up on other Linux machines as well.

Prerequisites

Installing Docker − Docker is required on all the instances of Kubernetes. Following are the steps to install Docker.

Step 1 − Log on to the machine with the root user account.

Step 2 − Update the package information. Make sure that the apt package is working.

Step 3 − Run the following commands.

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates

Step 4 − Add the new GPG key.

$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list

Step 5 − Update the apt package index.

$ sudo apt-get update

Once all the above tasks are complete, you can start with the actual installation of the Docker engine. However, before this you need to verify that the kernel version you are using is correct.

Install Docker Engine

Run the following commands to install the Docker engine.

Step 1 − Log on to the machine.

Step 2 − Update the package index.

$ sudo apt-get update
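After the Docker engine is installed, it is worth verifying it before moving on to the Kubernetes components. A quick, minimal sanity check −

$ sudo docker version               # prints client and daemon versions
$ sudo docker run hello-world       # pulls and runs a tiny test container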
Kubernetes – Monitoring

Monitoring is one of the key components for managing large clusters. For this, we have a number of tools.

Monitoring with Prometheus

Prometheus is a monitoring and alerting system. It was built at SoundCloud and was open sourced in 2012. It handles multi-dimensional data very well.

Prometheus has multiple components that participate in monitoring −

Prometheus − The core component that scrapes and stores data.
Prometheus node exporter − Gets the host-level metrics and exposes them to Prometheus.
Ranch-eye − An HAProxy that exposes cAdvisor stats to Prometheus.
Grafana − Visualization of data.
InfluxDB − Time series database specifically used to store data from Rancher.
Prom-ranch-exporter − A simple node.js application which helps in querying the Rancher server for the status of a stack of services.

Sematext Docker Agent

It is a modern Docker-aware metrics, events, and log collection agent. It runs as a tiny container on every Docker host and collects logs, metrics, and events for all cluster nodes and containers. It discovers all containers (one pod might contain multiple containers), including containers for Kubernetes core services, if the core services are deployed in Docker containers. After its deployment, all logs and metrics are immediately available out of the box.

Deploying Agents to Nodes

Kubernetes provides DaemonSets, which ensure that a copy of the pod runs on every node in the cluster.

Configuring the Sematext Docker Agent

It is configured via environment variables.

Get a free account at apps.sematext.com, if you don't have one already.
Create an SPM App of type "Docker" to obtain the SPM App Token. The SPM App will hold your Kubernetes performance metrics and events.
Create a Logsene App to obtain the Logsene App Token. The Logsene App will hold your Kubernetes logs.
Grab the latest sematext-agent-daemonset.yml (raw plain-text) template (also shown below) and store it somewhere on disk.
Replace the SPM_TOKEN and LOGSENE_TOKEN placeholders in the DaemonSet definition with your SPM and Logsene App tokens, as shown below.

Create the DaemonSet Object

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: sematext-agent
spec:
  template:
    metadata:
      labels:
        app: sematext-agent
    spec:
      dnsPolicy: "ClusterFirst"
      restartPolicy: "Always"
      containers:
      - name: sematext-agent
        image: sematext/sematext-agent-docker:latest
        imagePullPolicy: "Always"
        env:
        - name: SPM_TOKEN
          value: "REPLACE THIS WITH YOUR SPM TOKEN"
        - name: LOGSENE_TOKEN
          value: "REPLACE THIS WITH YOUR LOGSENE TOKEN"
        - name: KUBERNETES
          value: "1"
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
        - mountPath: /etc/localtime
          name: localtime
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: localtime
        hostPath:
          path: /etc/localtime

Running the Sematext Agent Docker with kubectl

$ kubectl create -f sematext-agent-daemonset.yml
daemonset "sematext-agent" created
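After the DaemonSet is created, you can confirm that one agent pod is running per node. A minimal sketch, using the label from the definition above −

$ kubectl get daemonsets                            # DESIRED and CURRENT should match the node count
$ kubectl get pods -l app=sematext-agent -o wide    # one agent pod per node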
Kubernetes Logs

Kubernetes containers' logs are not much different from Docker container logs. However, Kubernetes users need to view logs for the deployed pods. Hence, it is very useful to have Kubernetes-specific information available for log search, such as −

Kubernetes namespace
Kubernetes pod name
Kubernetes container name
Docker image name
Kubernetes UID

Using the ELK Stack and LogSpout

The ELK stack includes Elasticsearch, Logstash, and Kibana. To collect and forward the logs to the logging platform, we will use LogSpout (though there are other options such as FluentD).

The following code shows how to set up an ELK cluster on Kubernetes and create a service for Elasticsearch −

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elk
  labels:
    component: elasticsearch
spec:
  type: LoadBalancer
  selector:
    component: elasticsearch
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP

Creating the Replication Controller

apiVersion: v1
kind: ReplicationController
metadata:
  name: es
  namespace: elk
  labels:
    component: elasticsearch
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: "DISCOVERY_SERVICE"
          value: "elasticsearch"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        emptyDir: {}

Kibana URL

For Kibana, we provide the Elasticsearch URL as an environment variable.

- name: KIBANA_ES_URL
  value: "http://elasticsearch.elk.svc.cluster.local:9200"
- name: KUBERNETES_TRUST_CERT
  value: "true"

The Kibana UI will be reachable at container port 5601 and the corresponding host/NodePort combination. When you begin, there won't be any data in Kibana (which is expected, as you have not pushed any data yet).
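If the LoadBalancer address is not yet available, you can still reach Kibana from your workstation by forwarding the port. A minimal sketch; the actual Kibana pod name carries a generated suffix and must be looked up first −

$ kubectl get pods --namespace=elk                                   # find the Kibana pod name
$ kubectl port-forward --namespace=elk <kibana-pod-name> 5601:5601
# Kibana is now reachable at http://localhost:5601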
Kubernetes – Useful Resources

The following resources contain additional information on Kubernetes. Please use them to get more in-depth knowledge on this topic.

Useful Video Courses

Learn Kubernetes with AWS Elastic Kubernetes Services (EKS) − Pranjal Srivastava, 20 lectures, 1 hour
Kubernetes Fundamentals − Stone River ELearning, 34 lectures, 1.5 hours
Introduction to Kubernetes using Docker − Stone River ELearning, 37 lectures, 4.5 hours
Kubernetes Cluster setup in GCP − NASERTECHHUB, 5 lectures, 30 minutes
Kubernetes for the Absolute Beginners – Hands-on Course − Rahul Miglani, 61 lectures, 2 hours
Docker and Kubernetes for React JS developers − Pranjal Srivastava, 23 lectures, 1.5 hours
Kubernetes – Images

Kubernetes (Docker) images are the key building blocks of containerized infrastructure. As of now, Kubernetes only supports Docker images. Each container in a pod has its Docker image running inside it.

When we are configuring a pod, the image property in the configuration file has the same syntax as the Docker command does. The configuration file has a field to define the image name, which we are planning to pull from the registry.

Following is the common configuration structure which will pull an image from the Docker registry and deploy it into a Kubernetes container.

apiVersion: v1
kind: Pod
metadata:
  name: testing-for-image-pull -----------> 1
spec:
  containers:
  - name: neo4j-server ------------------------> 2
    image: <Name of the Docker image> ----------> 3
    imagePullPolicy: Always -------------> 4
    command: ["echo", "SUCCESS"] -------------------> 5

In the above code, we have defined −

name: testing-for-image-pull − This name is given to identify the pod that would get created after pulling the image from the Docker registry.

name: neo4j-server − This is the name given to the container that we are trying to create.

image: <Name of the Docker image> − This is the name of the image which we are trying to pull from Docker or an internal registry of images. We need to define the complete registry path along with the image name that we are trying to pull.

imagePullPolicy: Always − This image pull policy defines that whenever we run this file to create the container, it will pull the image again.

command: ["echo", "SUCCESS"] − With this, when we create the container and if everything goes fine, it will display a message when we access the container.

In order to pull the image and create a container, we will run the following command (assuming the above definition is saved as testing-for-image-pull.yaml).

$ kubectl create -f testing-for-image-pull.yaml

Once we fetch the log, we will get the output as successful.

$ kubectl logs testing-for-image-pull

The above command will produce an output of success, or we will get an output as failure.

Note − It is recommended that you try all the commands yourself.
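If the image lives in a private registry rather than Docker Hub, the pod also needs credentials to pull it. The following is a minimal sketch; the secret name myregistrykey and all registry details are placeholders −

$ kubectl create secret docker-registry myregistrykey --docker-server=<registry-url> --docker-username=<user> --docker-password=<password> --docker-email=<email>

apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  containers:
  - name: app
    image: <registry-url>/<image-name>:<tag>    # full registry path, as discussed above
  imagePullSecrets:
  - name: myregistrykey                         # must match the secret created above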
Discuss Kubernetes

Kubernetes is a container management technology developed in Google's lab to manage containerized applications in different kinds of environments, such as physical, virtual, and cloud infrastructure. It is an open source system which helps in creating and managing containerized applications. This tutorial provides an overview of the different features and functionalities of Kubernetes and teaches how to manage containerized infrastructure and application deployment.
Kubernetes – Creating an App

In order to create an application for Kubernetes deployment, we need to first create the application on Docker. This can be done in two ways −

By downloading
From a Docker file

By Downloading

An existing image can be downloaded from Docker Hub and stored in the local Docker registry. To do that, run the Docker pull command.

$ docker pull --help
Usage: docker pull [OPTIONS] NAME[:TAG|@DIGEST]
Pull an image or a repository from the registry
-a, --all-tags=false     Download all tagged images in the repository
--help=false             Print usage

The output lists the images which are stored in our local Docker registry.

If we want to build a container from an image which consists of an application to test, we can do it using the Docker run command.

$ docker run -i -t ubuntu /bin/bash

From a Docker File

In order to create an application from a Docker file, we need to first create the Docker file. Following is an example of a Jenkins Docker file.

FROM ubuntu:14.04
MAINTAINER [email protected]
ENV REFRESHED_AT 2017-01-15
RUN apt-get update -qq && apt-get install -qqy curl
RUN curl https://get.docker.io/gpg | apt-key add -
RUN echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq && apt-get install -qqy iptables ca-certificates lxc openjdk-6-jdk git-core lxc-docker
ENV JENKINS_HOME /opt/jenkins/data
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME/plugins
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war-stable/latest/jenkins.war
RUN for plugin in chucknorris greenballs scm-api git-client git ws-cleanup ; do \
      curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi \
      -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi ; done
ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh
VOLUME /var/lib/docker
EXPOSE 8080
ENTRYPOINT [ "/usr/local/bin/dockerjenkins.sh" ]

Once the above file is created, save it with the name Dockerfile and cd to the file path. Then, run the following command.

$ sudo docker build -t jamtur01/jenkins .

Once the image is built, we can test if the image is working fine and can be converted to a container.

$ docker run -i -t jamtur01/jenkins /bin/bash
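Before Kubernetes can pull this image, it must be pushed to a registry that the cluster nodes can reach. A minimal sketch, assuming a Docker Hub account named <your-account> (a placeholder) −

$ docker tag jamtur01/jenkins <your-account>/jenkins:1.0     # retag for your registry namespace
$ docker login                                               # authenticate to Docker Hub
$ docker push <your-account>/jenkins:1.0                     # make the image pullable by the cluster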
Kubernetes – Service

A service can be defined as a logical set of pods. It can be defined as an abstraction on top of the pod which provides a single IP address and DNS name by which the pods can be accessed. With a Service, it is very easy to manage the load balancing configuration. It helps pods to scale very easily.

A service is a REST object in Kubernetes whose definition can be posted to the Kubernetes apiserver on the Kubernetes master to create a new instance.

Service without Selector

apiVersion: v1
kind: Service
metadata:
  name: tutorialspoint-service
spec:
  ports:
  - port: 8080
    targetPort: 31999

The above configuration will create a service with the name tutorialspoint-service.

Service Config File with Selector

apiVersion: v1
kind: Service
metadata:
  name: tutorialspoint-service
spec:
  selector:
    application: "My Application" -------------------> (Selector)
  ports:
  - port: 8080
    targetPort: 31999

In the first example, we did not have a selector; so in order to transfer traffic, we need to create an endpoints object manually.

apiVersion: v1
kind: Endpoints
metadata:
  name: tutorialspoint-service
subsets:
- addresses:
  - ip: "192.168.168.40"
  ports:
  - port: 8080

In the above code, we have created an endpoints object which will route the traffic to the endpoint defined as "192.168.168.40:8080".

Multi-Port Service Creation

apiVersion: v1
kind: Service
metadata:
  name: tutorialspoint-service
spec:
  selector:
    application: "My Application" -------------------> (Selector)
  clusterIP: 10.3.0.12
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 31999
  - name: https
    protocol: TCP
    port: 443
    targetPort: 31998

Types of Services

ClusterIP − This helps in restricting the service within the cluster. It exposes the service within the defined Kubernetes cluster.

spec:
  type: ClusterIP
  clusterIP: 10.20.30.40
  ports:
  - port: 8080
    name: clusterip-service

NodePort − It will expose the service on a static port on the deployed node. A ClusterIP service, to which the NodePort service will route, is automatically created. The service can be accessed from outside the cluster using NodeIP:nodePort.

spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31999
    name: nodeport-service

Load Balancer − It uses the cloud provider's load balancer. NodePort and ClusterIP services are created automatically, to which the external load balancer will route.

Following is a full service yaml file with the service type as NodePort. Try to create one yourself.

apiVersion: v1
kind: Service
metadata:
  name: appname
  labels:
    k8s-app: appname
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31999
    name: omninginx
  selector:
    k8s-app: appname
    component: nginx
    env: env_name
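After posting a service definition to the apiserver, you can verify that it was created and that it picked up endpoints. A minimal sketch, assuming the definition above is saved as service.yaml −

$ kubectl create -f service.yaml
$ kubectl get services                                # lists the service with its cluster IP and ports
$ kubectl describe service tutorialspoint-service     # shows selector, endpoints and node port
$ kubectl get endpoints tutorialspoint-service        # pods (or manual addresses) behind the service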
Kubernetes – Kubectl Commands

kubectl controls the Kubernetes cluster. It is one of the key components of Kubernetes, and runs on a workstation or any machine once the setup is done. It has the capability to manage the nodes in the cluster. kubectl commands are used to interact with and manage Kubernetes objects and the cluster. In this chapter, we will discuss a few commands used in Kubernetes via kubectl.

kubectl annotate − Updates the annotation on a resource.

$ kubectl annotate [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]

For example,

$ kubectl annotate pods tomcat description='my frontend'

kubectl api-versions − Prints the supported versions of the API on the cluster.

$ kubectl api-versions

kubectl apply − Configures a resource by file or stdin.

$ kubectl apply -f <filename>

kubectl attach − Attaches to a running container.

$ kubectl attach <pod> -c <container>
$ kubectl attach 123456-7890 -c tomcat-container

kubectl autoscale − Auto scales pods which are defined, such as a Deployment, replica set, or Replication Controller.

$ kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]
$ kubectl autoscale deployment foo --min=2 --max=10

kubectl cluster-info − Displays the cluster info.

$ kubectl cluster-info

kubectl cluster-info dump − Dumps relevant information regarding the cluster for debugging and diagnosis.

$ kubectl cluster-info dump
$ kubectl cluster-info dump --output-directory=/path/to/cluster-state

kubectl config − Modifies the kubeconfig file.

$ kubectl config <SUBCOMMAND>
$ kubectl config --kubeconfig <string of file name>

kubectl config current-context − Displays the current context.

$ kubectl config current-context
# displays the current context

kubectl config delete-cluster − Deletes the specified cluster from kubeconfig.

$ kubectl config delete-cluster <Cluster Name>

kubectl config delete-context − Deletes a specified context from kubeconfig.

$ kubectl config delete-context <Context Name>

kubectl config get-clusters − Displays the clusters defined in the kubeconfig.

$ kubectl config get-clusters

kubectl config get-contexts − Describes one or many contexts.

$ kubectl config get-contexts <Context Name>

kubectl config set-cluster − Sets a cluster entry in kubeconfig.

$ kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--insecure-skip-tls-verify=true]

kubectl config set-context − Sets a context entry in kubeconfig.

$ kubectl config set-context NAME [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]
$ kubectl config set-context prod --user=vipin-mishra

kubectl config set-credentials − Sets a user entry in kubeconfig.

$ kubectl config set-credentials cluster-admin --username=vipin --password=uXFGweU9l35qcif

kubectl config set − Sets an individual value in the kubeconfig file.

$ kubectl config set PROPERTY_NAME PROPERTY_VALUE

kubectl config unset − Unsets an individual value in the kubeconfig file.

$ kubectl config unset PROPERTY_NAME

kubectl config use-context − Sets the current context in the kubeconfig file.

$ kubectl config use-context <Context Name>

kubectl config view − Displays the merged kubeconfig settings.

$ kubectl config view
$ kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'

kubectl cp − Copies files and directories to and from containers.
$ kubectl cp <files from source> <files to destination>
$ kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>

kubectl create − Creates a resource by file name or stdin. JSON or YAML formats are accepted.

$ kubectl create -f <File Name>
$ cat <file name> | kubectl create -f -

In the same way, we can create multiple things as listed using the create command along with kubectl.

deployment
namespace
quota
secret docker-registry
secret generic
secret tls
serviceaccount
service clusterip
service loadbalancer
service nodeport

kubectl delete − Deletes resources by file name, stdin, resource, and names.

$ kubectl delete ([-f FILENAME] | TYPE [(NAME | -l label | --all)])

kubectl describe − Describes any particular resource in Kubernetes. Shows details of a resource or a group of resources.

$ kubectl describe <type> <type name>
$ kubectl describe pod tomcat

kubectl drain − This is used to drain a node for maintenance purposes. It prepares the node for maintenance by marking it as unavailable, so that it will not be assigned any newly created containers.

$ kubectl drain tomcat --force

kubectl edit − Edits the resources on the server. This allows one to directly edit a resource received via the command line tool.

$ kubectl edit <Resource/Name | File Name>

Ex.

$ kubectl edit rc/tomcat

kubectl exec − Executes a command in a container.

$ kubectl exec POD [-c CONTAINER] -- COMMAND [args...]
$ kubectl exec tomcat-123-5-456 date

kubectl expose − Exposes Kubernetes objects such as a pod, replication controller, or service as a new Kubernetes service. It can expose via a running container or from a yaml file.

$ kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]
$ kubectl expose rc tomcat --port=80 --target-port=30000
$ kubectl expose -f tomcat.yaml --port=80 --target-port=

kubectl get − Fetches data about Kubernetes resources on the cluster.

$ kubectl get [(-o|--output=)json|yaml|wide|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...] (TYPE [NAME | -l label] | TYPE/NAME ...) [flags]

For example,

$ kubectl get pod <pod name>
$ kubectl get service <Service name>

kubectl logs − Gets the logs of a container in a pod. Printing the logs requires defining the container name in the pod; if the pod has only one container, there is no need to define its name.

$ kubectl logs [-f] [-p] POD [-c CONTAINER]

Example

$ kubectl logs tomcat
$ kubectl logs -p -c tomcat.8

kubectl port-forward − Forwards one or more local ports to a pod.

$ kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
$ kubectl port-forward tomcat 3000 4000
$ kubectl port-forward tomcat 3000:5000

kubectl replace − Capable of replacing a resource by file name or stdin.

$ kubectl replace -f <File Name>
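As a worked example of the kubectl config subcommands described above, the following sequence defines a cluster entry, a user, and a context tying the two together, then switches to it. All names and the server address are illustrative placeholders −

$ kubectl config set-cluster demo-cluster --server=https://1.2.3.4:6443
$ kubectl config set-credentials demo-user --username=admin --password=secret
$ kubectl config set-context demo-context --cluster=demo-cluster --user=demo-user
$ kubectl config use-context demo-context
$ kubectl config current-context      # prints demo-context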
Kubernetes – Jobs

The main function of a job is to create one or more pods and track their success. Jobs ensure that the specified number of pods complete successfully. When the specified number of successful pod runs is reached, the job is considered complete.

Creating a Job

Use the following yaml to create a job −

apiVersion: batch/v1
kind: Job ------------------------> 1
metadata:
  name: py
spec:
  template:
    metadata:
      name: py -------> 2
    spec:
      containers:
      - name: py ------------------------> 3
        image: python ----------> 4
        command: ["python", "SUCCESS"]
      restartPolicy: Never --------> 5

In the above code, we have defined −

kind: Job → We have defined the kind as Job, which tells kubectl that the yaml file being used is to create a job-type pod.

name: py → This is the name of the template that we are using, and the spec defines the template.

name: py → We have given the name py under the container spec, which helps to identify the container that is going to be created out of it.

image: python → The image which we are going to pull to create the container which will run inside the pod.

restartPolicy: Never → This condition of image restart is given as never, which means that if the container is killed or fails, it will not restart itself.

We will create the job using the following command, with the yaml saved under the name py.yaml.

$ kubectl create -f py.yaml

The above command will create a job. If you want to check the status of the job, use the following command.

$ kubectl describe jobs/py

Scheduled Job

Scheduled jobs in Kubernetes use Cronetes, which takes Kubernetes jobs and launches them in the Kubernetes cluster. Scheduling a job will run a pod at a specified point of time. A periodic job is created for it which invokes itself automatically.

Note − The scheduled job feature is supported by version 1.4, and the batch/v2alpha1 API is turned on by passing --runtime-config=batch/v2alpha1 while bringing up the API server.

We will use the same yaml which we used to create the job and make it a scheduled job.

apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: py
spec:
  schedule: "*/30 * * * *" -------------------> 1
  jobTemplate:
    spec:
      template:
        metadata:
          name: py
        spec:
          containers:
          - name: py
            image: python
            args:
            - /bin/sh -------> 2
            - -c
            - ps -eaf ------------> 3
          restartPolicy: OnFailure

In the above code, we have defined −

schedule: */30 * * * * → To schedule the job to run every 30 minutes.

/bin/sh → This will enter the container with /bin/sh.

ps -eaf → Will run the ps -eaf command on the machine and list all the running processes inside the container.

This scheduled job concept is useful when we are trying to build and run a set of tasks at a specified point of time and then complete the process.
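To confirm that the job ran to completion, inspect the job and read the output of the pod it created. A minimal sketch; the generated pod name carries a random suffix, shown here as a placeholder −

$ kubectl get jobs              # the SUCCESSFUL column shows 1 when the job completes
$ kubectl describe jobs/py      # shows pod statuses and events
$ kubectl logs py-<suffix>      # prints whatever the container wrote, e.g. the process list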
Kubernetes – Volumes

In Kubernetes, a volume can be thought of as a directory which is accessible to the containers in a pod. We have different types of volumes in Kubernetes, and the type defines how the volume is created and its content.

The concept of volumes was present with Docker; however, the only issue was that a volume was very much limited to a particular pod. As soon as the life of a pod ended, the volume was also lost.

On the other hand, the volumes that are created through Kubernetes are not limited to any container. They support any or all of the containers deployed inside a pod. A key advantage of Kubernetes volumes is that a pod can use multiple kinds of storage at the same time.

Types of Kubernetes Volume

Here is a list of some popular Kubernetes volumes (a sample pod using the first type is shown after this list) −

emptyDir − It is a type of volume that is created when a Pod is first assigned to a Node. It remains active as long as the Pod is running on that node. The volume is initially empty, and the containers in the pod can read and write the files in the emptyDir volume. Once the Pod is removed from the node, the data in the emptyDir is erased.

hostPath − This type of volume mounts a file or directory from the host node's filesystem into your pod.

gcePersistentDisk − This type of volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod. The data in a gcePersistentDisk remains intact when the Pod is removed from the node.

awsElasticBlockStore − This type of volume mounts an Amazon Web Services (AWS) Elastic Block Store into your Pod. Just like gcePersistentDisk, the data in an awsElasticBlockStore remains intact when the Pod is removed from the node.

nfs − An nfs volume allows an existing NFS (Network File System) share to be mounted into your pod. The data in an nfs volume is not erased when the Pod is removed from the node. The volume is only unmounted.

iscsi − An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your pod.

flocker − It is an open-source clustered container data volume manager. It is used for managing data volumes. A flocker volume allows a Flocker dataset to be mounted into a pod. If the dataset does not exist in Flocker, then you first need to create it by using the Flocker API.

glusterfs − Glusterfs is an open-source networked filesystem. A glusterfs volume allows a glusterfs volume to be mounted into your pod.

rbd − RBD stands for Rados Block Device. An rbd volume allows a Rados Block Device volume to be mounted into your pod. Data remains preserved after the Pod is removed from the node.

cephfs − A cephfs volume allows an existing CephFS volume to be mounted into your pod. Data remains intact after the Pod is removed from the node.

gitRepo − A gitRepo volume mounts an empty directory and clones a git repository into it for your pod to use.

secret − A secret volume is used to pass sensitive information, such as passwords, to pods.

persistentVolumeClaim − A persistentVolumeClaim volume is used to mount a PersistentVolume into a pod. PersistentVolumes are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.

downwardAPI − A downwardAPI volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files.

azureDiskVolume − An AzureDiskVolume is used to mount a Microsoft Azure Data Disk into a Pod.
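As an illustration of emptyDir, the simplest volume type in the list above, the following is a minimal sketch of a pod whose container writes into such a volume. The pod and volume names are placeholders −

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: ubuntu
    command: ["/bin/sh", "-c", "echo hello > /cache/hello.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /cache          # where the volume appears inside the container
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}                 # created empty when the pod lands on a node; erased when it leaves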
Persistent Volume and Persistent Volume Claim

Persistent Volume (PV) − It is a piece of network storage that has been provisioned by the administrator. It is a resource in the cluster which is independent of any individual pod that uses the PV.

Persistent Volume Claim (PVC) − The storage requested by Kubernetes for its pods is known as a PVC. The user does not need to know the underlying provisioning. The claims must be created in the same namespace where the pod is created.

Creating a Persistent Volume

kind: PersistentVolume ---------> 1
apiVersion: v1
metadata:
  name: pv0001 ------------------> 2
  labels:
    type: local
spec:
  capacity: -----------------------> 3
    storage: 10Gi ----------------------> 4
  accessModes:
  - ReadWriteOnce -------------------> 5
  hostPath:
    path: "/tmp/data01" --------------------------> 6

In the above code, we have defined −

kind: PersistentVolume → We have defined the kind as PersistentVolume, which tells Kubernetes that the yaml file being used is to create a Persistent Volume.

name: pv0001 → Name of the PersistentVolume that we are creating.

capacity: → This spec defines the capacity of the PV that we are trying to create.

storage: 10Gi → This tells the underlying infrastructure that we are trying to claim 10Gi of space on the defined path.

ReadWriteOnce → This tells the access rights of the volume that we are creating.

path: "/tmp/data01" → This definition tells the machine that we are trying to create the volume under this path on the underlying infrastructure.

Creating the PV

$ kubectl create -f local-01.yaml
persistentvolume "pv0001" created

Checking the PV

$ kubectl get pv
NAME     CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
pv0001   10Gi       RWO           Available                    14s

Describing the PV

$ kubectl describe pv pv0001

Creating a Persistent Volume Claim

kind: PersistentVolumeClaim --------------> 1
apiVersion: v1
metadata:
  name: myclaim-1 --------------------> 2
spec:
  accessModes:
  - ReadWriteOnce ------------------------> 3
  resources:
    requests:
      storage: 3Gi ---------------------> 4

In the above code, we have defined −

kind: PersistentVolumeClaim → It instructs the underlying infrastructure that we are trying to claim a specified amount of space.

name: myclaim-1 → Name of the claim that we are trying to create.

ReadWriteOnce → This specifies the mode of the claim that we are trying to create.

storage: 3Gi → This will tell Kubernetes about the amount of space we are trying to claim.

Creating the PVC

$ kubectl create -f
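Once the claim is bound to a persistent volume, a pod consumes the storage simply by referencing the claim by name. A minimal sketch, assuming the claim myclaim-1 defined above; the pod name and mount path are placeholders −

kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"    # the claimed storage appears here inside the container
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-1                  # must match the PVC name, in the same namespace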