Kubernetes – Deployments

Deployments are an upgraded, higher-level version of the replication controller. They manage the rollout of replica sets, which are themselves an upgraded version of the replication controller. Deployments can update the underlying replica set and can also roll back to a previous version. They provide many additional features, such as matchLabels and selectors. A controller in the Kubernetes master, called the deployment controller, makes this happen and can change a deployment midway through a rollout.

Changing the Deployment

Updating − The user can update an ongoing deployment before it is completed. When this happens, the existing rollout is settled and a new rollout is created.

Deleting − The user can pause/cancel a deployment by deleting it before it is completed. Recreating the same deployment will resume it.

Rollback − We can roll back a completed deployment or a deployment that is in progress. The user can create or update the deployment by using DeploymentSpec.PodTemplateSpec = oldRC.PodTemplateSpec.

Deployment Strategies

Deployment strategies define how the new RC should replace the existing RC.

Recreate − This strategy kills all the existing RCs and then brings up the new ones. It results in a quick deployment, but it causes downtime between the moment the old pods go down and the new pods come up.

Rolling Update − This strategy gradually brings down the old RC and brings up the new one. It results in a slower deployment, but there is no downtime; at all times, a few old pods and a few new pods are available during the process.

The configuration file of a Deployment looks like this.

apiVersion: extensions/v1beta1 ----------> 1
kind: Deployment ----------> 2
metadata:
   name: Tomcat-ReplicaSet
spec:
   replicas: 3
   template:
      metadata:
         labels:
            app: Tomcat-ReplicaSet
            tier: Backend
      spec:
         containers:
         - name: Tomcat
           image: tomcat:8.0
           ports:
           - containerPort: 7474

In the above code, the only thing different from the replica set definition is that we have defined the kind as Deployment.

Create Deployment

$ kubectl create -f Deployment.yaml --record
deployment "Deployment" created

Fetch the Deployment

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
Deployment   3         3         3            3           20s

Check the Status of the Deployment

$ kubectl rollout status deployment/Deployment

Updating the Deployment

$ kubectl set image deployment/Deployment tomcat=tomcat:6.0

Rolling Back to a Previous Deployment

$ kubectl rollout undo deployment/Deployment --to-revision=2
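The update, pause/cancel, and rollback operations described above can also be exercised directly with the standard kubectl rollout subcommands. The following is a minimal sketch, assuming a deployment named Deployment as in the examples above.

$ kubectl rollout history deployment/Deployment   # list recorded revisions (readable because of --record)
$ kubectl rollout pause deployment/Deployment     # stop an ongoing rollout midway
$ kubectl rollout resume deployment/Deployment    # continue the paused rollout
$ kubectl rollout undo deployment/Deployment      # roll back to the immediately previous revision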
Kubernetes – Replication Controller

The Replication Controller is one of the key features of Kubernetes and is responsible for managing the pod lifecycle. It makes sure that the specified number of pod replicas is running at any point in time. It is used when one wants to make sure that a specified number of pods, or at least one pod, is always running. It can bring the number of pods up or down to the specified count. It is a best practice to use a replication controller to manage the pod lifecycle rather than creating pods again and again.

apiVersion: v1
kind: ReplicationController ----------> 1
metadata:
   name: Tomcat-ReplicationController ----------> 2
spec:
   replicas: 3 ----------> 3
   template:
      metadata:
         name: Tomcat-ReplicationController
         labels:
            app: App
            component: neo4j
      spec:
         containers:
         - name: Tomcat ----------> 4
           image: tomcat:8.0
           ports:
           - containerPort: 7474 ----------> 5

Setup Details

kind: ReplicationController → In the above code, we have defined the kind as ReplicationController, which tells kubectl that the yaml file is going to be used to create a replication controller.

name: Tomcat-ReplicationController → This is the name with which the replication controller will be created. If we run kubectl get rc <Tomcat-ReplicationController>, it will show the replication controller details.

replicas: 3 → This tells the replication controller that it needs to maintain three replicas of the pod at any point in the pod lifecycle.

name: Tomcat → In the spec section, we have defined the name as Tomcat, which tells the replication controller that the container present inside the pods is Tomcat.

containerPort: 7474 → This makes sure that on all the nodes in the cluster where the pod is running, the container inside the pod is exposed on the same port, 7474.

Here, the Kubernetes service works as a load balancer across the three Tomcat replicas.
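As a quick illustration of how this definition is used in practice, the commands below create the replication controller, inspect it, and change its replica count. This is a sketch built from standard kubectl commands; the file name Tomcat-ReplicationController.yaml is an assumption, so substitute whatever name you saved the above definition under.

$ kubectl create -f Tomcat-ReplicationController.yaml          # create the replication controller
$ kubectl get rc Tomcat-ReplicationController                  # show desired vs. current replica count
$ kubectl scale rc Tomcat-ReplicationController --replicas=5   # scale from 3 to 5 replicas
$ kubectl delete rc Tomcat-ReplicationController               # delete the controller along with its pods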
Kubernetes – Setup

It is important to set up the Virtual Datacenter (vDC) before setting up Kubernetes. This can be considered as a set of machines that can communicate with each other over the network. For a hands-on approach, you can set up a vDC on PROFITBRICKS if you do not have a physical or cloud infrastructure set up.

Once the IaaS setup on any cloud is complete, you need to configure the Master and the Node.

Note − The setup is shown for Ubuntu machines. The same can be set up on other Linux machines as well.

Prerequisites

Installing Docker − Docker is required on all the instances of Kubernetes. Following are the steps to install Docker.

Step 1 − Log on to the machine with the root user account.

Step 2 − Update the package information. Make sure that the apt package is working.

Step 3 − Run the following commands.

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates

Step 4 − Add the new GPG key.

$ sudo apt-key adv \
   --keyserver hkp://ha.pool.sks-keyservers.net:80 \
   --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list

Step 5 − Update the apt package index.

$ sudo apt-get update

Once all the above tasks are complete, you can start with the actual installation of the Docker engine. However, before this you need to verify that the kernel version you are using is correct.

Install Docker Engine

Run the following commands to install the Docker engine.

Step 1 − Log on to the machine.

Step 2 − Update the package index.

$ sudo apt-get update

Step 3 − Install the Docker Engine using the following command.

$ sudo apt-get install docker-engine

Step 4 − Start the Docker daemon.

$ sudo service docker start

Step 5 − To verify that Docker is installed, use the following command.

$ sudo docker run hello-world

Install etcd 2.0

etcd needs to be installed on the Kubernetes Master machine. In order to install it, run the following commands.

$ curl -L https://github.com/coreos/etcd/releases/download/v2.0.0/etcd-v2.0.0-linux-amd64.tar.gz -o etcd-v2.0.0-linux-amd64.tar.gz ----------> 1
$ tar xzvf etcd-v2.0.0-linux-amd64.tar.gz ----------> 2
$ cd etcd-v2.0.0-linux-amd64 ----------> 3
$ mkdir /opt/bin ----------> 4
$ cp etcd* /opt/bin ----------> 5

In the above set of commands −

First, we download etcd and save it under the specified name.
Then, we un-tar the tar package.
Next, we make a directory named bin inside /opt.
Finally, we copy the extracted files to the target location.

Now we are ready to build Kubernetes. We need to install Kubernetes on all the machines in the cluster.

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git
$ cd kubernetes
$ make release

The above command creates an _output directory in the root of the kubernetes folder. From there, we can extract the built binaries into a directory of our choice, such as /opt/bin.

Next comes the networking part, wherein we need to actually start the setup of the Kubernetes master and node. In order to do this, we will make entries in the hosts file, which can be done on the node machine.

$ echo "<IP address of master machine> kube-master" >> /etc/hosts
$ echo "<IP address of Node machine> kube-minion" >> /etc/hosts

Now, we will start with the actual configuration of the Kubernetes Master.

First, we will copy all the configuration files to their correct locations.

$ cp <Current dir. location>/kube-apiserver /opt/bin/
$ cp <Current dir. location>/kube-controller-manager /opt/bin/
$ cp <Current dir. location>/kube-scheduler /opt/bin/
$ cp <Current dir. location>/kubecfg /opt/bin/
$ cp <Current dir. location>/kubectl /opt/bin/
$ cp <Current dir. location>/kubernetes /opt/bin/

The above commands copy all the configuration files to the required location. Now we will come back to the directory where we built the Kubernetes folder.

$ cp kubernetes/cluster/ubuntu/init_conf/kube-apiserver.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-controller-manager.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-scheduler.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-apiserver /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-controller-manager /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-scheduler /etc/init.d/
$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/

The next step is to update the copied configuration files under the /etc directory.

Configure etcd on the master using the following command.

$ ETCD_OPTS = "-listen-client-urls = http://kube-master:4001"

Configure kube-apiserver

For this, on the master, we need to edit the /etc/default/kube-apiserver file which we copied earlier.

$ KUBE_APISERVER_OPTS = "--address = 0.0.0.0
--port = 8080
--etcd_servers = <The path that is configured in ETCD_OPTS>
--portal_net = 11.1.1.0/24
--allow_privileged = false
--kubelet_port = <Port you want to configure>
--v = 0"

Configure the kube Controller Manager

We need to add the following content in /etc/default/kube-controller-manager.

$ KUBE_CONTROLLER_MANAGER_OPTS = "--address = 0.0.0.0
--master = 127.0.0.1:8080
--machines = kube-minion ----------> this is the Kubernetes node
--v = 0"

Next, configure the kube-scheduler in the corresponding file.

$ KUBE_SCHEDULER_OPTS = "--address = 0.0.0.0
--master = 127.0.0.1:8080
--v = 0"

Once all the above tasks are complete, we are good to bring up the Kubernetes Master. In order to do this, we will restart Docker.

$ service docker restart

Kubernetes Node Configuration

The Kubernetes node will run two services: the kubelet and the kube-proxy. Before moving ahead, we need to copy the binaries we downloaded to the required folders where we want to configure the Kubernetes node.

Use the same method of copying the files that we used for the Kubernetes master. As the node will only run the kubelet and the kube-proxy, we will configure them.

$ cp <Path of the extracted file>/kubelet /opt/bin/
$ cp <Path of the extracted file>/kube-proxy /opt/bin/
$ cp <Path of the extracted file>/kubecfg /opt/bin/
$ cp <Path of the extracted file>/kubectl /opt/bin/
$ cp <Path of the extracted file>/kubernetes /opt/bin/

Now, we will copy the content to the appropriate directories.

$ cp kubernetes/cluster/ubuntu/init_conf/kubelet.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-proxy.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kubelet /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-proxy /etc/init.d/
$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/

We will now configure the kubelet and kube-proxy conf files, starting with /etc/init/kubelet.conf.

$ KUBELET_OPTS = "--address = 0.0.0.0
--port = 10250
--hostname_override = kube-minion
--etcd_servers = http://kube-master:4001
--enable_server = true
--v = 0"
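Before configuring the node any further, it is worth confirming that etcd and the API server on the master actually respond. The following is a minimal sketch, assuming the host names and ports used in the configuration above (kube-master:4001 for etcd and 127.0.0.1:8080 for kube-apiserver); adjust them to your own setup.

$ curl http://kube-master:4001/version                  # etcd 2.x should report its version
$ curl http://127.0.0.1:8080/healthz                    # kube-apiserver health check, expected to return "ok"
$ /opt/bin/kubectl -s http://127.0.0.1:8080 get nodes   # list the nodes registered with the master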
Kubernetes Tutorial

Kubernetes is a container management technology developed in a Google lab to manage containerized applications in different kinds of environments, such as physical, virtual, and cloud infrastructure. It is an open source system that helps in creating and managing the containerization of applications. This tutorial provides an overview of the different features and functionalities of Kubernetes and teaches how to manage containerized infrastructure and application deployment.

Audience

This tutorial has been prepared for those who want to understand containerized infrastructure and the deployment of applications on containers. It will help in understanding the concepts of container management using Kubernetes.

Prerequisites

We assume that anyone who wants to understand Kubernetes should have an understanding of how Docker works, how Docker images are created, and how they work as standalone units. To reach an advanced configuration of Kubernetes, one should also understand basic networking and how protocol communication works.