OpenShift – Build Automation

In OpenShift, we have multiple methods of automating the build pipeline. To do that, we need to create a BuildConfig resource that describes the build flow. The flow in a BuildConfig is comparable to a job definition in Jenkins. While creating the build flow, we have to choose a build strategy.

BuildConfig File

In OpenShift, BuildConfig is a REST object used to connect to the API and create a new instance.

kind: "BuildConfig"
apiVersion: "v1"
metadata:
   name: "<Name of build config file>"
spec:
   runPolicy: "Serial"
   triggers:
   - type: "GitHub"
     github:
        secret: "<Secret file name>"
   - type: "Generic"
     generic:
        secret: "secret101"
   - type: "ImageChange"
   source:
      type: "<Source of code>"
      git:
         uri: "https://github.com/openshift/openshift-hello-world"
      dockerfile: "FROM openshift/openshift-22-centos7\nUSER example"
   strategy:
      type: "Source"
      sourceStrategy:
         from:
            kind: "ImageStreamTag"
            name: "openshift-20-centos7:latest"
   output:
      to:
         kind: "ImageStreamTag"
         name: "origin-openshift-sample:latest"
   postCommit:
      script: "bundle exec rake test"

In OpenShift, there are four types of build strategies.

Source-to-image strategy
Docker strategy
Custom strategy
Pipeline strategy

Source-to-image Strategy

This strategy allows creating container images starting from the source code. In this flow, the actual code gets downloaded into the container first and then gets compiled inside it. The compiled code gets deployed inside the same container, and the image is built from that code.

strategy:
   type: "Source"
   sourceStrategy:
      from:
         kind: "ImageStreamTag"
         name: "builder-image:latest"
      forcePull: true

There are multiple strategy policies.

Force pull
Incremental builds
External builds

Docker Strategy

In this flow, OpenShift uses the Dockerfile to build the image and then uploads the created image to the Docker registry.
strategy:
   type: "Docker"
   dockerStrategy:
      from:
         kind: "ImageStreamTag"
         name: "ubuntu:latest"

The Docker strategy can be tuned with multiple options, from the Dockerfile path to caching and pull behavior.

From image
Dockerfile path
No cache
Force pull

Custom Strategy

This is a different kind of build strategy, wherein there is no compulsion that the output of the build is going to be an image. It can be compared to a freestyle job in Jenkins. With this, we can create Jar, rpm, and other packages.

strategy:
   type: "Custom"
   customStrategy:
      from:
         kind: "DockerImage"
         name: "openshift/sti-image-builder"

It consists of multiple build options.

Expose Docker socket
Secrets
Force pull

Pipeline Strategy

The pipeline strategy is used to create custom build pipelines. It is basically used to implement a workflow in the pipeline. The build flow defines a custom pipeline using the Groovy DSL. OpenShift will create a pipeline job in Jenkins and execute it. This pipeline flow can also be used in Jenkins. In this strategy, we use a Jenkinsfile and append it to the BuildConfig definition.

strategy:
   type: "JenkinsPipeline"
   jenkinsPipelineStrategy:
      jenkinsfile: "node('agent') {\nstage 'build'\nopenshiftBuild(buildConfig: 'OpenShift-build', showBuildLogs: 'true')\nstage 'deploy'\nopenshiftDeploy(deploymentConfig: 'backend')\n}"

Using a build pipeline:

kind: "BuildConfig"
apiVersion: "v1"
metadata:
   name: "test-pipeline"
spec:
   source:
      type: "Git"
      git:
         uri: "https://github.com/openshift/openshift-hello-world"
   strategy:
      type: "JenkinsPipeline"
      jenkinsPipelineStrategy:
         jenkinsfilePath: <file path repository>
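Putting the pieces above together, a minimal source-strategy BuildConfig with a GitHub trigger might look like this. This is only a sketch: the BuildConfig name and the output image tag are invented for illustration, not taken from the tutorial.

```
kind: "BuildConfig"
apiVersion: "v1"
metadata:
   name: "hello-world-build"           # hypothetical name
spec:
   runPolicy: "Serial"
   triggers:
   - type: "GitHub"
     github:
        secret: "secret101"
   source:
      type: "Git"
      git:
         uri: "https://github.com/openshift/openshift-hello-world"
   strategy:
      type: "Source"
      sourceStrategy:
         from:
            kind: "ImageStreamTag"
            name: "openshift-20-centos7:latest"
   output:
      to:
         kind: "ImageStreamTag"
         name: "hello-world:latest"    # hypothetical output tag
```

Such a file could be created with oc create -f <file>.yaml, and a build started manually with oc start-build hello-world-build.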
OpenShift – Environment Setup

In this chapter, we will learn about the environment setup of OpenShift.

System Requirement

In order to set up enterprise OpenShift, one needs to have an active Red Hat account. As OpenShift works on the Kubernetes master and node architecture, we need to set up both of them on separate machines, wherein one machine acts as a master and the other works as a node. In order to set up both, there are minimum system requirements.

Master Machine Configuration

Following are the minimum system requirements for the master machine configuration.

A base machine hosted either on a physical, virtual, or any cloud environment.
At least Linux 7 with the required packages on that instance.
2 CPU cores.
At least 8 GB RAM.
30 GB of internal hard disk space.

Node Machine Configuration

A physical or virtual base image, as given for the master machine.
At least Linux 7 on the machine.
Docker installed, version not below 1.6.
1 CPU core.
8 GB RAM.
15 GB hard disk for hosting images and 15 GB for storing images.

Step by Step Guide to OpenShift Setup

In the following description, we are going to set up an OpenShift lab environment, which can later be extended to a bigger cluster. As OpenShift requires a master and node setup, we would need at least two machines hosted on either cloud, physical, or virtual machines.

Step 1 − First install Linux on both the machines, where Linux 7 should be the minimum version. This can be done using the following commands if one has an active Red Hat subscription.
# subscription-manager repos --disable="*"
# subscription-manager repos --enable="rhel-7-server-rpms"
# subscription-manager repos --enable="rhel-7-server-extras-rpms"
# subscription-manager repos --enable="rhel-7-server-optional-rpms"
# subscription-manager repos --enable="rhel-7-server-ose-3.0-rpms"
# yum install wget git net-tools bind-utils iptables-services bridge-utils
# yum install python-virtualenv
# yum install gcc
# yum install httpd-tools
# yum install docker
# yum update

Once we have all the above base packages installed on both of the machines, the next step would be to set up Docker on the respective machines.

Step 2 − Configure Docker so that it allows insecure communication on the local network only. For this, edit the Docker file inside /etc/sysconfig. If the file is not present, then you need to create it manually.

# vi /etc/sysconfig/docker
OPTIONS=--selinux-enabled --insecure-registry 192.168.122.0/24

After configuring Docker on the master machine, we need to set up password-less communication between both the machines. For this, we will use public and private key authentication.

Step 3 − Generate keys on the master machine and then copy the id_rsa.pub key to the authorized keys file of the node machine, which can be done using the following command.

# ssh-keygen
# ssh-copy-id -i .ssh/id_rsa.pub root@<node_ip_or_hostname>

Once you have all of the above setup in place, next is to set up OpenShift version 3 on the master machine.

Step 4 − From the master machine, run the following curl command.

# sh <(curl -s https://install.openshift.com/ose)

The above command will put the setup in place for OSV3. The next step would be to configure OpenShift V3 on the machine.
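The --insecure-registry 192.168.122.0/24 option in Step 2 whitelists every registry endpoint whose address falls inside that subnet. A quick way to check which addresses are covered, using Python's standard ipaddress module (the sample host addresses below are invented for illustration):

```python
import ipaddress

# The subnet passed to Docker's --insecure-registry option.
subnet = ipaddress.ip_network("192.168.122.0/24")

# Hypothetical registry hosts we might want to reach without TLS.
for host in ["192.168.122.5", "192.168.1.5"]:
    covered = ipaddress.ip_address(host) in subnet
    print(host, "covered" if covered else "not covered")
```

Only the first sample address is inside the /24, so only that registry would be treated as insecure by Docker.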
If you cannot download from the Internet directly, then it can be downloaded from https://install.openshift.com/portable/oo-install-ose.tgz as a tar package, from which the installer can run on the local master machine.

Once we have the setup ready, we need to start with the actual configuration of OSV3 on the machines. This setup is very specific to a test environment; for actual production, we would have LDAP and other things in place.

Step 5 − On the master machine, configure the following code located under /etc/openshift/master/master-config.yaml

# vi /etc/openshift/master/master-config.yaml
identityProviders:
- name: my_htpasswd_provider
  challenge: true
  login: true
  provider:
     apiVersion: v1
     kind: HTPasswdPasswordIdentityProvider
     file: /root/users.htpasswd
routingConfig:
   subdomain: testing.com

Next, create a standard user for default administration.

# htpasswd -c /root/users.htpasswd admin

Step 6 − As OpenShift uses a Docker registry for configuring images, we need to configure the Docker registry. This is used for creating and storing the Docker images after the build.

Create a directory on the OpenShift node machine using the following command.

# mkdir /images

Next, log in to the master machine using the default admin credentials, which get created while setting up the registry.

# oc login
Username: system:admin

Switch to the default created project.

# oc project default

Step 7 − Create a Docker registry.

# echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"registry"}}' | oc create -f -

Edit the user privileges.

# oc edit scc privileged
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:registry

Create and edit the image registry.
# oadm registry --service-account=registry \
   --config=/etc/openshift/master/admin.kubeconfig \
   --credentials=/etc/openshift/master/openshift-registry.kubeconfig \
   --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
   --mount-host=/images

Step 8 − Create a default routing. By default, OpenShift uses Open vSwitch as the software network. Use the following command to create a default routing. This is used for load balancing and proxy routing. The router is similar to the Docker registry and also runs in a registry.

# echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -

Next, edit the privileges of the user.

# oc edit scc privileged
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:registry
- system:serviceaccount:default:router

# oadm router router-1 --replicas=1 \
   --credentials='/etc/openshift/master/openshift-router.kubeconfig' \
   --images='registry.access.redhat.com/openshift3/ose-${component}:${version}'

Step 9 − Configure the DNS. In order to handle URL requests, OpenShift needs a working DNS environment. This DNS configuration is required to create a wildcard record that points to the router.

# yum install bind-utils bind
# systemctl start named
# systemctl enable named

# vi /etc/named.conf
options {
   listen-on port 53 { 10.123.55.111; };
   forwarders { 10.38.55.13; };
};

zone "lab.com" IN {
   type master;
   file "/var/named/dynamic/test.com.zone";
   allow-update { none; };
};

Step 10 − The final step would be to set up a git server on the OpenShift V3 master machine, which is optional. This can be done easily using the following sequence of commands.

# yum install curl openssh-server
# systemctl enable sshd
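Step 9 references a zone file /var/named/dynamic/test.com.zone but does not show it. A minimal sketch of such a file, with the wildcard record pointing at the router host, is given below; the nameserver host, the serial/TTL values, and the assumption that the router runs on 10.123.55.111 are all illustrative, not from the tutorial.

```
$TTL 300
@    IN SOA  ns1.lab.com. admin.lab.com. (
         2024010101 ; serial
         3600       ; refresh
         600        ; retry
         604800     ; expire
         300 )      ; negative-cache TTL
     IN NS   ns1.lab.com.
ns1  IN A    10.123.55.111
; Wildcard: every application subdomain resolves to the router host.
*    IN A    10.123.55.111
```

The wildcard A record is what lets arbitrary application routes (app1.lab.com, app2.lab.com, and so on) all resolve to the OpenShift router without adding a record per application.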
OpenShift – Clusters

OpenShift offers two installation methods for setting up an OpenShift cluster.

Quick installation method
Advanced configuration method

Setting Up Cluster

Quick Installation Method

This method is used for running a quick unattended cluster setup configuration. In order to use this method, we need to first install the installer. This can be done by running the following command.

Interactive method

$ atomic-openshift-installer install

This is useful when one wishes to run an interactive setup.

Unattended installation method

This method is used when one wishes to set up an unattended installation, wherein the user can define a configuration yaml file and place it under ~/.config/openshift/ with the name installer.cfg.yml. Then, the following command can be run with the -u flag to install.

$ atomic-openshift-installer -u install

By default, it uses the config file located under ~/.config/openshift/. Ansible, on the other hand, is used as the backend of the installation.

version: v2
variant: openshift-enterprise
variant_version: 3.1
ansible_log_path: /tmp/ansible.log
deployment:
   ansible_ssh_user: root
   hosts:
   - ip: 172.10.10.1
     hostname: vklnld908.int.example.com
     public_ip: 24.222.0.1
     public_hostname: master.example.com
     roles:
        - master
        - node
     containerized: true
     connect_to: 24.222.0.1
   - ip: 172.10.10.2
     hostname: vklnld1446.int.example.com
     public_ip: 24.222.0.2
     public_hostname: node1.example.com
     roles:
        - node
     connect_to: 10.0.0.2
   - ip: 172.10.10.3
     hostname: vklnld1447.int.example.com
     public_ip: 10.22.2.3
     public_hostname: node2.example.com
     roles:
        - node
     connect_to: 10.0.0.3
   roles:
      master:
         <variable_name1>: "<value1>"
         <variable_name2>: "<value2>"
      node:
         <variable_name1>: "<value1>"

Here, we have role-specific variables, which can be defined if one wishes to set some specific variable.

Once done, we can verify the installation using the following command.
$ oc get nodes
NAME                 STATUS    AGE
master.example.com   Ready     10d
node1.example.com    Ready     10d
node2.example.com    Ready     10d

Advanced Installation

Advanced installation is completely based on Ansible configuration, wherein the complete host configuration and the variable definitions regarding the master and node configuration are present. This contains all the details regarding the configuration.

Once we have the setup and the playbook ready, we can simply run the following command to set up the cluster.

$ ansible-playbook -i inventory/hosts ~/openshift-ansible/playbooks/byo/config.yml

Adding Hosts to a Cluster

We can add a host to the cluster using −

Quick installer tool
Advanced configuration method

The quick installation tool works in both interactive and non-interactive modes. Use the following command.

$ atomic-openshift-installer -u -c </path/to/file> scaleup

This format of scaling the cluster configuration file can be used for adding both masters as well as nodes.

Advanced Configuration Method

In this method, we update the hosts file of Ansible and then add the new node or server details in this file. The configuration file looks like the following.

[OSEv3:children]
masters
nodes
new_nodes
new_master

In the same Ansible hosts file, add variable details regarding the new node as shown below.

[new_nodes]
vklnld1448.int.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

Finally, using the updated hosts file, run the new configuration and invoke the configuration file to get the setup done using the following command.

$ ansible-playbook -i /inventory/hosts /usr/share/ansible/openshift-ansible/playbooks/test/openshift-node/scaleup.yml

Managing Cluster Logs

OpenShift cluster logs are nothing but the logs generated from the master and the node machines of the cluster. These can be any kind of log, starting from server logs, master logs, container logs, pod logs, etc. There are multiple technologies and applications present for container log management.
A few of the tools which can be implemented for log management are listed below.

Fluentd
ELK
Kibana
Nagios
Splunk

ELK stack − This stack is useful while trying to collect the logs from all the nodes and present them in a systematic format. The ELK stack is mainly divided into three major components.

ElasticSearch − Mainly responsible for collecting information from all the containers and putting it into a central location.

Fluentd − Used for feeding the collected logs to the Elasticsearch container engine.

Kibana − A graphical interface used for presenting the collected data as useful information in a graphical interface.

One key point to note is, when this system is deployed on the cluster, it starts collecting logs from all the nodes.

Log Diagnostics

OpenShift has an inbuilt oc adm diagnostics command that can be used for analyzing multiple error situations. This tool can be used from the master as a cluster administrator. This utility is very helpful in troubleshooting and diagnosing known problems. It runs on the master client and the nodes.

If run without any arguments or flags, it will look for the configuration files of the client, server, and node machines, and use them for diagnostics. One can run the diagnostics individually by passing the following arguments −

AggregatedLogging
AnalyzeLogs
ClusterRegistry
ClusterRoleBindings
ClusterRoles
ClusterRouter
ConfigContexts
DiagnosticPod
MasterConfigCheck
MasterNode
MetricsApiProxy
NetworkCheck
NodeConfigCheck
NodeDefinitions
ServiceExternalIPs
UnitStatus

One can simply run them with the following command.

$ oc adm diagnostics <DiagnosticName>

Upgrading a Cluster

Upgrading the cluster involves upgrading multiple things within the cluster and getting the cluster updated with new components and upgrades.
This involves −

Upgrading the master components
Upgrading the node components
Upgrading the policies
Upgrading the routes
Upgrading the image streams

In order to perform all these upgrades, we need to first get the quick installers or utils in place. For that, we need to update the following utilities −

atomic-openshift-utils
atomic-openshift-excluder
atomic-openshift-docker-excluder
etcd package

Before starting the upgrade, we need to back up etcd on the master machine, which can be done using the following commands.

$ ETCD_DATA_DIR=/var/lib/origin/openshift.local.etcd
$ etcdctl backup --data-dir $ETCD_DATA_DIR --backup-dir $ETCD_DATA_DIR.bak.<date>

Upgrading the Master Components

In the OpenShift master, we start the upgrade by updating the etcd file and then moving on to Docker. Finally, we run the automated executor to get the cluster into the required position. However, before starting the upgrade we need to first activate the atomic openshift packages on each of the masters. This can be done using the following commands.

Step 1 − Remove the atomic-openshift packages from the yum exclude list.

$ atomic-openshift-excluder unexclude

Step 2 − Upgrade etcd on all the masters.

$ yum update etcd

Step 3 − Restart the etcd service and check if it has started successfully.

$ systemctl restart etcd
$ journalctl -r -u etcd

Step 4 − Upgrade the Docker package.

$ yum
OpenShift – CLI Operations

The OpenShift CLI is capable of performing all basic and advanced configuration, management, addition, and deployment of applications. We can perform different kinds of operations using oc commands. This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand.

Basic Commands

Following are the basic oc commands.

1. types − An introduction to concepts and types
2. login − Log in to a server
3. new-project − Request a new project
4. new-app − Create a new application
5. status − Show an overview of the current project
6. project − Switch to another project
7. projects − Display existing projects
8. explain − Documentation of resources
9. cluster − Start and stop an OpenShift cluster

Login

Log in to your server and save the login for subsequent use. First-time users of the client should run this command to connect to a server, establish an authenticated session, and save a connection to the configuration file. The default configuration will be saved to your home directory under ".kube/config".

The information required to log in — like username and password, a session token, or the server details — can be provided through flags. If not provided, the command will prompt for user input as needed.

Usage

oc login [URL] [options]

Example

# Log in interactively
oc login

# Log in to the given server with the given certificate authority file
oc login localhost:8443 --certificate-authority=/path/to/cert.crt

# Log in to the given server with the given credentials (will not prompt interactively)
oc login localhost:8443 --username=myuser --password=mypass

Options −

-p, --password="" − Password, will prompt if not provided
-u, --username="" − Username, will prompt if not provided
--certificate-authority="" − Path to a cert file for the certificate authority
--insecure-skip-tls-verify=false − If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--token="" − Bearer token for authentication to the API server

To get the complete details regarding any command, use the oc <command name> --help command.

Build and Deploy Commands

Following are the build and deploy commands.

1. rollout − Manage a Kubernetes deployment or OpenShift deployment
2. deploy − View, start, cancel, or retry a deployment
3. rollback − Revert part of an application back to the previous state
4. new-build − Create a new build configuration
5. start-build − Start a new build
6. cancel-build − Cancel running, pending, or new builds
7. import-image − Import images from a Docker registry
8. tag − Tag the existing images into image streams

Application Management Commands

Following are the application management commands.

1. get − Display one or many resources
2. describe − Show details of a specific resource or a group of resources
3. edit − Edit a resource on the server
4. set − Commands that help set specific features on objects
5. label − Update the labels on a resource
6. annotate − Update the annotations on a resource
7. expose − Expose a replicated application as a service or route
8. delete − Delete one or more resources
9. scale − Change the number of pods in a deployment
10. autoscale − Autoscale a deployment config, deployment, replication controller, or replica set
11. secrets − Manage secrets
12. serviceaccounts − Manage service accounts in your project

Troubleshooting and Debugging Commands

Following are the troubleshooting and debugging commands.

1. logs − Print the logs for a resource
2. rsh − Start a shell session in a pod
3. rsync − Copy files between the local filesystem and a pod
4. port-forward − Forward one or more local ports to a pod
5. debug − Launch a new instance of a pod for debugging
6. exec − Execute a command in a container
7. proxy − Run a proxy to the Kubernetes API server
8. attach − Attach to a running container
9. run − Run a particular image on the cluster
10. cp − Copy files and directories to and from containers

Advanced Commands

Following are the advanced commands.

1. adm − Tools for managing a cluster
2. create − Create a resource by filename or stdin
3. replace − Replace a resource by filename or stdin
4. apply − Apply a configuration to a resource by filename or stdin
5. patch − Update field(s) of a resource using a strategic merge patch
6. process − Process a template into a list of resources
7. export − Export resources so they can be used elsewhere
8. extract − Extract secrets or config maps to disk
9. idle − Idle scalable resources
10. observe − Observe changes to the resources and react to them (experimental)
11. policy − Manage authorization policy
12. auth − Inspect authorization
13. convert − Convert config files between different API versions
14. import − Commands that import applications

Setting Commands

Following are the setting commands.

1. logout − End the current server session
2. config − Change the configuration files for the client
3. whoami − Return information about the current session
4. completion − Output shell completion code for the specified shell (bash or zsh)
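Commands like oc get can also emit machine-readable output with -o json, which makes scripting on top of the CLI straightforward. The sketch below filters running pods from such output; the JSON is a hardcoded, invented sample of what oc get pods -o json might return, with made-up pod names.

```python
import json

# Invented sample of: oc get pods -o json
sample = """
{
  "kind": "List",
  "items": [
    {"metadata": {"name": "frontend-1-abcde"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "db-1-xyz12"}, "status": {"phase": "Pending"}}
  ]
}
"""

# Keep only the pods whose phase is Running.
pods = json.loads(sample)["items"]
running = [p["metadata"]["name"] for p in pods if p["status"]["phase"] == "Running"]
print(running)  # → ['frontend-1-abcde']
```

In a real script, the sample string would instead come from running the oc command and capturing its standard output.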
OpenShift – Architecture

OpenShift is a layered system wherein each layer is tightly bound with the other layers using Kubernetes and Docker clustering. The architecture of OpenShift is designed in such a way that it can support and manage Docker containers, which are hosted on top of all the layers using Kubernetes. Unlike the earlier version, OpenShift V2, the new version OpenShift V3 supports containerized infrastructure. In this model, Docker helps in the creation of lightweight Linux-based containers, and Kubernetes supports the task of orchestrating and managing containers on multiple hosts.

Components of OpenShift

One of the key aspects of the OpenShift architecture is the containerized infrastructure managed by Kubernetes. Kubernetes is responsible for the deployment and management of the infrastructure. In any Kubernetes cluster, we can have more than one master and multiple nodes, which ensures there is no single point of failure in the setup.

Kubernetes Master Machine Components

Etcd − It stores the configuration information, which can be used by each of the nodes in the cluster. It is a high-availability key-value store that can be distributed among multiple nodes. It should only be accessible by the Kubernetes API server, as it may have sensitive information.

API Server − Kubernetes provides an API server which exposes all the operations on the cluster via the API. The API server implements an interface, which means different tools and libraries can readily communicate with it. A kubeconfig is a package, along with the server-side tools, that can be used for communication.

Controller Manager − This component is responsible for most of the controllers that regulate the state of the cluster and perform tasks. It can be considered as a daemon which runs in a non-terminating loop and is responsible for collecting and sending information to the API server.
It works towards getting the shared state of the cluster and then makes changes to bring the current status of the server to the desired state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. The controller manager runs different kinds of controllers to handle nodes, endpoints, etc.

Scheduler − It is a key component of the Kubernetes master. It is a service in the master which is responsible for distributing the workload. It tracks the utilization of the working load on the cluster nodes and places the workload where resources are available. In other words, this is the mechanism responsible for allocating pods to available nodes.

Kubernetes Node Components

Following are the key components of the node server, which are necessary to communicate with the Kubernetes master.

Docker − The first requirement of each node is Docker, which helps in running the encapsulated application containers in a relatively isolated but lightweight operating environment.

Kubelet Service − This is a small service in each node, which is responsible for relaying information to and from the control plane service. It interacts with the etcd store to read the configuration details and write values. It communicates with the master component to receive commands and work. The kubelet process then assumes responsibility for maintaining the state of work and the node server. It manages network rules, port forwarding, etc.

Kubernetes Proxy Service − This is a proxy service which runs on each node and helps in making the services available to the external host. It helps in forwarding requests to the correct containers. The Kubernetes Proxy Service is capable of carrying out primitive load balancing.
It makes sure that the networking environment is predictable and accessible, but at the same time isolated as well. It manages pods on the node, volumes, secrets, the creation of new containers, health checkups, etc.

Integrated OpenShift Container Registry

The OpenShift container registry is an inbuilt storage unit of Red Hat, which is used for storing Docker images. With the latest integrated version of OpenShift, it has come up with a user interface to view images in OpenShift internal storage. These registries are capable of holding images with specified tags, which are later used to build containers out of them.

Frequently Used Terms

Image − Kubernetes (Docker) images are the key building blocks of containerized infrastructure. As of now, Kubernetes only supports Docker images. Each container in a pod has its Docker image running inside it. When configuring a pod, the image property in the configuration file has the same syntax as the Docker command.

Project − It can be defined as the renamed version of the domain, which was present in the earlier version, OpenShift V2.

Container − Containers are the ones which are created after the image is deployed on a Kubernetes cluster node.

Node − A node is a working machine in the Kubernetes cluster, which is also known as a minion for the master. They are working units which can be physical, VM, or cloud instances.

Pod − A pod is a collection of containers and their storage inside a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it. For example, keeping the database container and web server container inside the pod.
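The database-plus-web-server example above can be sketched as a pod definition. This is only an illustration: the pod name, image choices, and environment value are assumptions, not part of the tutorial.

```
apiVersion: v1
kind: Pod
metadata:
   name: web-with-db            # hypothetical pod name
spec:
   containers:
   - name: web-server
     image: nginx:latest        # assumed web server image
     ports:
     - containerPort: 80
   - name: database
     image: mysql:5.7           # assumed database image
     env:
     - name: MYSQL_ROOT_PASSWORD
       value: "example"         # illustrative only; use a secret in practice
```

Both containers share the pod's network namespace, so the web server could reach the database on localhost. A pod like this could be created with oc create -f pod.yaml.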
OpenShift – Overview

OpenShift is a cloud development Platform as a Service (PaaS) hosted by Red Hat. It's an open source, cloud-based, user-friendly platform used to create, test, and run applications, and finally deploy them on the cloud.

OpenShift is capable of managing applications written in different languages, such as Node.js, Ruby, Python, Perl, and Java. One of the key features of OpenShift is that it is extensible, which helps the users support applications written in other languages.

OpenShift comes with various concepts of virtualization as its abstraction layer. The underlying concept behind OpenShift is based on virtualization.

Virtualization

In general, virtualization can be defined as the creation of a virtual system rather than a physical or actual version of anything, starting from a system, storage, or an operating system. The main goal of virtualization is to make the IT infrastructure more scalable and reliable. The concept of virtualization has been in existence for decades, and with the evolution of the IT industry today, it can be applied to a wide range of layers, starting from system level and hardware level up to server level virtualization.

How It Works

It can be described as a technology in which any application or operating system is abstracted from its actual physical layer. One key use of virtualization technology is server virtualization, which uses software called a hypervisor to abstract the layer from the underlying hardware. The performance of an operating system running on virtualization is as good as when it is running on the physical hardware. However, the concept of virtualization is popular because most of the systems and applications running do not require the use of the underlying hardware.

Physical vs Virtual Architecture

Types of Virtualization

Application Virtualization − In this method, the application is abstracted from the underlying operating system.
This method is very useful, as the application can run in isolation without being dependent on the operating system underneath.

Desktop Virtualization − This method is used to reduce the workstation load, in which one can access the desktop remotely using a thin client at the desk. In this method, the desktops mostly run in a datacenter. A classic example is a Virtual Desktop Image (VDI), which is used in most organizations.

Data Virtualization − It is a method of abstracting away from the traditional method of data and data management.

Server Virtualization − In this method, server-related resources are virtualized, which includes the physical server, processes, and operating system. The software which enables this abstraction is often referred to as the hypervisor.

Storage Virtualization − It is the process of pooling multiple storage devices into a single storage device that is managed from a single central console.

Network Virtualization − It is the method in which all available network resources are combined by splitting up the available bandwidth and channels, each of which is independent of the others.

OpenShift

OpenShift is a cloud-enabled application Platform as a Service (PaaS). It's an open source technology which helps organizations move their traditional application infrastructure and platform from physical and virtual mediums to the cloud.

OpenShift supports a very large variety of applications, which can be easily developed and deployed on the OpenShift cloud platform. OpenShift basically supports three kinds of platforms for developers and users.

Infrastructure as a Service (IaaS)

In this format, the service provider provides hardware-level virtual machines with some pre-defined virtual hardware configuration. There are multiple competitors in this space, starting from AWS, Google Cloud, Rackspace, and many more.
The main drawback of IaaS is that, even after a long procedure of setup and investment, one is still responsible for installing and maintaining the operating system and server packages, managing the network of the infrastructure, and taking care of basic system administration.

Software as a Service (SaaS)

With SaaS, one has the least worry about the underlying infrastructure. It is as simple as plug and play, wherein the user just has to sign up for the services and start using them. The main drawback with this setup is that one can only perform the minimal amount of customization allowed by the service provider. One of the most common examples of SaaS is Gmail, where the user just needs to log in and start using it. The user can also make some minor modifications to his account. However, it is not very useful from the developer's point of view.

Platform as a Service (PaaS)

PaaS can be considered as a middle layer between SaaS and IaaS. It primarily targets developers, for whom a development environment can be spun up with a few commands. These environments are designed in such a way that they can satisfy all the development needs, right from a web application server to a database. In most cases, a single command suffices and the service provider sets everything up for you.

Why Use OpenShift?

OpenShift provides a common platform for enterprise units to host their applications on the cloud without worrying about the underlying operating system. This makes it very easy to use, develop, and deploy applications on the cloud. One of the key features is that it provides managed hardware and network resources for all kinds of development and testing. With OpenShift, PaaS developers have the freedom to design their required environment with specifications.

OpenShift provides different kinds of service level agreements when it comes to service plans.

Free − This plan is limited to three gears with 1GB space for each.
Bronze − This plan includes 3 gears and expands up to 16 gears, with 1GB space per gear.

Silver − This is the 16-gear Bronze plan; however, it has a storage capacity of 6GB with no additional cost.

Other than the above features, OpenShift also offers an on-premises version known as OpenShift Enterprise. In OpenShift, developers have the leverage to design scalable and non-scalable applications, and these designs are implemented using HAProxy servers.

Features

There are multiple features supported by OpenShift.
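The scalable designs mentioned above ultimately come down to a replica count on a deployment, which OpenShift keeps satisfied behind its load-balancing layer. The following is a minimal sketch only; the resource name "frontend" and the replica count are illustrative placeholders, not part of this tutorial's sample application.

```yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
   name: frontend           # hypothetical deployment name
spec:
   replicas: 3              # OpenShift maintains three pod copies and balances traffic across them
   selector:
      name: frontend        # pods carrying this label are managed by this deployment
```

Scaling up or down is then a matter of changing the replica count, while non-scalable designs simply pin it to one.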
OpenShift – Basic Concept

Before beginning with the actual setup and deployment of applications, we need to understand some basic terms and concepts used in OpenShift V3.

Containers and Images

Images

These are the basic building blocks of OpenShift, which are formed out of Docker images. Each pod on OpenShift has its own images running inside it. When we configure a pod, we have a field which will get pulled from the registry. This configuration file will pull the image and deploy it on the cluster node.

apiVersion: v1
kind: Pod
metadata:
   name: Tesing_for_Image_pull -----------> Name of Pod
spec:
   containers:
   - name: neo4j-server ------------------------> Name of the container
     image: <Name of the Docker image> ----------> Image to be pulled
     imagePullPolicy: Always -------------> Image pull policy
     command: ["echo", "SUCCESS"] -------------------> Message after image pull

In order to pull and create an image out of it, run the following command. oc is the client used to communicate with the OpenShift environment after login.

$ oc create -f Tesing_for_Image_pull

Container

This gets created when the Docker image gets deployed on the OpenShift cluster. While defining any configuration, we define the container section in the configuration file. One pod can have multiple containers running inside it, and all the containers running on a cluster node are managed by OpenShift Kubernetes.

spec:
   restartPolicy: Never --------> Restart policy (defined at the pod level)
   containers:
   - name: py ------------------------> Name of the container
     image: python ----------> Image going to get deployed in the container
     command: ["python", "SUCCESS"]

Following is the specification for defining a pod with multiple containers running inside it.
apiVersion: v1
kind: Pod
metadata:
   name: Tomcat
spec:
   containers:
   - name: Tomcat
     image: tomcat:8.0
     ports:
     - containerPort: 7500
     imagePullPolicy: Always
   - name: Database
     image: mongoDB
     ports:
     - containerPort: 7501
     imagePullPolicy: Always

In the above configuration, we have defined a multi-container pod with two images, Tomcat and MongoDB, inside it.

Pods and Services

Pods

A pod can be defined as a collection of containers and their storage inside a node of an OpenShift (Kubernetes) cluster. In general, we have two types of pods: single-container pods and multi-container pods.

Single-Container Pod − These can be easily created with an oc command or with a basic configuration yml file.

$ oc run <name of pod> --image=<name of the image from registry>

Create it with a simple yaml file as follows.

apiVersion: v1
kind: Pod
metadata:
   name: apache
spec:
   containers:
   - name: apache
     image: apache:8.0
     ports:
     - containerPort: 7500
     imagePullPolicy: Always

Once the above file is created, it will generate a pod with the following command.

$ oc create -f apache.yml

Multi-Container Pod − Multi-container pods are those in which we have more than one container running inside. They are created using yaml files as follows.

apiVersion: v1
kind: Pod
metadata:
   name: Tomcat
spec:
   containers:
   - name: Tomcat
     image: tomcat:8.0
     ports:
     - containerPort: 7500
     imagePullPolicy: Always
   - name: Database
     image: mongoDB
     ports:
     - containerPort: 7501
     imagePullPolicy: Always

After creating these files, we can simply use the same method as above to create a container.

Service − As we have a set of containers running inside a pod, in the same way we have a service, which can be defined as a logical set of pods. It is an abstracted layer on top of the pods, which provides a single IP and DNS name through which the pods can be accessed. A service helps in managing the load balancing configuration and scaling pods very easily.
In OpenShift, a service is a REST object whose definition can be posted to the apiService on the OpenShift master to create a new instance.

apiVersion: v1
kind: Service
metadata:
   name: Tutorial_point_service
spec:
   ports:
   - port: 8080
     targetPort: 31999

Builds and Streams

Builds

In OpenShift, a build is the process which converts source code into an image, which then runs as a container. This build process works on a pre-defined strategy of building source code into an image. The build supports multiple strategies and sources.

Build Strategies

Source to Image − This is basically a tool which helps in building reproducible images. These images are always in a ready state to run using the Docker run command.

Docker Build − This is the process in which the images are built using a Dockerfile by running a simple Docker build command.

Custom Build − These are the builds which are used for creating base Docker images.

Build Sources

Git − This source is used when a git repository is used for building images. The Dockerfile is optional. The configuration from the source code looks like the following.

source:
   type: "Git"
   git:
      uri: "https://github.com/vipin/testing.git"
      ref: "master"
      contextDir: "app/dir"
   dockerfile: "FROM openshift/ruby-22-centos7\nUSER example"

Dockerfile − The Dockerfile is used as an input in the configuration file.

source:
   type: "Dockerfile"
   dockerfile: "FROM ubuntu:latest\nRUN yum install -y httpd"

Image Streams − Image streams are created after pulling the images. The advantage of an image stream is that it looks for updates to new versions of an image. It is used to compare any number of Docker-formatted container images identified by tags. Image streams can automatically perform an action when a new image is created. All builds and deployments can watch for an image action and act accordingly. Following is how we define an image stream.
apiVersion: v1
kind: ImageStream
metadata:
   annotations:
      openshift.io/generated-by: OpenShiftNewApp
   generation: 1
   labels:
      app: ruby-sample-build
   selfLink: /oapi/v1/namespaces/test/imagestreams/origin-ruby-sample
   uid: ee2b9405-c68c-11e5-8a99-525400f25e34
spec: {}
status:
   dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample
   tags:
   - items:
     - created: 2016-01-29T13:40:11Z
       dockerImageReference: 172.30.56.218:5000/test/origin-apache-sample
       generation: 1
       image: vklnld908.int.clsa.com/vipin/test
     tag: latest

Routes and Templates

Routes

In OpenShift, routing is a method of exposing a service to the external world by creating and configuring an externally reachable hostname. Routes and endpoints are used to expose the service to the external world, from where the user can use name-based connectivity (DNS) to access the defined application. In OpenShift, routes are created by using routers, which are deployed by the OpenShift admin on the cluster. Routers are used to bind to HTTP (80) and HTTPS (443) ports.
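The routing description above can be sketched as a minimal Route definition. This is a hedged illustration only; the route name, hostname, and service name below are placeholders, mirroring the route-edge/frontend naming used in the template walkthrough later in this tutorial.

```yaml
apiVersion: v1
kind: Route
metadata:
   name: route-edge           # hypothetical route name
spec:
   host: www.example.com      # externally reachable hostname bound by the router
   to:
      kind: Service
      name: frontend          # service that receives the routed traffic
```

Once such a route is created, the router resolves the hostname and forwards external HTTP/HTTPS traffic to the pods behind the named service.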
OpenShift – Getting Started

OpenShift supports two mediums to create and deploy applications: a GUI and a CLI. In this chapter, we will be using the CLI to create a new application, with the oc client to communicate with the OpenShift environment.

Creating a New Application

In OpenShift, there are three methods of creating a new application.

From a source code
From an image
From a template

From a Source Code

When we try to create an application from source code, OpenShift looks for a Docker file that should be present inside the repo, which defines the application build flow. We will use oc new-app to create an application.

The first thing to keep in mind while using a repo is that it should point to an origin in the repo, from where OpenShift will pull the code and build it.

If the repo is cloned on the Docker machine where the oc client is installed and the user is inside the same directory, then the application can be created using the following command.

$ oc new-app .   (Here, "." denotes the current working directory)

Following is an example of trying to build from a remote repo for a specific branch.

$ oc new-app https://github.com/openshift/Testing-deployment.git#test1

Here, test1 is the branch from where we are trying to create a new application in OpenShift.

When specifying a Docker file in the repository, we need to define the build strategy as shown below.

$ oc new-app OpenShift/OpenShift-test~https://github.com/openshift/Testing-deployment.git

From an Image

While building an application using images, the images can be present on the local Docker server, in an in-house hosted Docker repository, or on Docker Hub. The only thing a user needs to make sure of is that he has access to pull images from the hub without any issue. OpenShift has the capability to determine the source used, whether it is a Docker image or an image stream. However, if the user wishes, he can explicitly define whether it is an image stream or a Docker image.
Using a Docker image −

$ oc new-app --docker-image tomcat

Using an image stream −

$ oc new-app tomcat:v1

From a Template

Templates can be used for the creation of a new application. It can be an already existing template or a new one. The following yaml file is basically a template that can be used for deployment.

apiVersion: v1
kind: Template
metadata:
   name: <Name of template>
   annotations:
      description: <Description of Tag>
      iconClass: "icon-redis"
   tags: <Tags of image>
objects:
- apiVersion: v1
  kind: Pod
  metadata:
     name: <Object Specification>
  spec:
     containers:
     - image: <Image Name>
       name: master
       ports:
       - containerPort: <Container port number>
         protocol: <Protocol>
  labels:
     redis: <Communication Type>

Develop and Deploy a Web Application

Developing a New Application in OpenShift

In order to create a new application in OpenShift, we have to write the new application code and build it using OpenShift oc build commands. As discussed, we have multiple ways of creating a new image. Here, we will be using a template to build the application. This template will build a new application when run with the oc new-app command.

The following template will create two front-end applications and one database. Along with that, it will create two new services, and those applications will get deployed to the OpenShift cluster. While building and deploying an application, initially we need to create a namespace in OpenShift and deploy the application under that namespace.
Create a new namespace

$ oc new-project openshift-test --display-name="OpenShift 3 Sample" --description="This is an example project to demonstrate OpenShift v3"

Template

{
   "kind": "Template",
   "apiVersion": "v1",
   "metadata": {
      "name": "openshift-helloworld-sample",
      "creationTimestamp": null,
      "annotations": {
         "description": "This example shows how to create a simple openshift application in openshift origin v3",
         "iconClass": "icon-openshift",
         "tags": "instant-app,openshift,mysql"
      }
   }
},

Object Definitions

Secret definition in a template

"objects": [
   {
      "kind": "Secret",
      "apiVersion": "v1",
      "metadata": {"name": "dbsecret"},
      "stringData": {
         "mysql-user": "${MYSQL_USER}",
         "mysql-password": "${MYSQL_PASSWORD}"
      }
   },

Service definition in a template

   {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
         "name": "frontend",
         "creationTimestamp": null
      },
      "spec": {
         "ports": [
            {
               "name": "web",
               "protocol": "TCP",
               "port": 5432,
               "targetPort": 8080,
               "nodePort": 0
            }
         ],
         "selector": {"name": "frontend"},
         "type": "ClusterIP",
         "sessionAffinity": "None"
      },
      "status": {
         "loadBalancer": {}
      }
   },

Route definition in a template

   {
      "kind": "Route",
      "apiVersion": "v1",
      "metadata": {
         "name": "route-edge",
         "creationTimestamp": null,
         "annotations": {
            "template.openshift.io/expose-uri": "http://{.spec.host}{.spec.path}"
         }
      },
      "spec": {
         "host": "www.example.com",
         "to": {
            "kind": "Service",
            "name": "frontend"
         },
         "tls": {
            "termination": "edge"
         }
      },
      "status": {}
   },
   {
      "kind": "ImageStream",
      "apiVersion": "v1",
      "metadata": {
         "name": "origin-openshift-sample",
         "creationTimestamp": null
      },
      "spec": {},
      "status": {
         "dockerImageRepository": ""
      }
   },
   {
      "kind": "ImageStream",
      "apiVersion": "v1",
      "metadata": {
         "name": "openshift-22-ubuntu7",
         "creationTimestamp": null
      },
      "spec": {
         "dockerImageRepository": "ubuntu/openshift-22-ubuntu7"
      },
      "status": {
         "dockerImageRepository": ""
      }
   },

Build config definition in a template

   {
      "kind": "BuildConfig",
      "apiVersion": "v1",
      "metadata": {
         "name": "openshift-sample-build",
         "creationTimestamp": null,
         "labels": {"name": "openshift-sample-build"}
      },
      "spec": {
         "triggers": [
            {
               "type": "GitHub",
               "github": { "secret": "secret101" }
            },
            {
               "type": "Generic",
               "generic": { "secret": "secret101", "allowEnv": true }
            },
            {
               "type": "ImageChange",
               "imageChange": {}
            },
            { "type": "ConfigChange" }
         ],
         "source": {
            "type": "Git",
            "git": { "uri": "https://github.com/openshift/openshift-hello-world.git" }
         },
         "strategy": {
            "type": "Docker",
            "dockerStrategy": {
               "from": {
                  "kind": "ImageStreamTag",
                  "name": "openshift-22-ubuntu7:latest"
               },
               "env": [
                  { "name": "EXAMPLE", "value": "sample-app" }
               ]
            }
         },
         "output": {
            "to": {
               "kind": "ImageStreamTag",
               "name": "origin-openshift-sample:latest"
            }
         },
         "postCommit": {
            "args": ["bundle", "exec", "rake", "test"]
         }
      },
      "status": { "lastVersion": 0 }
   },

Deployment config in a template

   {
      "kind": "DeploymentConfig",
      "apiVersion": "v1",
      "metadata": {
         "name": "frontend",
         "creationTimestamp": null
      },
      "spec": {
         "strategy": {
            "type": "Rolling",
            "rollingParams": {
               "updatePeriodSeconds": 1,
               "intervalSeconds": 1,
               "timeoutSeconds": 120,
               "pre": {
                  "failurePolicy": "Abort",
                  "execNewPod": {
                     "command": [ "/bin/true" ],
                     "env": [
                        { "name": "CUSTOM_VAR1", "value": "custom_value1" }
                     ]
                  }
               }
            }
         },
         "triggers": [
            {
               "type": "ImageChange",
               "imageChangeParams": {
                  "automatic": true,
                  "containerNames": [ "openshift-helloworld" ],
                  "from": {
                     "kind": "ImageStreamTag",
                     "name": "origin-openshift-sample:latest"
                  }
               }
            },
            { "type": "ConfigChange" }
         ],
         "replicas": 2,
         "selector": {
            "name":