OpenShift – Types

OpenShift – Types

OpenShift evolved from its earlier version, OpenShift V2, which was mainly based on the concept of gears and cartridges, where each component had its own specification, from machine creation to application deployment, right from building to deploying the application.

Cartridges − They were the focal point of building a new application, starting from the type of application the environment requires to run it, along with all the dependencies satisfied in this section.

Gear − It can be defined as a bare metal machine or server with certain specifications regarding resources, memory, and CPU. Gears were considered the fundamental unit for running an application.

Application − This simply refers to the application or any integration application that gets deployed and run in the OpenShift environment.

As we go deeper into this section, we will discuss the different formats and offerings of OpenShift. In the earlier days, OpenShift had three major versions.

OpenShift Origin − This was the community edition or open source version of OpenShift. It was also known as the upstream project for the other two versions.

OpenShift Online − It is a public PaaS offered as a service hosted on AWS.

OpenShift Enterprise − It is the hardened version of OpenShift with ISV and vendor licenses.

OpenShift Online

OpenShift Online is an offering of the OpenShift community using which one can quickly build, deploy, and scale containerized applications on the public cloud. It is Red Hat's public cloud application development and hosting platform, which enables automated provisioning, management, and scaling of applications, helping the developer focus on writing application logic.

Setting Up an Account on Red Hat OpenShift Online

Step 1 − Go to your browser and visit the site.

Step 2 − If you have a Red Hat account, log in to the OpenShift account using the Red Hat login ID and password using the following URL.

Step 3 − If you do not have a Red Hat account login, then sign up for the OpenShift Online service using the following link.

After login, you will see the following page. Once you have all the things in place, Red Hat will show some basic account details as shown in the following screenshot. Finally, when you are logged in, you will see the following page.

OpenShift Container Platform

OpenShift Container Platform is an enterprise platform which helps multiple teams, such as the development and IT operations teams, to build and deploy containerized infrastructure. All the containers built in OpenShift use the very reliable Docker containerization technology, which can be deployed on any data center or publicly hosted cloud platform.

OpenShift Container Platform was formerly known as OpenShift Enterprise. It is a Red Hat on-premise private Platform as a Service, built on the core concept of application containers powered by Docker, where orchestration and administration are managed by Kubernetes.

In other words, OpenShift brings Docker and Kubernetes together at the enterprise level. It is container platform software that lets enterprise units deploy and manage applications on an infrastructure of their own choice, for example, hosting OpenShift instances on AWS instances.

OpenShift Container Platform is available in two package levels.

OpenShift Container Local − This is for those developers who wish to deploy and test applications on the local machine. This package is mainly used by development teams for developing and testing applications.

OpenShift Container Lab − This is designed for extended evaluation of an application, starting from development up to deployment in a pre-prod environment.

OpenShift Dedicated

This is another offering added to the portfolio of OpenShift, wherein the customer can choose to host a containerized platform on any public cloud of their choice. This gives the end user a true sense of a multi-cloud offering, where they can use OpenShift on any cloud which satisfies their needs. This is one of the newer offerings of Red Hat, where the end user can use OpenShift to build, test, deploy, and run their application on OpenShift hosted on the cloud.

Features of OpenShift Dedicated

OpenShift Dedicated offers a customized application platform solution on the public cloud and is inherited from OpenShift 3 technology.

Extensible and Open − This is built on the open concept of Docker and deployed on the cloud, because of which it can expand itself as and when required.

Portability − As it is built using Docker, applications running on Docker can easily be shipped from one place to another, wherever Docker is supported.

Orchestration − With OpenShift 3, container orchestration and cluster management are supported using Kubernetes, which came into the offering with OpenShift version 3.

Automation − This version of OpenShift is enabled with the features of source code management, build automation, and deployment automation, which makes it very popular in the market as a Platform as a Service provider.

Competitors of OpenShift

Google App Engine − This is Google's free platform for developing and hosting web applications. Google's App Engine offers a fast development and deployment platform.

Microsoft Azure − Azure cloud is hosted by Microsoft on their data centers.

Amazon Elastic Compute Cloud − These are built-in services provided by Amazon, which help in developing and hosting scalable web applications on the cloud.

Cloud Foundry − It is an open source PaaS platform for Java, Ruby, Python, and Node.js applications.

CloudStack − Apache's CloudStack is a project developed by Citrix and is designed to become a direct competitor of OpenShift and OpenStack.

OpenStack − Another cloud technology provided by Red Hat for cloud computing.

Kubernetes − It is a direct orchestration and cluster management technology built to manage Docker containers.

OpenShift – Administration

OpenShift – Administration

In this chapter, we will cover topics such as how to manage a node, configure a service account, etc.

Master and Node Configuration

In OpenShift, we use the openshift start command to boot up a new server. While launching a new master, we need to use the master keyword along with the start command, whereas while starting a new node we need to use the node keyword along with the start command. In order to do this, we need to create configuration files for the master as well as for the nodes. We can create a basic configuration file for the master and the node using the following commands.

For the master configuration file
$ openshift start master --write-config=/openshift.local.config/master

For the node configuration file
$ oadm create-node-config --node-dir=/openshift.local.config/node-<node_hostname> --node=<node_hostname> --hostnames=<hostname>,<ip_address>

Once we run the above commands, we will get the base configuration files that can be used as the starting point for configuration. Later, we can use the same files to boot the new servers.

apiLevels:
- v1beta3
- v1
apiVersion: v1
assetConfig:
  logoutURL: ""
  masterPublicURL: https://172.10.12.1:7449
  publicURL: https://172.10.2.2:7449/console/
  servingInfo:
    bindAddress: 0.0.0.0:7449
    certFile: master.server.crt
    clientCA: ""
    keyFile: master.server.key
    maxRequestsInFlight: 0
    requestTimeoutSeconds: 0
controllers: '*'
corsAllowedOrigins:
- 172.10.2.2:7449
- 127.0.0.1
- localhost
dnsConfig:
  bindAddress: 0.0.0.0:53
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://10.0.2.15:4001
etcdConfig:
  address: 10.0.2.15:4001
  peerAddress: 10.0.2.15:7001
  peerServingInfo:
    bindAddress: 0.0.0.0:7001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  servingInfo:
    bindAddress: 0.0.0.0:4001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  storageDirectory: /root/openshift.local.etcd
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: MasterConfig
kubeletClientInfo:
  ca: ca.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiLevels:
  - v1beta3
  - v1
  apiServerArguments: null
  controllerArguments: null
  masterCount: 1
  masterIP: 10.0.2.15
  podEvictionTimeout: 5m
  schedulerConfigFile: ""
  servicesNodePortRange: 30000-32767
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
masterClients:
  externalKubernetesKubeConfig: ""
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://172.10.2.2:7449
networkConfig:
  clusterNetworkCIDR: 10.1.0.0/16
  hostSubnetLength: 8
  networkPluginName: ""
  serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
  assetPublicURL: https://172.10.2.2:7449/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    name: anypassword
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider
  masterPublicURL: https://172.10.2.2:7449/
  masterURL: https://172.10.2.2:7449/
  sessionConfig:
    sessionMaxAgeSeconds: 300
    sessionName: ssn
    sessionSecretsFile: ""
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 300
policyConfig:
  bootstrapPolicyFile: policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: ""
  projectRequestTemplate: ""
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: router.default.svc.cluster.local
serviceAccountConfig:
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 0
  requestTimeoutSeconds: 3600

Node configuration file

allowDisabledDocker: true
apiVersion: v1
dnsDomain: cluster.local
dnsIP: 172.10.2.2
dockerConfig:
  execHandlerName: native
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: NodeConfig
masterKubeConfig: node.kubeconfig
networkConfig:
  mtu: 1450
  networkPluginName: ""
nodeIP: ""
nodeName: node1.example.com
podManifestConfig:
  path: "/path/to/pod-manifest-file"
  fileCheckIntervalSeconds: 30
servingInfo:
  bindAddress: 0.0.0.0:10250
  certFile: server.crt
  clientCA: node-client-ca.crt
  keyFile: server.key
volumeDirectory: /root/openshift.local.volumes

This is how the node configuration file looks. Once we have these configuration files in place, we can run the following command to create the master and node servers.

$ openshift start --master-config=/openshift.local.config/master/master-config.yaml --node-config=/openshift.local.config/node-<node_hostname>/node-config.yaml

Managing Nodes

In OpenShift, we have the OC command line utility, which is mostly used for carrying out all the operations in OpenShift. We can use the following commands to manage the nodes.

For listing the nodes
$ oc get nodes
NAME                LABELS
node1.example.com   kubernetes.io/hostname=vklnld1446.int.example.com
node2.example.com   kubernetes.io/hostname=vklnld1447.int.example.com

Describing details about a node
$ oc describe node <node name>

Deleting a node
$ oc delete node <node name>

Listing pods on a node
$ oadm manage-node <node1> <node2> --list-pods [--pod-selector=<pod_selector>] [-o json|yaml]

Evacuating pods from a node
$ oadm manage-node <node1> <node2> --evacuate --dry-run [--pod-selector=<pod_selector>]

Configuring Authentication

In the OpenShift master, there is a built-in OAuth server, which can be used for managing authentication. All OpenShift users get a token from this server, which helps them communicate with the OpenShift API.

There are different kinds of authentication levels in OpenShift, which can be configured along with the main configuration file.

Allow all
Deny all
HTPasswd
LDAP
Basic authentication
Request header

While defining the master configuration, we can define the identification policy, where we can define the type of policy that we wish to use.

Allow All

This will allow access for any username and password.

oauthConfig:
  ...
  identityProviders:
  - name: Allow_Authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider

Deny All

This will deny access to all usernames and passwords.

oauthConfig:
  ...
  identityProviders:
  - name: Deny_Authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: DenyAllPasswordIdentityProvider

HTPasswd

HTPasswd is used to validate the username and password against a flat file of encrypted passwords. For generating an encrypted file, the following is the command.

$ htpasswd </path/to/users.htpasswd> <user_name>

Using the encrypted file.
oauthConfig:
  ...
  identityProviders:
  - name: htpasswd_authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /path/to/users.htpasswd

LDAP Identity Provider

This is used for LDAP authentication, wherein the LDAP server plays a key role in authentication.

oauthConfig:
  ...
  identityProviders:
  - name: "ldap_authentication"
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: ""
      bindPassword: ""
      ca: my-ldap-ca-bundle.crt
      insecure: false
      url: "ldap://ldap.example.com/ou=users,dc=acme,dc=com?uid"

Basic Authentication

This is used when the validation of the username and password is done against a remote server using server-to-server authentication. The authentication is protected behind the base URL and the response is presented in JSON format.

oauthConfig:
  ...
  identityProviders:
  - name: my_remote_basic_auth_provider
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: BasicAuthPasswordIdentityProvider
      url: https://www.vklnld908.int.example.com/remote-idp
      ca: /path/to/ca.file
      certFile: /path/to/client.crt
      keyFile: /path/to/client.key

Configuring a Service Account

Service accounts provide a flexible way of accessing the OpenShift API without exposing the username and password for authentication.

Enabling a Service Account

A service account uses a key pair of public and private keys for authentication. Authentication to the API is done using the private key and validating it against the public key.

ServiceAccountConfig:
  ...
  masterCA: ca.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
  - ...

Creating a Service Account

Use the following command to create a service account.

$ oc create serviceaccount <name of service account>

Working with HTTP Proxy

In most production environments, direct access to the Internet is restricted. The servers are either not exposed to the Internet or they are exposed via an HTTP or HTTPS proxy.
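A common way to deal with this, shown here only as a hedged sketch, is to export the standard proxy environment variables for the master and node services so that builds and image pulls can reach external registries and Git repositories, while cluster-internal traffic bypasses the proxy. The file names below (/etc/sysconfig/openshift-master and /etc/sysconfig/openshift-node) and the host names are assumptions; they depend on how the cluster was installed.

# Assumed locations; adjust to your installation:
# /etc/sysconfig/openshift-master and /etc/sysconfig/openshift-node
HTTP_PROXY=http://<proxy_host>:<proxy_port>
HTTPS_PROXY=https://<proxy_host>:<proxy_port>
NO_PROXY=master.example.com,node1.example.com,172.30.0.0/16

After changing these values, the corresponding master and node services need to be restarted for the settings to take effect.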

OpenShift – Application Scaling

OpenShift – Application Scaling

Autoscaling is a feature in OpenShift where the applications deployed can scale up and down as and when required, as per certain specifications. In OpenShift applications, autoscaling is also known as pod autoscaling. There are two types of application scaling, as follows.

Vertical Scaling

Vertical scaling is all about adding more and more power to a single machine, which means adding more CPU and hard disk. This is an old method of OpenShift which is now not supported by OpenShift releases.

Horizontal Scaling

This type of scaling is useful when there is a need to handle more requests by increasing the number of machines.

In OpenShift, there are two methods to enable the scaling feature.

Using the deployment configuration file
While running the image

Using Deployment Configuration File

In this method, the scaling feature is enabled via a deployment configuration yaml file. For this, the OC autoscale command is used with the minimum and maximum number of replicas, which need to run at any given point of time in the cluster. We need an object definition for the creation of the autoscaler. Following is an example of a pod autoscaler definition file.

apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
  name: database
spec:
  scaleRef:
    kind: DeploymentConfig
    name: database
    apiVersion: v1
    subresource: scale
  minReplicas: 1
  maxReplicas: 10
  cpuUtilization:
    targetPercentage: 80

Once we have the file in place, we need to save it in yaml format and run the following command for deployment.

$ oc create -f <file name>.yaml

While Running the Image

One can also autoscale without the yaml file, by using the following oc autoscale command on the oc command line.

$ oc autoscale dc/database --min 1 --max 5 --cpu-percent=75
deploymentconfig "database" autoscaled

This command will also generate a similar kind of file that can later be used for reference.

Deployment Strategies in OpenShift

A deployment strategy in OpenShift defines a flow of deployment with different available methods. In OpenShift, following are the important types of deployment strategies.

Rolling strategy
Recreate strategy
Custom strategy

Following is an example of a deployment configuration file, which is used mainly for deployment on OpenShift nodes.

kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
  name: "database"
spec:
  template:
    metadata:
      labels:
        name: "Database1"
    spec:
      containers:
      - name: "vipinopenshifttest"
        image: "openshift/mongoDB"
        ports:
        - containerPort: 8080
          protocol: "TCP"
  replicas: 5
  selector:
    name: "database"
  triggers:
  - type: "ConfigChange"
  - type: "ImageChange"
    imageChangeParams:
      automatic: true
      containerNames:
      - "vipinopenshifttest"
      from:
        kind: "ImageStreamTag"
        name: "mongoDB:latest"
  strategy:
    type: "Rolling"

In the above DeploymentConfig file, we have the strategy as Rolling. We can use the following OC command for deployment.

$ oc deploy <deployment_config> --latest

Rolling Strategy

The rolling strategy is used for rolling updates or deployments. This process also supports life-cycle hooks, which are used for injecting code into any deployment process.

strategy:
  type: Rolling
  rollingParams:
    timeoutSeconds: <time in seconds>
    maxSurge: "<definition in %>"
    maxUnavailable: "<definition in %>"
    pre: {}
    post: {}

Recreate Strategy

This deployment strategy has some of the basic features of the rolling deployment strategy and it also supports life-cycle hooks.

strategy:
  type: Recreate
  recreateParams:
    pre: {}
    mid: {}
    post: {}

Custom Strategy

This is very helpful when one wishes to provide their own deployment process or flow. All the customizations can be done as per the requirement.

strategy:
  type: Custom
  customParams:
    image: organization/mongoDB
    command: [ "ls -l", "$HOME" ]
    environment:
    - name: VipinOpenshifttest
      value: Dev1
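The life-cycle hooks mentioned for the Rolling and Recreate strategies are defined inside the strategy parameters themselves. The following is a minimal sketch, reusing the vipinopenshifttest container from the deployment configuration above; the pre hook runs a command in a new pod before the rollout starts, and failurePolicy decides what happens if the hook fails.

strategy:
  type: Rolling
  rollingParams:
    pre:
      failurePolicy: Abort
      execNewPod:
        containerName: vipinopenshifttest
        command: [ "/bin/sh", "-c", "echo running pre-deployment checks" ]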

OpenShift – Home

OpenShift Tutorial

OpenShift is a cloud development Platform as a Service (PaaS) developed by Red Hat. It is an open source development platform, which enables developers to develop and deploy their applications on cloud infrastructure. It is very helpful in developing cloud-enabled services. This tutorial will help you understand OpenShift and how it can be used in the existing infrastructure. All the examples and code snippets used in this tutorial are tested and working code, which can simply be used in any OpenShift setup by changing the currently defined names and variables.

Audience

This tutorial has been prepared for those who want to understand the features and functionalities of OpenShift and learn how it can help in building cloud-enabled services and applications. After completing this tutorial, readers will be at a moderate level of understanding of OpenShift and its key building blocks. It will also give a fair idea on how to configure OpenShift in a preconfigured infrastructure and use it.

Prerequisites

Readers who want to understand and learn OpenShift should have a basic knowledge of Docker and Kubernetes. Readers also need to have some understanding of system administration, infrastructure, and network protocol communication.

OpenShift – Security

OpenShift – Security

OpenShift security is mainly a combination of two components that handle the security constraints.

Security Context Constraints (SCC)
Service Account

Security Context Constraints (SCC)

It is basically used for pod restriction, which means it defines the limitations for a pod, as in what actions it can perform and what all things it can access in the cluster.

OpenShift provides a set of predefined SCCs that can be used, modified, and extended by the administrator.

$ oc get scc
NAME               PRIV    CAPS   HOSTDIR   SELINUX     RUNASUSER          FSGROUP    SUPGROUP   PRIORITY
anyuid             false   []     false     MustRunAs   RunAsAny           RunAsAny   RunAsAny   10
hostaccess         false   []     true      MustRunAs   MustRunAsRange     RunAsAny   RunAsAny   <none>
hostmount-anyuid   false   []     true      MustRunAs   RunAsAny           RunAsAny   RunAsAny   <none>
nonroot            false   []     false     MustRunAs   MustRunAsNonRoot   RunAsAny   RunAsAny   <none>
privileged         true    []     true      RunAsAny    RunAsAny           RunAsAny   RunAsAny   <none>
restricted         false   []     false     MustRunAs   MustRunAsRange     RunAsAny   RunAsAny   <none>

If one wishes to use any pre-defined SCC, that can be done by simply adding the user or the group to the SCC.

$ oadm policy add-user-to-scc <scc_name> <user_name>
$ oadm policy add-group-to-scc <scc_name> <group_name>

Service Account

Service accounts are basically used to control access to the OpenShift master API, which gets called when a command or a request is fired from any of the master or node machines.

Any time an application or a process requires a capability that is not granted by the restricted SCC, you will have to create a specific service account and add the account to the respective SCC. However, if an SCC does not suit your requirement, then it is better to create a new SCC specific to your requirement rather than using the one that is the closest fit. In the end, set it for the deployment configuration.

$ oc create serviceaccount Cadmin
$ oc adm policy add-scc-to-user vipin -z Cadmin

Container Security

In OpenShift, security of containers is based on the concept of how secure the container platform is and where the containers are running. There are multiple things that come into the picture when we talk about container security and what needs to be taken care of.

Image Provenance − A secure labeling system is in place that identifies exactly and incontrovertibly where the containers running in the production environment came from.

Security Scanning − An image scanner automatically checks all the images for known vulnerabilities.

Auditing − The production environment is regularly audited to ensure all containers are based on up-to-date containers, and both hosts and containers are securely configured.

Isolation and Least Privilege − Containers run with the minimum resources and privileges needed to function effectively. They are not able to unduly interfere with the host or other containers.

Runtime Threat Detection − A capability that detects active threats against a containerized application at runtime and automatically responds to them.

Access Controls − Linux security modules, such as AppArmor or SELinux, are used to enforce access controls.

There are a few key methods by which container security is achieved.

Controlling access via OAuth
Via the self-service web console
By certificates of the platform

Controlling Access via OAuth

In this method, authentication to API control access is achieved by getting a secured token for authentication via the OAuth server, which comes inbuilt in the OpenShift master machine.
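For example, once logged in with oc, the token issued by the built-in OAuth server can be displayed and reused for direct API calls. The following is only an illustrative sketch; the master URL is the placeholder host name used elsewhere in this tutorial.

For displaying the token of the currently logged-in user
$ oc whoami -t

For calling the OpenShift API directly with that token
$ curl -k -H "Authorization: Bearer <token>" https://vklnld908.int.example.com:8443/oapi/v1/projects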
As an administrator, you have the capability to modify the OAuth server configuration. For more details on OAuth server configuration, refer to Chapter 5 of this tutorial.

Via Self-Service Web Console

This web console security feature is inbuilt in the OpenShift web console. This console ensures that all the teams working together do not have access to other environments without authentication. The multi-tenant master in OpenShift has the following security features −

The TLS layer is enabled
Uses X.509 certificates for authentication
Secures the etcd configuration on the master machine

By Certificates of Platform

In this method, certificates for each host are configured during installation via Ansible. As it uses the HTTPS communication protocol via the REST API, we need a TLS-secured connection to the different components and objects. These are pre-defined certificates; however, one can even have a custom certificate installed on the master cluster for access. During the initial setup of the master, custom certificates can be configured by overriding the existing certificates using the openshift_master_overwrite_named_certificates parameter.

Example

openshift_master_named_certificates = [{"certfile": "/path/on/host/to/master.crt", "keyfile": "/path/on/host/to/master.key", "cafile": "/path/on/host/to/mastercert.crt"}]

For more detail on how to generate custom certificates, visit the following link −
https://www.linux.com/learn/creating-self-signed-ssl-certificates-apache-linux

Network Security

In OpenShift, Software Defined Networking (SDN) is used for communication. A network namespace is used for each pod in the cluster, wherein each pod gets its own IP and a range of ports to receive network traffic on. By this method, one can isolate pods so that they cannot communicate with pods in another project.

Isolating a Project

This can be done by the cluster admin using the following oadm command from the CLI.

$ oadm pod-network isolate-projects <project name 1> <project name 2>

This means that the projects defined above cannot communicate with other projects in the cluster.

Volume Security

Volume security clearly means securing the PV and PVC of projects in an OpenShift cluster. There are mainly four sections to control access to volumes in OpenShift.

Supplemental Groups
fsGroup
runAsUser
seLinuxOptions

Supplemental Groups − Supplemental groups are regular Linux groups. When a process runs in the system, it runs with a user ID and a group ID. These groups are used for controlling access to shared storage.

Check the NFS mount using the following command.

# showmount -e <nfs-server-ip-or-hostname>
Export list for f21-nfs.vm:
/opt/nfs *

Check the NFS details on the mount server using the following commands.

# cat /etc/exports
/opt/nfs *(rw,sync,no_root_squash)
...

# ls -lZ /opt/nfs -d
drwxrws---. nfsnobody 2325 unconfined_u:object_r:usr_t:s0 /opt/nfs

# id nfsnobody
uid=65534(nfsnobody) gid=454265(nfsnobody) groups=454265(nfsnobody)

The /opt/nfs/ export is accessible by UID 454265 and the group 2325.

apiVersion: v1
kind: Pod
...
spec:
  containers:
  - name: ...
    volumeMounts:
    - name: nfs
      mountPath: /usr/share/...
  securityContext:
    supplementalGroups: [2325]
  volumes:
  - name: nfs
    nfs:
      server: <nfs_server_ip_or_host>
      path: /opt/nfs

fsGroup

fsGroup stands for the file system group, which is used for adding container supplemental groups.

OpenShift – Useful Resources

OpenShift – Useful Resources

The following resources contain additional information on OpenShift. Please use them to get more in-depth knowledge on this topic.

Useful Video Courses

Learn Openshift 4.X with Live Hands-On Katakoda − 71 Lectures, 4 hours − Cloud Passion
MASTER Redhat Openshift – The Container Orchestration − 20 Lectures, 1 hour − Pranjal Srivastava
Kubernetes and Openshift Masterclass − 27 Lectures, 1.5 hours − Pranjal Srivastava
DevOps Advance Course – From Theory to Practice − 31 Lectures, 3.5 hours − Jadranko Kovacec
Openshift Video Course − 18 Lectures, 48 mins − Pedro Planas
Ansible AWX Web-UI and API By 10+ Examples − 12 Lectures, 2 hours − Luca Berton

OpenShift – Discussion

Discuss OpenShift

OpenShift is a cloud development Platform as a Service (PaaS) developed by Red Hat. It is an open source development platform, which enables developers to develop and deploy their applications on cloud infrastructure. It is very helpful in developing cloud-enabled services. This tutorial will help you understand OpenShift and how it can be used in the existing infrastructure. All the examples and code snippets used in this tutorial are tested and working code, which can simply be used in any OpenShift setup by changing the currently defined names and variables.

OpenShift – CLI

OpenShift – CLI

OpenShift CLI is used for managing OpenShift applications from the command line. OpenShift CLI has the capability to manage the end-to-end application life cycle. In general, we use OC, which is the OpenShift client, to communicate with OpenShift.

OpenShift CLI Setup

In order to set up the OC client on a different operating system, we need to go through a different sequence of steps.

OC Client for Windows

Step 1 − Download the oc cli from the following link https://github.com/openshift/origin/releases/tag/v3.6.0-alpha.2

Step 2 − Unzip the package on a target path on the machine.

Step 3 − Edit the path environment variable of the system.

C:\Users\xxxxxxxxxxxxxxxx>echo %PATH%
C:\oraclexe\app\oracle\product\10.2.0\server\bin;C:\Program Files (x86)\Intel\iCLS Client;
C:\Program Files\Intel\iCLS Client;C:\Program Files (x86)\AMD APP\bin\x86_64;
C:\Program Files (x86)\AMD APP\bin\x86;C:\Windows\system32;C:\Windows;
C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;
C:\Program Files (x86)\Windows Live\Shared;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;
C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;
C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;
C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;

Step 4 − Validate the OC setup on Windows.

C:\openshift-origin-client-tools-v3.6.0-alpha.2-3c221d5-windows>oc version
oc v3.6.0-alpha.2+3c221d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth

OC Client for Mac OS X

We can download the Mac OS setup binaries from the same location as for Windows, later unzip them at a location, and set the path of the executable under the environment PATH variable.

Alternatively, we can use Homebrew and set it up using the following command.

$ brew install openshift-cli

OC Client for Linux

Under the same page, we have the tar file for Linux installation that can be used for installation. Later, a path variable can be set pointing to that particular executable location.

https://github.com/openshift/origin/releases/tag/v3.6.0-alpha.2

Unpack the tar file using the following command.

$ tar -xf <path to the OC setup tar file>

Run the following command to check the authentication.

C:\openshift-origin-client-tools-v3.6.0-alpha.2-3c221d5-windows>oc login
Server [https://localhost:8443]:

CLI Configuration Files

The OC CLI configuration file is used for managing multiple OpenShift server connections and authentication mechanisms. This configuration file is also used for storing and managing multiple profiles and for switching between them. A normal configuration file looks like the following.
$ oc config view
apiVersion: v1
clusters:
- cluster:
    server: https://vklnld908.int.example.com
  name: openshift
contexts:
- context:
    cluster: openshift
    namespace: testproject
    user: alice
  name: alice
current-context: alice
kind: Config
preferences: {}
users:
- name: vipin
  user:
    token: ZCJKML2365jhdfafsdj797GkjgjGKJKJGjkg232

Setting Up CLI Client

For setting user credentials
$ oc config set-credentials <user_nickname> [--client-certificate=<path/to/certfile>] [--client-key=<path/to/keyfile>] [--token=<bearer_token>] [--username=<basic_user>] [--password=<basic_password>]

For setting a cluster
$ oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]

Example
$ oc config set-credentials vipin --token=ZCJKML2365jhdfafsdj797GkjgjGKJKJGjkg232

For setting a context
$ oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]

CLI Profiles

In a single CLI configuration file, we can have multiple profiles, wherein each profile has a different OpenShift server configuration, which can later be used for switching between different CLI profiles.

apiVersion: v1
clusters:                                                  --→ 1
- cluster:
    insecure-skip-tls-verify: true
    server: https://vklnld908.int.example.com:8443
  name: vklnld908.int.example.com:8443
- cluster:
    insecure-skip-tls-verify: true
    server: https://vklnld1446.int.example.com:8443
  name: vklnld1446.int.example.com:8443
contexts:                                                  --→ 2
- context:
    cluster: vklnld908.int.example.com:8443
    namespace: openshift-project
    user: vipin/vklnld908.int.example.com:8443
  name: openshift-project/vklnld908.int.example.com:8443/vipin
- context:
    cluster: vklnld908.int.example.com:8443
    namespace: testing-project
    user: alim/vklnld908.int.example.com:8443
  name: testproject-project/openshift1/alim
current-context: testing-project/vklnld908.int.example.com:8443/vipin   --→ 3
kind: Config
preferences: {}
users:                                                     --→ 4
- name: vipin/vklnld908.int.example.com:8443
  user:
    token: ZCJKML2365jhdfafsdj797GkjgjGKJKJGjkg232

In the above configuration, we can see it is divided into four main sections, starting with clusters, which defines two instances of OpenShift master machines. The second section, contexts, defines two contexts named vipin and alim. The current-context defines which context is currently in use; it can be changed to another context or profile if we change the definition here. Finally, the user definition and its authentication token are defined, which in our case is vipin.

If we want to check the current profile in use, it can be done using −

$ oc status
In project testing Project (testing-project)

$ oc project
Using project "testing-project" from context named "testing-project/vklnld908.int.example.com:8443/vipin" on server "https://vklnld908.int.example.com:8443".

If we want to switch to another CLI profile, it can be done from the command line using the following command.

$ oc project openshift-project
Now using project "Openshift-project" on server "https://vklnld908.int.example.com:8443".

Using the above command, we can switch between profiles. At any point of time, if we wish to view the configuration, we can use the $ oc config view command.
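Apart from switching projects with oc project, the same configuration file also allows switching at the context level using oc config use-context, which simply updates the current-context field shown above. A brief sketch, using the context name from the sample file in this section −

$ oc config use-context openshift-project/vklnld908.int.example.com:8443/vipin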

OpenShift – Docker and Kubernetes

OpenShift – Docker and Kubernetes

OpenShift is built on top of Docker and Kubernetes. All the containers are built on top of a Docker cluster, which is basically a Kubernetes service on top of Linux machines, using the Kubernetes orchestration feature. In this process, we build a Kubernetes master which controls all the nodes and deploys the containers to all the nodes. The main function of Kubernetes is to control the OpenShift cluster and deployment flow using different kinds of configuration files. As in Kubernetes, we use kubectl in the same way we use the OC command line utility to build and deploy containers on cluster nodes.

Following are the different kinds of config files used for creation of different kinds of objects in the cluster.

Images
POD
Service
Replication Controller
Replica Set
Deployment

Images

Kubernetes (Docker) images are the key building blocks of containerized infrastructure. As of now, Kubernetes only supports Docker images. Each container in a pod has its Docker image running inside it.

apiVersion: v1
kind: Pod
metadata:
  name: Tesing_for_Image_pull -----------> 1
spec:
  containers:
  - name: neo4j-server ------------------> 2
    image: <Name of the Docker image> ---> 3
    imagePullPolicy: Always -------------> 4
    command: ["echo", "SUCCESS"] --------> 5

POD

A pod is a collection of containers and their storage inside a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it. Following is an example of keeping a database container and a web interface container in the same pod.

apiVersion: v1
kind: Pod
metadata:
  name: Tomcat
spec:
  containers:
  - name: Tomcat
    image: tomcat:8.0
    ports:
    - containerPort: 7500
    imagePullPolicy: Always

Service

A service can be defined as a logical set of pods. It can be defined as an abstraction on top of the pod that provides a single IP address and DNS name by which pods can be accessed. With Service, it is very easy to manage the load balancing configuration. It helps PODs to scale very easily.

apiVersion: v1
kind: Service
metadata:
  name: Tutorial_point_service
spec:
  ports:
  - port: 8080
    targetPort: 31999

Replication Controller

Replication Controller is one of the key features of Kubernetes, which is responsible for managing the pod lifecycle. It is responsible for making sure that the specified number of pod replicas is running at any point of time.

apiVersion: v1
kind: ReplicationController
metadata:
  name: Tomcat-ReplicationController
spec:
  replicas: 3
  template:
    metadata:
      name: Tomcat-ReplicationController
      labels:
        app: App
        component: neo4j
    spec:
      containers:
      - name: Tomcat
        image: tomcat:8.0
        ports:
        - containerPort: 7474

Replica Set

The replica set ensures how many replicas of a pod should be running. It can be considered as a replacement of the replication controller.

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: Tomcat-ReplicaSet
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: Backend
    matchExpressions:
    - { key: tier, operator: In, values: [Backend] }
  template:
    metadata:
      labels:
        app: App
        component: neo4j
    spec:
      containers:
      - name: Tomcat
        image: tomcat:8.0
        ports:
        - containerPort: 7474

Deployment

Deployments are upgraded and higher versions of the replication controller. They manage the deployment of replica sets, which are also an upgraded version of the replication controller. They have the capability to update the replica set and they are also capable of rolling back to the previous version.
apiVersion: extensions/v1beta1 ----> 1
kind: Deployment ------------------> 2
metadata:
  name: Tomcat-ReplicaSet
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: Tomcat-ReplicaSet
        tier: Backend
    spec:
      containers:
      - name: Tomcat
        image: tomcat:8.0
        ports:
        - containerPort: 7474

All the config files above can be used to create their respective Kubernetes objects.

$ kubectl create -f <file name>.yaml

The following commands can be used to know the details and description of the Kubernetes objects.

For POD
$ kubectl get pod <pod name>
$ kubectl delete pod <pod name>
$ kubectl describe pod <pod name>

For Replication Controller
$ kubectl get rc <rc name>
$ kubectl delete rc <rc name>
$ kubectl describe rc <rc name>

For Service
$ kubectl get svc <svc name>
$ kubectl delete svc <svc name>
$ kubectl describe svc <svc name>

For more details on how to work with Docker and Kubernetes, please visit our Kubernetes tutorial.
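Since deployments are capable of rolling back to a previous version, the rollout sub-commands can be used to watch and undo a rollout. A brief sketch with placeholder names, in the same style as the commands above −

For checking the status of an ongoing rollout
$ kubectl rollout status deployment/<deployment name>

For rolling back a deployment to its previous revision
$ kubectl rollout undo deployment/<deployment name>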

OpenShift – Quick Guide

OpenShift – Quick Guide

OpenShift – Overview

OpenShift is a cloud development Platform as a Service (PaaS) hosted by Red Hat. It is an open source, cloud-based, user-friendly platform used to create, test, and run applications, and finally deploy them on the cloud.

OpenShift is capable of managing applications written in different languages, such as Node.js, Ruby, Python, Perl, and Java. One of the key features of OpenShift is that it is extensible, which helps the users support applications written in other languages.

OpenShift comes with various concepts of virtualization as its abstraction layer. The underlying concept behind OpenShift is based on virtualization.

Virtualization

In general, virtualization can be defined as the creation of a virtual system rather than a physical or actual version of anything, starting from a system, storage, or an operating system. The main goal of virtualization is to make the IT infrastructure more scalable and reliable. The concept of virtualization has been in existence for decades, and with the evolution of the IT industry today, it can be applied to a wide range of layers, from system level and hardware level to server level virtualization.

How It Works

It can be described as a technology in which any application or operating system is abstracted from its actual physical layer. One key use of virtualization technology is server virtualization, which uses a software called a hypervisor to abstract the layer from the underlying hardware. The performance of an operating system running on virtualization is as good as when it is running on the physical hardware. However, the concept of virtualization is popular as most of the systems and applications running do not require the use of the underlying hardware.

Physical vs Virtual Architecture

Types of Virtualization

Application Virtualization − In this method, the application is abstracted from the underlying operating system. This method is very useful as the application can be run in isolation without being dependent on the operating system underneath.

Desktop Virtualization − This method is used to reduce the workstation load, in which one can access the desktop remotely, using a thin client at the desk. In this method, the desktops are mostly run in a datacenter. A classic example is a Virtual Desktop Image (VDI), which is used in most organizations.

Data Virtualization − It is a method of abstracting and moving away from the traditional method of data and data management.

Server Virtualization − In this method, server-related resources are virtualized, which includes the physical server, process, and operating system. The software which enables this abstraction is often referred to as the hypervisor.

Storage Virtualization − It is the process of pooling multiple storage devices into a single storage device that is managed from a single central console.

Network Virtualization − It is the method in which all available network resources are combined by splitting up the available bandwidth and channels, each of which is independent of the others.

OpenShift

OpenShift is a cloud-enabled application Platform as a Service (PaaS). It is an open source technology which helps organizations move their traditional application infrastructure and platform from physical and virtual mediums to the cloud.

OpenShift supports a very large variety of applications, which can be easily developed and deployed on the OpenShift cloud platform. OpenShift basically supports three kinds of platforms for developers and users.

Infrastructure as a Service (IaaS)

In this format, the service provider provides hardware-level virtual machines with some pre-defined virtual hardware configurations. There are multiple competitors in this space, starting from AWS, Google Cloud, Rackspace, and many more.

The main drawback of IaaS, after a long procedure of setup and investment, is that one is still responsible for installing and maintaining the operating system and server packages, managing the network of the infrastructure, and taking care of the basic system administration.

Software as a Service (SaaS)

With SaaS, one has the least worry about the underlying infrastructure. It is as simple as plug and play, wherein the user just has to sign up for the services and start using them. The main drawback with this setup is that one can only perform a minimal amount of customization, which is allowed by the service provider. One of the most common examples of SaaS is Gmail, where the user just needs to log in and start using it. The user can also make some minor modifications to his account. However, it is not very useful from the developer's point of view.

Platform as a Service (PaaS)

It can be considered as a middle layer between SaaS and IaaS. The primary target of PaaS is developers, for whom the development environment can be spun up with a few commands. These environments are designed in such a way that they can satisfy all the development needs, right from having a web application server with a database. To do this, you just require a single command and the service provider does the stuff for you.

Why Use OpenShift?

OpenShift provides a common platform for enterprise units to host their applications on the cloud without worrying about the underlying operating system. This makes it very easy to use, develop, and deploy applications on the cloud. One of the key features is that it provides managed hardware and network resources for all kinds of development and testing. With OpenShift, PaaS developers have the freedom to design their required environment with specifications.

OpenShift provides different kinds of service level agreements when it comes to service plans.

Free − This plan is limited to three gears with 1GB space for each.

Bronze − This plan includes 3 gears and expands up to 16 gears with 1GB space per gear.

Silver − This is the 16-gear plan of Bronze; however, it has a storage capacity of 6GB with no additional cost.

Other than the above features, OpenShift also offers an on-premises version known as OpenShift Enterprise. In OpenShift, developers have the leverage to design scalable and non-scalable applications, and these designs are implemented using HAProxy servers.

Features

There are multiple