OpenShift provides two installation methods for setting up an OpenShift cluster −
- Quick installation method
- Advanced configuration method
Setting Up Cluster
Quick Installation Method
This method is used for running a quick cluster setup with minimal configuration effort. In order to use this method, we need to first install the installer, after which the setup can be run in either interactive or unattended mode.
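The quick installer is shipped in the atomic-openshift-utils package (assuming an RHEL-based host with the appropriate OpenShift repositories enabled), so a typical way to get it in place is −
$ yum install atomic-openshift-utils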
Interactive method
$ atomic-openshift-installer install
This is useful when one wishes to run an interactive setup.
Unattended installation method
This method is used when one wishes to set up an unattended installation, wherein the user can define a configuration YAML file and place it under ~/.config/openshift/ with the name installer.cfg.yml. Then, the following command can be run with the -u flag to perform the installation.
$ atomic-openshift-installer -u install
By default, it uses the config file located under ~/.config/openshift/. Ansible, on the other hand, is used as the backend for the installation. A sample installer.cfg.yml file looks like the following.
version: v2
variant: openshift-enterprise
variant_version: 3.1
ansible_log_path: /tmp/ansible.log
deployment:
  ansible_ssh_user: root
  hosts:
  - ip: 172.10.10.1
    hostname: vklnld908.int.example.com
    public_ip: 24.222.0.1
    public_hostname: master.example.com
    roles:
      - master
      - node
    containerized: true
    connect_to: 24.222.0.1
  - ip: 172.10.10.2
    hostname: vklnld1446.int.example.com
    public_ip: 24.222.0.2
    public_hostname: node1.example.com
    roles:
      - node
    connect_to: 10.0.0.2
  - ip: 172.10.10.3
    hostname: vklnld1447.int.example.com
    public_ip: 24.222.0.3
    public_hostname: node2.example.com
    roles:
      - node
    connect_to: 10.0.0.3
  roles:
    master:
      <variable_name1>: "<value1>"
      <variable_name2>: "<value2>"
    node:
      <variable_name1>: "<value1>"
Here, we have role-specific variables, which can be defined if one wishes to set some specific variable for a particular role.
Once done, we can verify the installation using the following command.
$ oc get nodes
NAME                 STATUS    AGE
master.example.com   Ready     10d
node1.example.com    Ready     10d
node2.example.com    Ready     10d
Advanced Installation
Advanced installation is completely based on Ansible configuration, wherein the complete host configuration and the variable definitions regarding the master and node configuration are present in an inventory file. This contains all the details regarding the configuration.
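A minimal inventory sketch for such a setup might look as follows; the host names, labels, and the deployment type are placeholders, and the exact variable names depend on the openshift-ansible version in use.
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"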
Once we have the setup and the playbook is ready, we can simply run the following command to set up the cluster.
$ ansible-playbook -i inventory/hosts ~/openshift-ansible/playbooks/byo/config.yml
Adding Hosts to a Cluster
We can add a host to the cluster using −
- Quick installer tool
- Advanced configuration method
The quick installer tool works in both interactive and non-interactive modes. Use the following command.
$ atomic-openshift-installer -u -c </path/to/file> scaleup
The format of the scale-up configuration file is the same as the one used for installation, and it can be used for adding both masters as well as nodes, as shown in the sketch below.
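As a rough sketch, assuming the quick installer picks up new hosts appended to the existing ~/.config/openshift/installer.cfg.yml, an additional node entry might look like this (the IP addresses and host names are illustrative placeholders) −
deployment:
  hosts:
  # existing hosts from the initial installation stay as they are
  - ip: 172.10.10.4
    hostname: vklnld1448.int.example.com
    public_ip: 24.222.0.4
    public_hostname: node3.example.com
    roles:
      - node
    connect_to: 10.0.0.4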
Advanced Configuration Method
In this method, we update the Ansible hosts file and then add the new node or server details to it. The configuration file looks like the following.
[OSEv3:children]
masters
nodes
new_nodes
new_masters
In the same Ansible hosts file, add variable details regarding the new node as shown below.
[new_nodes]
vklnld1448.int.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
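If a new master is being added as well, the corresponding group can be populated in the same way; the host name below is only an illustrative placeholder, and the master scale-up playbook shipped with openshift-ansible would then be used instead of the node one.
[new_masters]
vklnld909.int.example.com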
Finally, using the updated hosts file, run the scale-up playbook to get the setup done using the following command.
$ ansible-playbook -i /inventory/hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
Managing Cluster Logs
OpenShift cluster logs are nothing but the logs which are generated from the master and the node machines of the cluster. These can be any kind of log, starting from server logs, master logs, container logs, pod logs, etc. There are multiple technologies and applications available for container log management.
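Before deploying any dedicated stack, individual logs can already be inspected with the built-in tooling; the pod name and the systemd unit names below are illustrative and assume a standard OpenShift 3.x installation.
$ oc logs <pod-name>                          # container/pod logs
$ oc logs -f <pod-name>                       # follow the log stream
$ journalctl -r -u atomic-openshift-master    # master service logs
$ journalctl -r -u atomic-openshift-node      # node service logs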
A few of the tools which can be implemented for log management are listed below.
- Fluentd
- ELK
- Kibana
- Nagios
- Splunk
ELK stack − This stack is useful while trying to collect the logs from all the nodes and present them in a systematic format. The ELK stack is mainly divided into three major components.
ElasticSearch − Mainly responsible for collecting information from all the containers and putting it into a central location.
Fluentd − Used for feeding the collected logs to the Elasticsearch container.
Kibana − A graphical interface used for presenting the collected data as useful information.
One key point to note is that when this system is deployed on the cluster, it starts collecting logs from all the nodes.
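As a rough sketch, on recent openshift-ansible releases the EFK stack can typically be enabled through an inventory variable and deployed with the logging playbook shipped with openshift-ansible; the exact variable and playbook names depend on the release in use, so treat the following as assumptions to verify against the installed version.
[OSEv3:vars]
openshift_logging_install_logging=true

$ ansible-playbook -i inventory/hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml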
Log Diagnostics
OpenShift has an inbuilt oc adm diagnostics command that can be used for analyzing multiple error situations. This tool can be used from the master as a cluster administrator. This utility is very helpful in troubleshooting and diagnosing known problems. It runs against the master, the client, and the nodes.
If run without any arguments or flags, it will look for the configuration files of the client, server, and node machines, and use them for diagnostics. One can run the diagnostics individually by passing the following arguments −
- AggregatedLogging
- AnalyzeLogs
- ClusterRegistry
- ClusterRoleBindings
- ClusterRoles
- ClusterRouter
- ConfigContexts
- DiagnosticPod
- MasterConfigCheck
- MasterNode
- MetricsApiProxy
- NetworkCheck
- NodeConfigCheck
- NodeDefinitions
- ServiceExternalIPs
- UnitStatus
One can simply run them with the following command.
$ oc adm diagnostics <DiagnosticName>
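For example, all the available diagnostics can be run at once, or a subset can be named explicitly; the combination shown here is only an illustration.
$ oc adm diagnostics
$ oc adm diagnostics AnalyzeLogs NodeConfigCheck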
Upgrading a Cluster
Upgradation of the cluster involves upgrading multiple things within the cluster and getting the cluster updated with new components and upgrades. This involves −
- Upgradation of master components
- Upgradation of node components
- Upgradation of policies
- Upgradation of routes
- Upgradation of image stream
In order to perform all these upgrades, we need to first get the quick installer or utils in place. For that, we need to update the following utilities −
- atomic-openshift-utils
- atomic-openshift-excluder
- atomic-openshift-docker-excluder
- etcd package
Before starting the upgrade, we need to back up etcd on the master machine, which can be done using the following commands.
$ ETCD_DATA_DIR=/var/lib/origin/openshift.local.etcd
$ etcdctl backup --data-dir $ETCD_DATA_DIR --backup-dir $ETCD_DATA_DIR.bak.<date>
Upgradation of Master Components
In OpenShift master, we start the upgrade by updating etcd and then moving on to Docker. Finally, we run the excluder again to get the cluster back into the required position. However, before starting the upgrade, we need to first make the atomic-openshift packages available for update on each of the masters. This can be done using the following commands.
Step 1 − Remove the atomic-openshift packages from the yum exclude list.
$ atomic-openshift-excluder unexclude
Step 2 − Upgrade etcd on all the masters.
$ yum update etcd
Step 3 − Restart the service of etcd and check if it has started successfully.
$ systemctl restart etcd
$ journalctl -r -u etcd
Step 4 − Upgrade the Docker package.
$ yum update docker
Step 5 − Restart the Docker service and check if it is correctly up.
$ systemctl restart docker
$ journalctl -r -u docker
Step 6 − Once done, reboot the system with the following commands.
$ systemctl reboot
$ journalctl -r -u docker
Step 7 − Finally, run the excluder to put the packages back on the yum exclude list.
$ atomic-openshift-excluder exclude
There is no compulsion to upgrade the policy; it only needs to be upgraded if recommended, which can be checked with the following command.
$ oadm policy reconcile-cluster-roles
In most of the cases, we don’t need to update the policy definition.
Upgradation of Node Components
Once the master update is complete, we can start upgrading the nodes. One thing to keep in mind is that the period of upgrade should be short in order to avoid any kind of issue in the cluster.
Step 1 − Remove the atomic-openshift packages from the yum exclude list on all the nodes where you wish to perform the upgrade.
$ atomic-openshift-excluder unexclude
Step 2 − Next, disable node scheduling before upgrade.
$ oadm manage-node <node name> --schedulable=false
Step 3 − Drain all the pods from the node being upgraded onto other hosts.
$ oadm drain <node name> --force --delete-local-data --ignore-daemonsets
Step 4 − Upgrade Docker setup on host.
$ yum update docker
Step 5 − Restart the Docker service and then restart the atomic-openshift-node service.
$ systemctl restart docker
$ systemctl restart atomic-openshift-node
Step 6 − Check if both of them started correctly.
$ journalctl -r -u docker
$ journalctl -r -u atomic-openshift-node
Step 7 − After upgrade is complete, reboot the node machine.
$ systemctl reboot
$ journalctl -r -u docker
Step 8 − Re-enable scheduling on nodes.
$ oadm manage-node <node> --schedulable=true
Step 9 − Run the excluder to put the atomic-openshift packages back on the yum exclude list on the node.
$ atomic-openshift-excluder exclude
Step 10 − Finally, check if all the nodes are available.
$ oc get nodes
NAME                 STATUS    AGE
master.example.com   Ready     12d
node1.example.com    Ready     12d
node2.example.com    Ready     12d