OpenShift security is mainly handled by a combination of two components that enforce security constraints.
- Security Context Constraints (SCC)
- Service Account
Security Context Constraints (SCC)
It is basically used for pod restriction, which means it defines the limitations of a pod in terms of the actions it can perform and the resources it can access in the cluster.
OpenShift provides a set of predefined SCC that can be used, modified, and extended by the administrator.
$ oc get scc
NAME               PRIV    CAPS   HOSTDIR   SELINUX     RUNASUSER          FSGROUP    SUPGROUP   PRIORITY
anyuid             false   []     false     MustRunAs   RunAsAny           RunAsAny   RunAsAny   10
hostaccess         false   []     true      MustRunAs   MustRunAsRange     RunAsAny   RunAsAny   <none>
hostmount-anyuid   false   []     true      MustRunAs   RunAsAny           RunAsAny   RunAsAny   <none>
nonroot            false   []     false     MustRunAs   MustRunAsNonRoot   RunAsAny   RunAsAny   <none>
privileged         true    []     true      RunAsAny    RunAsAny           RunAsAny   RunAsAny   <none>
restricted         false   []     false     MustRunAs   MustRunAsRange     RunAsAny   RunAsAny   <none>
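Each predefined SCC can also be inspected in detail to see exactly which rules it enforces before assigning it to anyone; a minimal sketch, using restricted only as an illustrative example.
$ oc describe scc restricted    # show the rules enforced by the restricted SCC
$ oc edit scc restricted        # modify a predefined SCC (administrator only)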
If one wishes to use any of the pre-defined SCCs, that can be done by simply adding the user or the group to the SCC.
$ oadm policy add-user-to-scc <scc_name> <user_name>
$ oadm policy add-group-to-scc <scc_name> <group_name>
Service Account
Service accounts are basically used to control access to the OpenShift master API, which gets called whenever a command or a request is fired from any of the master or node machines.
Any time an application or a process requires a capability that is not granted by the restricted SCC, you will have to create a specific service account and add the account to the respective SCC. However, if none of the existing SCCs suits your requirement, then it is better to create a new SCC specific to your requirement rather than using the one that is the closest fit. In the end, set the service account in the deployment configuration.
$ oc create serviceaccount Cadmin
$ oc adm policy add-scc-to-user vipin -z Cadmin
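Setting the service account in the deployment configuration is done through its pod template; the following is only a hedged sketch, assuming a hypothetical deployment configuration named frontend that should run its pods as the Cadmin service account created above.
kind: DeploymentConfig
apiVersion: v1
metadata:
  name: frontend                    # hypothetical deployment configuration name
spec:
  template:
    spec:
      serviceAccountName: Cadmin    # pods created by this deployment use the Cadmin service account
      containers:
      - name: frontend
        image: ...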
Container Security
In OpenShift, the security of containers is based on how secure the container platform is and where the containers are running. There are multiple things that come into the picture when we talk about container security and what needs to be taken care of.
Image Provenance − A secure labeling system is in place that identifies exactly and incontrovertibly where the containers running in the production environment came from.
Security Scanning − An image scanner automatically checks all the images for known vulnerabilities.
Auditing − The production environment is regularly audited to ensure all containers are based on up-to-date containers, and both hosts and containers are securely configured.
Isolation and Least Privilege − Containers run with the minimum resources and privileges needed to function effectively. They are not able to unduly interfere with the host or other containers.
Runtime Threat Detection − A capability that detects active threats against containerized applications at runtime and automatically responds to them.
Access Controls − Linux security modules, such as AppArmor or SELinux, are used to enforce access controls.
There are a few key methods by which container security is achieved.
- Controlling access via OAuth
- Via self-service web console
- By certificates of the platform
Controlling Access via OAuth
In this method, authentication for API access is achieved by obtaining a secured token for authentication via the OAuth server, which comes built in on the OpenShift master machine. As an administrator, you have the capability to modify the OAuth server configuration.
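As an illustration, the OAuth server is configured in the master configuration file (typically /etc/origin/master/master-config.yaml); the identity provider below is only a hedged sketch using the HTPasswd provider, with the provider name and file path assumed for the example.
oauthConfig:
  identityProviders:
  - name: my_htpasswd_provider           # illustrative provider name
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/master/htpasswd   # assumed path to the htpasswd file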
For more details on OAuth server configuration, refer to Chapter 5 of this tutorial.
Via Self-Service Web Console
This web console security feature is built into the OpenShift web console. This console ensures that all the teams working together do not have access to other environments without authentication. The multi-tenant master in OpenShift has the following security features −
- TLS layer is enabled
- Uses X.509 certificates for authentication
- Secures the etcd configuration on the master machine
By Certificates of Platform
In this method, certificates for each host are configured during installation via Ansible. As it uses the HTTPS communication protocol via the REST API, we need a TLS-secured connection to different components and objects. These are pre-defined certificates; however, one can even have a custom certificate installed on the master cluster for access. During the initial setup of the master, custom certificates can be configured by overriding the existing certificates using the openshift_master_overwrite_named_certificates parameter.
Example
openshift_master_named_certificates = [{"certfile": "/path/on/host/to/master.crt", "keyfile": "/path/on/host/to/master.key", "cafile": "/path/on/host/to/mastercert.crt"}]
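If certificates have already been deployed by an earlier run, the overwrite parameter mentioned above has to be set as well; a minimal inventory sketch using the variable name as exposed by the OpenShift 3 Ansible installer.
openshift_master_overwrite_named_certificates = true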
For more detail on how to generate custom certificates, visit the following link −
https://www.linux.com/learn/creating-self-signed-ssl-certificates-apache-linux
Network Security
In OpenShift, Software Defined Networking (SDN) is used for communication. A network namespace is used for each pod in the cluster, wherein each pod gets its own IP address and a range of ports to receive network traffic. By this method, one can isolate pods so that they cannot communicate with pods in another project.
Isolating a Project
This can be done by the cluster admin using the following oadm command from the CLI.
$ oadm pod-network isolate-projects <project name 1> <project name 2>
This means that the projects defined above cannot communicate with other projects in the cluster.
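Conversely, isolated projects can later be joined to each other again or be given cluster-wide access; the following commands are a sketch of the related oadm pod-network options, with the project names shown as placeholders.
$ oadm pod-network join-projects --to=<project name 1> <project name 2>
$ oadm pod-network make-projects-global <project name 1>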
Volume Security
Volume security basically means securing the PV and PVC of projects in an OpenShift cluster. There are mainly four sections to control access to volumes in OpenShift.
- Supplemental Groups
- fsGroup
- runAsUser
- seLinuxOptions
Supplemental Groups − Supplemental groups are regular Linux groups. When a process runs in the system, it runs with a user ID and group ID. These groups are used for controlling access to shared storage.
Check the NFS mount using the following command.
# showmount -e <nfs-server-ip-or-hostname>
Export list for f21-nfs.vm:
/opt/nfs *
Check NFS details on the mount server using the following command.
# cat /etc/exports
/opt/nfs *(rw,sync,no_root_squash)
...
# ls -lZ /opt/nfs -d
drwxrws---. nfsnobody 2325 unconfined_u:object_r:usr_t:s0 /opt/nfs
# id nfsnobody
uid = 65534(nfsnobody) gid = 454265(nfsnobody) groups = 454265(nfsnobody)
The /opt/nfs/ export is accessible by UID 454265 and the group 2325.
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - name: ...
    volumeMounts:
    - name: nfs
      mountPath: /usr/share/...
  securityContext:
    supplementalGroups: [2325]
  volumes:
  - name: nfs
    nfs:
      server: <nfs_server_ip_or_host>
      path: /opt/nfs
fsGroup
fsGroup stands for the file system group, which is used for adding container supplemental groups. The supplemental group ID is used for shared storage, while fsGroup is used for block storage.
kind: Pod
spec:
  containers:
  - name: ...
  securityContext:
    fsGroup: 2325
runAsUser
runAsUser specifies the user ID with which the container processes run. It is used when defining the container in the pod definition. A single user ID can be used for all containers, if required.
While running the container, the defined ID is matched with the owner ID on the export. If the ID is specified at the pod level, then it becomes common to all the containers in the pod. If it is specified for a particular container, then it applies only to that single container.
spec:
  containers:
  - name: ...
    securityContext:
      runAsUser: 454265
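As a contrast to the container-level form above, the same ID can also be set once at the pod level so that it applies to every container in the pod; a minimal sketch.
spec:
  securityContext:
    runAsUser: 454265    # applies to all containers in this pod
  containers:
  - name: ...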