Docker – Containers and Shells

By default, when you launch a container, you also supply a shell command, as we did in the earlier chapters when we were working with containers −

sudo docker run -it centos /bin/bash

We used this command to create a new container and then pressed Ctrl+P+Q to detach from it. Detaching ensures that the container keeps running even after we leave it, which we can verify with the docker ps command. If we exited the shell directly instead (for example with the exit command), the container itself would stop.

There is an easier way to attach to containers and leave them cleanly without stopping them. One way of achieving this is by using the nsenter command. Before we can run nsenter, we need to install it. This can be done with the following command, which copies the nsenter binary from the jpetazzo/nsenter image into /usr/local/bin on the host −

docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter

Before we use the nsenter command, we also need the process ID of the container, because nsenter requires it. We can get the process ID via the docker inspect command, filtering its output for the Pid field. For example, we first use the docker ps command to see the running containers; suppose there is one running container with the ID ef42a4c5e663. We then use docker inspect on this container and pipe the output through grep to filter out just the process ID −

sudo docker inspect ef42a4c5e663 | grep Pid

Suppose the output shows that the process ID is 2978. Now that we have the process ID, we can proceed and use the nsenter command to attach to the Docker container.

nsenter
This method allows one to attach to a container and exit it again without stopping the container.
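The Pid lookup can also be done programmatically rather than with grep. The sketch below is a minimal illustration, assuming `docker inspect` output in its usual JSON shape; the container ID and Pid values are the ones from this example, not universal:

```python
import json

# Simulated (heavily truncated) `docker inspect <container>` output.
# A real invocation returns a JSON array with one object per container.
inspect_output = """
[
  {
    "Id": "ef42a4c5e663",
    "State": {
      "Status": "running",
      "Pid": 2978
    }
  }
]
"""

def container_pid(inspect_json: str) -> int:
    """Extract the main process ID from `docker inspect` JSON output."""
    data = json.loads(inspect_json)
    return data[0]["State"]["Pid"]

print(container_pid(inspect_output))  # the PID that nsenter needs, here 2978
```

In a real script you would capture the JSON by running `docker inspect` via a subprocess instead of embedding it as a string.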
Syntax
nsenter -m -u -n -p -i -t <PID> <command>

Options
-m is used to enter the mount namespace
-u is used to enter the UTS namespace
-n is used to enter the network namespace
-p is used to enter the PID namespace
-i is used to enter the IPC namespace
-t is used to specify the target process ID − here, the PID of the container's main process, obtained earlier from docker inspect
command − This is the command to run within the container's namespaces.

Return Value
None

Example
sudo nsenter -m -u -n -p -i -t 2978 /bin/bash

Output
From the output, we can observe the following points −
The prompt changes to the container's bash shell as soon as we issue the nsenter command.
We then issue the exit command. Normally, exiting a shell started directly with docker run would stop the container. But when we leave a shell started with nsenter, the container is still up and running.
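To avoid mistyping the namespace flags, the nsenter command line can be assembled programmatically before being executed. A small sketch, where the PID 2978 is the one from the example above and the helper name is purely illustrative:

```python
def nsenter_argv(pid: int, command: str = "/bin/bash") -> list:
    """Build the nsenter argument vector for entering a container's
    mount, UTS, network, PID, and IPC namespaces."""
    return ["nsenter", "-m", "-u", "-n", "-p", "-i", "-t", str(pid), command]

argv = nsenter_argv(2978)
print(" ".join(argv))
# nsenter -m -u -n -p -i -t 2978 /bin/bash
```

The resulting list could be passed to subprocess.run (with root privileges) to actually attach to the container.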
Docker – Container and Hosts

The good thing about the Docker engine is that it is designed to work on various operating systems. We have already seen the installation on Windows and all the Docker commands on Linux systems. Now let's run the same Docker commands on the Windows OS.

Docker Images
Run the docker images command on the Windows host. It shows that we have two images − ubuntu and hello-world.

Running a Container
Now let's run a container on the Windows Docker host with the docker run command. This shows that we can run the Ubuntu container on a Windows host.

Listing All Containers
Let's list all the containers on the Windows host with the docker ps -a command.

Stopping a Container
Let's now stop a running container on the Windows host with the docker stop command.

So you can see that the Docker engine is consistent across different Docker hosts − it works on Windows in the same way it works on Linux.
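The listing step works identically on both platforms because the text format of `docker ps` is the same. The sketch below parses sample `docker ps` output to pull out the container IDs; the sample row is illustrative, and real output may have different column spacing:

```python
# Simulated `docker ps` output; real output has the same header row.
SAMPLE_PS = """\
CONTAINER ID   IMAGE    COMMAND       CREATED         STATUS         NAMES
ef42a4c5e663   ubuntu   "/bin/bash"   2 minutes ago   Up 2 minutes   quirky_lamarr
"""

def container_ids(ps_output: str) -> list:
    """Return the CONTAINER ID column from `docker ps` text output,
    skipping the header row."""
    lines = ps_output.strip().splitlines()[1:]
    return [line.split()[0] for line in lines]

print(container_ids(SAMPLE_PS))  # ['ef42a4c5e663']
```

Because this only depends on the output text, the same parsing works whether the daemon runs on Windows or Linux.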
Docker – Overview

Currently, Docker accounts for over 32 percent of the containerization technologies market, and this number is only expected to grow. In general, containerization software allows you to run applications without launching an entire virtual machine. Docker makes repetitive and time-consuming configuration tasks redundant, which allows for quick and efficient development of applications both on desktop and cloud environments. However, to get comfortable with Docker, it is important to have a clear understanding of its underlying architecture. In this chapter, let's take an overview of Docker and understand how its various components work and interact with each other.

What is Docker?
Docker is an open-source platform for developing, delivering, and running applications. It makes it easier to decouple applications from infrastructure, which enables quick software delivery. Docker shortens the time between writing code and deploying it by aligning infrastructure management with application development. Applications are packaged and run inside what are known as containers − loosely isolated environments in the Docker ecosystem. This isolation improves security and allows many containers to run concurrently on a single host. Because containers are lightweight and encapsulate everything an application needs to run, they eliminate the need to configure each host. And since containers behave consistently across shared environments, collaboration is smooth.

Docker provides comprehensive tooling and a platform for managing the container lifecycle −
You can develop applications and support their components using containers.
You can use containers as the distribution and testing unit for all your applications.
Docker allows you to deploy applications into all environments seamlessly and consistently, whether on local data centers, cloud platforms, or hybrid infrastructures.

Why is Docker Used?
Rapid Application Development and Delivery
Docker speeds up application development cycles by providing standardized environments in the form of local containers. These containers are integral to CI/CD workflows, ensuring fast and consistent application delivery. Consider the following example scenario −
The developers in your team write code on their local systems and share their work with teammates using Docker containers.
They then use Docker to deploy their applications into a test environment, where they run automated or manual tests.
If a bug is found, they fix it in the development environment, verify the build, and redeploy it to the test environment for further testing.
Once testing is done, deploying the application to production and getting the feature to the customer is as simple as pushing the updated image to the production environment.

Responsive Deployment and Scaling
Since Docker is a container-based platform, it supports highly portable workloads, allowing you to run applications seamlessly across various environments. Its portability and lightweight nature also allow for dynamic workload management, so businesses can scale applications in near real time as demand changes.

Maximizing Hardware Utilization
Docker is a cost-effective alternative to traditional virtual machines. It enables higher server capacity utilization by letting you create high-density environments and make smaller deployments, so businesses can achieve more with limited resources.

Docker Containers vs Virtual Machines
Virtual machines (VMs) and Docker containers are two widely used technologies in modern computing environments, although they have different uses and benefits. Making an informed choice between them for a given use case requires an understanding of their differences.
Architecture
Docker Containers − Docker containers are lightweight and portable, and they share the host OS kernel. They run on top of the host OS and encapsulate the application and its dependencies.
Virtual Machines − Virtual Machines, on the other hand, emulate full-fledged hardware, including a guest OS, on top of a hypervisor. Each VM runs its own OS instance, independent of the host OS.

Resource Efficiency
Docker Containers − In terms of resource utilization, Docker containers are highly efficient, since they share the host OS kernel and require fewer resources than VMs.
Virtual Machines − VMs consume more resources, since each one runs an entire operating system with its own allocation of memory, disk space, and CPU.

Isolation
Docker Containers − Containers provide process-level isolation: they share the same OS kernel but have separate filesystems and networking. This is achieved through namespaces and control groups (cgroups).
Virtual Machines − Comparatively, VMs offer stronger isolation, since each VM runs its own kernel and has its own dedicated resources. Hence, VMs are more secure but also heavier.

Portability
Docker Containers − As long as Docker is installed in an environment, containers run consistently across different environments, whether development or production. This makes them highly portable.
Virtual Machines − VMs are less flexible than containers because of differences in underlying hardware and hypervisor configurations. However, they can be made portable to some extent through disk images.

Startup Time
Docker Containers − Containers spin up almost instantly, since they reuse the running host OS kernel. This makes them best suited for microservices architectures and rapid scaling.
Virtual Machines − VMs typically take longer to start because they need to boot an entire OS, which results in slower startup times compared to containers.
Use Cases
Docker Containers − Docker containers are best suited for microservices architectures, CI/CD pipelines, and applications that require rapid deployment and scaling.
Virtual Machines − VMs are preferred for legacy applications and for workloads with strict security requirements, where strong isolation is necessary.

Docker Architecture
Docker uses a client-server architecture. The Docker client communicates with the Docker daemon, which builds, runs, and distributes your Docker containers; the daemon does all the heavy lifting. The client and daemon can run on the same machine, or the client can connect to a remote daemon. They communicate over a REST API, over UNIX sockets or a network interface.
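As a concrete illustration of that REST API, the daemon exposes versioned HTTP endpoints such as /containers/json, which is what the docker ps command calls under the hood. The sketch below only builds the request path; actually sending the request would require connecting to the daemon's UNIX socket or TCP endpoint, and the API version shown is an assumption that varies with the installed Docker Engine:

```python
def api_path(endpoint: str, version: str = "v1.43",
             all_containers: bool = False) -> str:
    """Build a Docker Engine REST API request path, e.g. the endpoint
    behind `docker ps` (/containers/json)."""
    path = f"/{version}/{endpoint}"
    if all_containers:
        path += "?all=true"   # mirrors `docker ps -a`
    return path

print(api_path("containers/json"))                        # /v1.43/containers/json
print(api_path("containers/json", all_containers=True))   # /v1.43/containers/json?all=true
```

The same path works whether the client talks to a local socket or a remote daemon, which is exactly the client-server split described above.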