Docker – Setting ASP.Net

ASP.Net is the standard web development framework provided by Microsoft for building server-side applications. Since ASP.Net has been a mainstream development framework for a long time, Docker ensures that it has support for ASP.Net. In this chapter, we will see the various steps for getting a Docker container for ASP.Net up and running.

Prerequisites

The following steps need to be carried out first for running ASP.Net.

Step 1 − Since this can only run on Windows systems, you first need to ensure that you have either Windows 10 or Windows Server 2016.

Step 2 − Next, ensure that Hyper-V and Containers are installed on the Windows system. To install Hyper-V and Containers, go to Turn Windows features on or off, ensure that the Hyper-V and Containers options are checked, and click the OK button. The system might require a restart after this operation.

Step 3 − Next, use the following PowerShell command to install the 1.13.0-rc4 version of Docker. The command downloads the archive and stores it in the temp location.

Invoke-WebRequest "https://test.docker.com/builds/Windows/x86_64/docker-1.13.0-rc4.zip" -OutFile "$env:TEMP\docker-1.13.0-rc4.zip" -UseBasicParsing

Step 4 − Next, expand the archive using the following PowerShell command.

Expand-Archive -Path "$env:TEMP\docker-1.13.0-rc4.zip" -DestinationPath $env:ProgramFiles

Step 5 − Next, add the Docker files to the PATH environment variable using the following PowerShell command.

$env:path += ";$env:ProgramFiles\Docker"

Step 6 − Next, register the Docker daemon service using the following PowerShell command.

dockerd --register-service

Step 7 − Finally, start the Docker daemon using the following command.

Start-Service Docker

Use the docker version command in PowerShell to verify that the Docker daemon is working.

Installing the ASP.Net Container

Let's see how to install the ASP.Net container.

Step 1 − The first step is to pull the image from Docker Hub. When you log into Docker Hub, you will be able to search for and see the image for microsoft/aspnet as shown below. Just type asp in the search box and click on the microsoft/aspnet link that comes up in the search results.

Step 2 − You will see the Docker pull command for ASP.Net in the details of the repository on Docker Hub.

Step 3 − Go to the Docker host and run the Docker pull command for the microsoft/aspnet image. Note that the image is pretty large, somewhere close to 4.2 GB.

Step 4 − Now go to the following location and download the entire Git repository.

Step 5 − Create a folder called App in your C drive. Then copy the contents of the 4.6.2/sample folder to your C drive. Go to the Dockerfile in the sample directory and issue the following command −

docker build -t aspnet-site-new --build-arg site_root=/ .

The following points need to be noted about the above command −

It builds a new image called aspnet-site-new from the Dockerfile.
The site root path is set through the site_root build argument.

Step 6 − Now it's time to run the container. It can be done using the following command −

docker run -d -p 8000:80 --name my-running-site-new aspnet-site-new

Step 7 − You will now have IIS running in the Docker container. To find the IP address of the Docker container, you can issue the docker inspect command as shown below.
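A minimal sketch of this inspection step, assuming the container name used in the run command above; on Windows containers the default network is usually named nat, so the exact path inside NetworkSettings may differ on your setup −

docker inspect my-running-site-new
docker inspect --format "{{ .NetworkSettings.Networks.nat.IPAddress }}" my-running-site-new

Browsing to the reported IP address on port 80 should show the default IIS page served from inside the container.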
Docker – Data Storage

By design, data should not generally be persisted directly in a Docker container, for a few reasons. First, containers were always intended to be transient: they can be stopped, started, or, in theory, destroyed at any time. Data stored inside a container is consequently lost whenever the container ceases to exist, which makes persisting and recovering your data hard. Second, the writable layer of a container is tightly coupled to the host machine on which it is running, which often makes it hard to move the data to another machine or to extract it. Furthermore, writes to this layer go through a storage driver and a union filesystem, which can cause performance overhead compared with writing to the host's filesystem.

Data can be stored within a container, too, but this can lead to problems with scaling and sharing, as more than one container may wish to access the same data, making management and synchronization of that data complex. That is why it is much better to use Docker volumes or bind mounts to store data outside the container, which provides persistence, portability, and easy access. In this chapter, let's discuss how volumes and bind mounts can be used to persist data in Docker containers.

Different Ways to Persist Data in Docker Containers

Whether you use a volume, a bind mount, or a tmpfs mount, the data inside the container is presented as a directory or file within the container's filesystem. The crucial difference is where on the Docker host the persistent data resides.

Volumes live in a Docker-managed part of the host filesystem, usually at /var/lib/docker/volumes/ on Linux. Non-Docker processes should not modify this part of the filesystem, and volumes are the best mechanism for persisting data in Docker. Bind mounts, on the other hand, can be located anywhere on the host system, even among crucial system files, and can therefore be changed by processes not managed by Docker. This makes them more flexible but less isolated. Finally, tmpfs mounts exist only in the host system's memory and never touch the underlying filesystem, which makes them perfect for ephemeral, non-persistent data.

The -v or --volume flag allows specifying a mounting point for volumes or bind mounts; the syntax for tmpfs mounts is slightly different and uses the --tmpfs flag. For maximum readability and clarity, whenever possible, use --mount, which spells out all the options explicitly as key-value pairs.

Docker Volumes

Volumes are the preferred way to persist data generated by and used in Docker containers. Docker manages them, and they are independent of the host machine's directory structure. They also have several benefits over other storage strategies such as bind mounts.

Key Features of Docker Volumes

Persistence − Data stored in volumes outlives the lifecycle of a stopped, removed, or replaced container.

Portability − Volumes are easy to back up, migrate, or share among multiple containers (see the backup sketch after this list).

Management − Control and manage Docker volumes with Docker CLI commands or via the Docker API.

Cross-platform compatibility − Volumes work on Linux and Windows containers with remarkable consistency.

Performance − Volumes perform better than bind mounts from Mac and Windows hosts when using Docker Desktop.
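As referenced above, backing up a volume is straightforward. A minimal sketch, assuming the my-vol volume used in the examples below and a local busybox image; the archive name is arbitrary −

$ docker run --rm -v my-vol:/data -v "$(pwd)":/backup busybox tar czvf /backup/my-vol-backup.tar.gz -C /data .

The temporary container mounts both the volume and the current host directory, writes the archive to the host, and is removed afterwards because of the --rm flag.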
Creating a Volume

This is the basic command to create a new volume with the name "my-vol".

$ docker volume create my-vol

Attach a Volume to a Container

The below command attaches the "my-vol" volume to the "/app/data" directory within the container. Any data written to this directory is stored persistently in the volume.

$ docker run -d --name my-container -v my-vol:/app/data my-image

Listing Volumes

This command lists all the volumes that are available in your Docker environment.

$ docker volume ls

Inspecting a Volume

This command gives detailed information about the volume, including the mount point, driver, and other details.

$ docker volume inspect my-vol

Removing a Volume

This command removes the "my-vol" volume. Warning: the data in the volume is destroyed irreversibly.

$ docker volume rm my-vol

Real-World Use Cases of Docker Volumes

Databases − Store database files in a volume so that they persist across container restarts and replacements.

Web Server Content − Store website files or user uploads in a volume, so that they remain accessible even when the web server container is replaced.

Application Logs − Store logs in a volume for easy analysis and persistence.

Docker volumes provide strong and flexible management of persistent data for containerized applications. With volumes, data remains secure and accessible even in dynamic container environments.

Bind Mounts

Bind mounting in Docker is a way to share files or directories from the host machine directly into a Docker container. Bind mounts map a file or directory on the host machine to a path in the container; unlike volumes, they are not managed by Docker.

Key Features of Bind Mounts

Direct Access − Any changes made to the files on the host are immediately reflected within the container, and vice versa.

Flexibility − You can mount any location on your host system, including system files, configuration files, or your project's source code.

Development Workflow − During development, bind mounts are especially useful: you can edit code on your host machine and see the changes in the running container almost immediately.

Mount a Host Directory

The below command mounts the current directory on your machine to the container's "/app" directory. Any changes to the files inside the current directory will be reflected inside the container, and vice versa.

$ docker run -d --name my-container -v $(pwd):/app my-image

Mount a Single File

This mounts the host file "file.txt" to the path "/etc/config.txt" in the container.

$ docker run -d --name my-container -v /path/to/file.txt:/etc/config.txt my-image

Using the --mount Flag

The --mount flag allows for a more verbose and explicit specification than -v, with every option given as a key-value pair.
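A short sketch of the equivalent --mount syntax for the examples above; the image, volume, and path names are the same placeholders used earlier −

$ docker run -d --name my-container --mount type=volume,source=my-vol,target=/app/data my-image
$ docker run -d --name my-container --mount type=bind,source="$(pwd)",target=/app my-image
$ docker run -d --name my-container --mount type=tmpfs,target=/app/cache my-image

Because every option is named, --mount commands are easier to read and less error-prone than the positional -v form, which is why --mount is generally recommended.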
Docker – Toolbox

In the introductory chapters, we have seen the installation of Docker Toolbox on Windows. Docker Toolbox was developed so that Docker containers can be run on Windows and macOS. For Windows, you need to have Windows 10 or Windows Server 2016 with Hyper-V enabled.

The toolbox consists of the following components −

Docker Engine − This is used as the base engine, or Docker daemon, that runs Docker containers.

Docker Machine − This is used for running Docker Machine commands.

Docker Compose − This is used for running Docker Compose commands.

Kitematic − This is the Docker GUI built for Windows and Mac OS.

Oracle VirtualBox − This is the virtualization software that Docker Machine uses under the hood to host the Docker Engine.

Let's now discuss the different types of activities that are possible with Docker Toolbox.

Running in PowerShell

With Docker Toolbox on Windows 10, you can now run Docker commands from PowerShell. If you open PowerShell on Windows and type the docker version command, you will get all the required details about the Docker version installed.

Pulling Images and Running Containers

You can also now pull images from Docker Hub and run containers in PowerShell as you would do in Linux. The following example briefly shows downloading the Ubuntu image and running a container from it (the full sequence is recapped at the end of this chapter).

The first step is to use the docker pull command to pull the Ubuntu image from Docker Hub. The next step is to run the Docker image using the following run command −

docker run -it ubuntu /bin/bash

You will notice that the command is the same as it was in Linux.

Kitematic

This is the GUI equivalent of Docker on Windows. To open this GUI, go to the taskbar, right-click the Docker icon, and choose to open Kitematic. It will prompt you to download the Kitematic GUI. Once downloaded, just unzip the contents. There will be a file called Kitematic.exe; double-click this exe file to open the GUI interface. You will then be asked to log into Docker Hub through the GUI. Just enter the required username and password and then click the Login button.

Once logged in, you will be able to see all the images downloaded on the system on the left-hand side of the interface. On the right-hand side, you will find all the images available on Docker Hub.

Let's take an example to understand how to download the Node image from Docker Hub using Kitematic.

Step 1 − Enter the keyword node in the search criteria.

Step 2 − Click the create button on the official Node image. You will then see the image being downloaded. Once the image has been downloaded, it will start running the Node container.

Step 3 − If you go to the settings tab, you can drill down to further settings options, as shown below.

General settings − In this tab, you can name the container, change the path settings, and delete the container.

Ports − Here you can see the different port mappings. If you want, you can create your own port mappings.

Volumes − Here you can see the different volume mappings.

Advanced − It contains the advanced settings for the container.
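To recap the command-line portion of this chapter, the full PowerShell flow looks like the sketch below; the ubuntu image is the one used in the example above −

docker version
docker pull ubuntu
docker run -it ubuntu /bin/bash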
Docker – Security

Docker security is crucial to ensure that containerized applications remain fully functional and reliable. One of the primary concerns in Docker security is effective container isolation, so that malicious activity cannot propagate. Docker uses Linux kernel namespaces and cgroups (control groups) to isolate processes and resources. By establishing namespaces, each container gets an exclusive environment and has no direct access to the resources held by other containers. Cgroups, in turn, limit the resources a container can consume, such as CPU, memory, and disk I/O, to keep distribution fair and prevent the system from being exhausted. Using just these Linux kernel features, Docker maintains a strong baseline security model that helps mitigate common threats.

Image management and proper handling of the Docker daemon are other critical areas of Docker security. This includes ensuring that images come from trusted repositories and are regularly scanned for vulnerabilities, so that compromised containers are never deployed. Docker Content Trust and image scanning services are examples of tools that help verify images for both integrity and security.

There are four major areas to consider when reviewing Docker security −

The intrinsic security of the kernel and its support for namespaces and cgroups

The attack surface exposed by the Docker daemon itself

Loopholes in container configuration profiles, either by default or when the user customizes them

The "hardening" security features of the kernel and how they interact with containers

Let's discuss more aspects of Docker container security in this chapter.

Kernel Namespaces

Namespaces are what Docker uses to run containers in isolation. Namespaces partition kernel resources so that one set of processes sees one set of resources while another set of processes sees a different set. Docker uses the following kinds of namespaces −

PID Namespace − Isolates process IDs, which means a process ID inside a container is different from the one seen on the host.

Mount Namespace − Isolates mount points in the filesystem, so that the filesystems seen inside the container are separate from those on the host.

Network Namespace − Isolates networking: interfaces, IP addresses, and routing tables.

UTS Namespace − Isolates hostname and domain name identifiers.

IPC Namespace − Isolates inter-process communication resources such as message queues, semaphores, and shared memory.

User Namespace − Isolates user and group IDs, allowing a process that runs as root inside the container to map to an unprivileged user on the host.

By using these namespaces, Docker isolates containers from each other and from the host.

Control Groups

Control groups (cgroups) are another essential Docker security feature, providing resource isolation and management. Cgroups control the amount of system resources a container can consume, preventing a single container from exhausting system resources and starving the host or other containers. Some examples of crucial resource controls offered by cgroups are −

CPU − Allocates CPU shares to the container and sets limits on its CPU usage.

Memory − Constrains a container's memory and swap usage so that it cannot use more than what is allocated.

Disk I/O − Limits how fast a container can read from and write to disk.
Network − Manages bandwidth allocation for the network.

By distributing resources fairly among containers, Docker avoids resource contention and enhances overall system stability and security.

Docker Daemon Attack Surface

The Docker daemon runs with root-level privileges, which is a serious security concern: an attacker who compromises the daemon can gain control over the whole host system. To reduce the attack surface, the following best practices should be followed −

Limit Access − Allow only specific users to access the Docker daemon and control who can run Docker commands. Restrict access by securing communication, using Unix socket permissions or TLS when exposing the daemon over TCP.

Use Rootless Mode − Use Docker's rootless mode wherever possible; in this mode, the daemon and containers run without root privileges, which reduces the possibility of privilege escalation.

Network Security − Ensure that the Docker daemon API is not exposed on the public Internet. If remote access to the daemon API is required, secure it with firewall rules and a VPN.

Regular Updates − Keep Docker and the base OS updated to safeguard against known vulnerabilities.

Capabilities of the Linux Kernel

Linux kernel capabilities provide fine-grained control over the privileges given to processes. Docker uses capabilities to reduce the privileges granted to a container, so that a container receives only the capabilities necessary for its operation. Commonly encountered capabilities include −

CAP_NET_BIND_SERVICE − Allows binding to ports below 1024.

CAP_SYS_ADMIN − Allows many different system administration operations.

CAP_SYS_PTRACE − Allows a process to trace other processes.

Docker drops many capabilities by default to lessen the potential for privilege escalation. Users can use the `--cap-add` and `--cap-drop` options to add or drop specific capabilities when launching containers, fine-tuning the security profile to the particular needs of their applications.

Docker Content Trust Signature Verification

Docker Content Trust (DCT) provides image signing and verification. This guarantees that images have not been tampered with and come from the publisher they appear to come from. When DCT is enabled, Docker checks the digital signatures of images before pulling or running them, ensuring that only trusted images are used. Here are some of the key features that make DCT an essential part of a secure supply chain −

Image Signing − Developers can sign images using their private keys.

Signature Validation − Docker verifies these signatures against the publisher's public keys to ensure that the image is unchanged and has not been tampered with.

Immutable Tags − Protect against accidental overwrites of signed images.

Enabling DCT adds to the security of the overall system, whereby only verified and trusted images are pulled and run.
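A brief sketch of how these controls look on the command line; the nginx image and the specific capability and resource values are illustrative assumptions −

# Drop all capabilities, then add back only the one the service needs
$ docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE nginx

# Apply cgroup-based resource limits to the container
$ docker run -d --cpus="1.5" --memory="512m" nginx

# Enable Docker Content Trust for this shell so that only signed images are pulled
$ export DOCKER_CONTENT_TRUST=1
$ docker pull nginx:latest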
Docker – Dockerfile

A Dockerfile is a text document in which you lay down all the instructions needed to create an image. The first entry in the file specifies the base image, a pre-made image containing the dependencies you need for your application. Then there are commands you can add to the Dockerfile to install additional software, copy files, or run scripts. The result is a Docker image: a self-sufficient, executable package with all the information needed to run an application.

Dockerfiles are a compelling way to create and deploy applications. They make it easier to create environments consistently and reproducibly, and they automate the deployment process. A Dockerfile is used to create new custom images tailored to specific needs. For instance, a Docker image can contain a particular version of a web server or a database server.

Important Instructions used in Dockerfile

A Dockerfile is a text document that includes all the steps and instructions for building a Docker image. The main elements described in a Dockerfile are the base image, the required dependencies, and the commands to execute for deploying the application within a container. The essential instructions of a Dockerfile are illustrated below −

FROM − Sets the base image on which the new image is going to be built. It is usually the first instruction in a Dockerfile.

FROM ubuntu:22.04

RUN − Executes commands inside the container while the image is being built. It is typically used to install software, update libraries, or do general setup.

RUN apt-get update && apt-get install -y python3

COPY − Copies files and directories from the host machine into the container image.

COPY ./app /app

ADD − Similar to COPY, but with extra features: it automatically decompresses local archives and can fetch files from URLs.

ADD https://example.com/file.tar.gz /app

WORKDIR − Sets the working directory in which the subsequent instructions in the Dockerfile are executed.

WORKDIR /app

ENV − Defines environment variables within the container.

ENV FLASK_APP=main.py

EXPOSE − Informs Docker that the container listens on the declared network ports at runtime.

EXPOSE 8000

CMD − Defines defaults for an executing container. There can only be one effective CMD instruction in a Dockerfile; if you list more than one, only the last CMD takes effect.

CMD ["python3", "main.py"]

ENTRYPOINT − Configures a container so that it runs as an executable.

ENTRYPOINT ["python3", "main.py"]

LABEL − Provides meta-information for an image, such as the maintainer, version, or description.

LABEL maintainer="[email protected]"

ARG − Defines a variable that users can pass to the builder at build time using the --build-arg flag on the docker build command.

ARG version=1

VOLUME − Creates a mount point with the given name and marks it as holding externally mounted volumes from the host or other containers.

VOLUME /app/data

USER − Sets the username (or UID) and optionally the group (or GID) to use when running the image and for any RUN, CMD, and ENTRYPOINT instructions that follow it in the Dockerfile.

USER johndoe

These are probably the most common and vital instructions used in a Dockerfile.
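Putting several of these together, a minimal Dockerfile for the Flask-style application assumed in the snippets above might look like this sketch; main.py and the flask dependency are assumptions −

# Base image with the Python runtime
FROM python:3.11-slim
WORKDIR /app
# Copy the application code and install its single assumed dependency
COPY . /app
RUN pip install --no-cache-dir flask
ENV FLASK_APP=main.py
EXPOSE 8000
# Run as an unprivileged user rather than root
USER nobody
CMD ["python3", "main.py"]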
However, the instructions and their order will, of course, vary according to the specific application being containerized.

Best Practices for Dockerfile

A well-written Dockerfile is central to efficient and secure containerized applications. A Dockerfile is a blueprint for building Docker images and details the environment, dependencies, and configuration needed to run your application smoothly. By following best practices, you can create leaner, faster, and more reliable Docker images, which streamline development workflows and make applications more efficient. Given below is a set of ten fundamental Dockerfile best practices −

Use Official Base Images − Build on top of the official Docker Hub images. They tend to be minimal, well-maintained, and optimized for security and size, laying a solid foundation for a custom image.

Use Multi-Stage Builds − Use multi-stage builds to slash your final image size by dropping unwanted build tools and dependencies. This way, you separate the build and runtime environments for peak efficiency (see the sketch after this list).

Minimize the Number of Layers − As you learned earlier, each instruction in a Dockerfile creates a layer. Whenever possible, combine related commands into a single RUN instruction. This reduces the number of layers per build and makes builds more cacheable.

Leverage the Build Cache − Place Dockerfile instructions that change frequently, such as the COPY of source code, towards the end, so that later changes can be rebuilt more quickly.

Install Only Necessary Packages − Install only the packages and dependencies your application needs, to reduce the image size and the potential for vulnerabilities.

Use ".dockerignore" − Add a ".dockerignore" file to exclude unnecessary files and directories from the build context. This speeds up builds and prevents sensitive information from being leaked into your image.

Use a Non-Root User − Run containers as a non-root user to enhance security. Specifying a dedicated user and group in the Dockerfile adds another layer of isolation.

Image Scanning − Scan your Docker images often for vulnerabilities; tools such as Trivy and Clair can be used for this kind of scanning. Keep your base images and dependencies up to date at all times to minimize the potential risk.

Document your Dockerfile − Comment and explain your Dockerfile; you will thank yourself later. This helps others, and your future self, understand the build process.

Pin Versions − Pin versions for base images and dependencies; this ensures reproducibility and avoids unintended issues caused by updates.

You can now optimize your container builds for speed, security, and maintainability by creating robust and efficient Dockerfiles.
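As referenced in the list above, here is a minimal multi-stage sketch; the Python base images, requirements.txt, and main.py are assumptions used only for illustration −

# Build stage: install dependencies with the full toolchain available
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: copy only the installed packages into a slim base image
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
USER nobody
CMD ["python3", "main.py"]

Only the final stage ends up in the image you ship; everything created in the builder stage is discarded, which keeps the runtime image small.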
Docker – Kubernetes Architecture

Kubernetes is an orchestration framework for Docker containers which helps expose containers as services to the outside world. For example, you can have two services: one service containing nginx and mongoDB, and another service containing nginx and redis. Each service can have an IP or service endpoint that other applications can connect to. Kubernetes is then used to manage these services.

The following diagram shows, in a simplistic format, how Kubernetes works from an architecture point of view.

The minion is the node on which all the services run. You can have many minions running at one point in time. Each minion hosts one or more PODs. Each POD is like hosting a service, and each POD contains the Docker containers. Each POD can host a different set of Docker containers. The proxy is then used to control the exposing of these services to the outside world.

Kubernetes has several components in its architecture. The role of each component is explained below −

etcd − This component is a highly available key-value store used for storing shared configuration and service discovery. Here the various applications can connect to the services via the discovery service.

Flannel − This is a backend network required for the containers.

kube-apiserver − This is an API that can be used to orchestrate the Docker containers.

kube-controller-manager − This is used to control the Kubernetes services.

kube-scheduler − This is used to schedule the containers on hosts.

Kubelet − This is used to control the launching of containers via manifest files.

kube-proxy − This is used to provide network proxy services to the outside world.
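If you have a cluster available, the standard Kubernetes command-line tool, kubectl (not covered in this chapter), can be used to see these components; the sketch below assumes a cluster is already configured −

# List the worker nodes (minions) registered with the cluster
$ kubectl get nodes

# List the control-plane pods, including kube-apiserver, kube-scheduler, and kube-controller-manager
$ kubectl get pods -n kube-system

# List the pods running your own services
$ kubectl get pods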
Docker – Image Layering and Caching

Docker image layers are fundamental components of the Docker architecture, serving as the building blocks of Docker images. Each image layer is a read-only layer that contributes to the final image and represents a distinct instruction from a Dockerfile. Starting from a base layer, typically an operating system image such as Ubuntu, further layers are added on top. Application code, environment settings, and software installations are examples of these layers. Docker employs a union filesystem so that the layers remain isolated and immutable while stacking together to appear as a single filesystem.

The efficiency and reusability benefits of layering are substantial. Through layer caching, Docker reuses common layers shared by different images, which reduces build time and storage requirements. Layer caching also makes image distribution more efficient, as only the newly added layers need to be transferred during updates. Furthermore, the immutability of layers ensures that once a layer is created it never changes, simplifying version control and guaranteeing consistency across environments.

Components of Docker Image Layers

Every layer in a Docker image represents a set of instructions taken from the Dockerfile. These layers are divided into three groups, base, intermediate, and top layers, and each group has a specific function in the process of creating an image.

Base Layer

The base layer forms the foundation of a Docker image and usually contains the minimal operating system or runtime environment required to support the application. Most of the time it is created from an existing image, such as node, alpine, or a Linux distribution image like ubuntu. This layer is essential because it establishes the framework in which all subsequent layers operate. To provide a standardized starting point, the base layer frequently contains essential libraries and dependencies shared by numerous applications. By ensuring their applications start from a dependable and consistent base image, developers simplify development and deployment across different environments.

Intermediate Layer

The layers added on top of the base layer are called intermediate layers. Each intermediate layer corresponds to a single Dockerfile instruction, such as RUN, COPY, or ADD. These layers include application dependencies, configuration files, and other essential elements that supplement the base layer. Installing software packages, copying source code into the image, or configuring environment variables are a few examples of tasks performed in intermediate layers. Intermediate layers gradually build up the application environment. Since each layer is immutable, adding or modifying an instruction creates new layers rather than changing existing ones; this immutability increases efficiency and reduces redundancy, because each layer stays consistent and reusable across different images.

Top Layer

The last layer in the Docker image is the top layer, also known as the application layer. This layer contains the actual application code as well as any final configuration required for it to run.
The top layer is the result of all the work done by the layers before it: the base environment and the incremental adjustments made by the intermediate layers combine into a finished, executable application. The top layer is what differentiates one image from another, since it is unique to the containerized application, and its contents are what the running container interacts with most directly when the image is executed.

What are Cache Layers in Docker Images?

Cache layers are an essential part of the image build process in Docker, used to optimize and speed up the creation of images. They are designed to reuse previously built layers whenever possible. This mechanism reduces the time and computational power needed to build Docker images repeatedly and improves efficiency.

Docker executes every instruction in the Dockerfile one after the other when you build a Docker image. For each instruction, Docker checks whether it has already been executed with the same context. If it has, Docker does not need to create a new layer; it reuses the one that was already built. "Layer caching" is the term for this procedure. Because the cache contains all intermediate layers created during previous builds, Docker can skip steps that have not changed and accelerate the build process considerably.

How do Cache Layers Work?

Instruction Matching − As Docker evaluates each instruction in the Dockerfile, it searches for a cached layer that matches it. Whether two layers match is determined by the instruction itself and its context, such as the files included in a COPY instruction or the exact command in a RUN instruction.

Layer Reuse − If Docker finds a match in its cache, it reuses the existing layer rather than building a new one. As a result, Docker avoids repeating the instruction, saving both time and resources.

Cache Invalidation − When an instruction's context changes, its cached layer is invalidated. For instance, if a file used in a COPY instruction changes and no matching cached layer is found, Docker has to rebuild that layer and all subsequent layers. (A cache-friendly Dockerfile ordering is sketched at the end of this chapter.)

Benefits of Cache Layers

Build Speed − The shorter build time is the main advantage. By reusing existing layers, Docker can speed up the build process considerably, particularly for large images with many layers.

Resource Efficiency − Reusing layers minimizes the amount of data that needs to be processed and stored, and conserves computational resources.

Consistency − By reusing layers that have already been tested and validated, cache layers help keep builds consistent and lower the risk of introducing new errors during rebuilds.

Cache Layers: Limitations and Considerations

While cache layers provide many benefits, they also have some limitations −

Cache Size − The cache can take up a lot of disk space, and it can become difficult to manage as it grows.
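As referenced above, a short sketch of cache-friendly layer ordering; requirements.txt and main.py are assumed file names, and the point is that the dependency-installation layer is only rebuilt when requirements.txt itself changes −

# Base layer
FROM python:3.11-slim
WORKDIR /app
# Dependency files change rarely, so this layer and the install layer below stay cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code changes often, so it is copied last to preserve the cache above
COPY . .
CMD ["python3", "main.py"]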
Docker – Building Files

We created our Dockerfile in the last chapter. It's now time to build it. A Dockerfile can be built with the docker build command. Let's learn more about this command.

docker build

This command allows users to build their own Docker images.

Syntax

docker build -t ImageName:TagName dir

Options

-t − Used to give a tag to the image.

ImageName − The name you want to give to your image.

TagName − The tag you want to give to your image.

dir − The directory where the Dockerfile is present.

Return Value

None

Example

sudo docker build -t myimage:0.1 .

Here, myimage is the name we are giving to the image and 0.1 is the tag we are giving to it. Since the Dockerfile is in the present working directory, we used "." at the end of the command to signify the present working directory.

Output

From the output, you will first see that the Ubuntu image is downloaded from Docker Hub, because no such image is available locally on the machine. Finally, when the build is complete, all the instructions in the Dockerfile will have been run against the image. You will then see the "successfully built" message and the ID of the new image. When you run the docker images command, you will be able to see your new image. You can now build containers from your new image.
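A short sketch of verifying and using the freshly built image; the container name is only an example −

$ sudo docker images
$ sudo docker run -d --name mycontainer myimage:0.1
$ sudo docker ps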
Docker – Architecture

One of the most difficult tasks for DevOps and SRE teams is managing all application dependencies and technology stacks across many cloud and development environments. To do this, their processes often have to keep the application working regardless of where it runs, usually without changing much of its code. Docker helps engineers be more efficient and reduces operational overhead, allowing any developer in any development environment to create robust and reliable applications.

Docker is an open platform for building, shipping, and running software. Docker allows you to decouple your applications from your infrastructure, making it possible to release software quickly, and to manage your infrastructure in the same way you manage your applications. Using Docker's methodology for shipping, testing, and deploying code can drastically cut the time between writing code and running it in production.

Docker uses a client-server architecture. The Docker client communicates with the Docker daemon, which does the heavy work of creating, executing, and distributing Docker containers. The Docker client can run alongside the daemon on the same host, or the client and daemon can be linked remotely. The Docker client and daemon communicate via a REST API over a UNIX socket or a network. In this chapter, let's discuss the Docker architecture in detail.

Difference between Containers and Virtual Machines

A Virtual Machine (VM) exists to accomplish tasks that would be risky if performed directly in the host environment. VMs are segregated from the rest of the system, so the software inside a virtual machine cannot interfere with the host computer. A virtual machine is a computer file or piece of software, commonly referred to as a guest, or an image produced within a computing environment known as the host. A virtual machine can run applications and programs as if it were a separate computer, making it excellent for testing other operating systems such as beta releases, creating operating system backups, and installing software and applications. A host can run multiple virtual machines at the same time. A virtual machine contains several essential files, including a log file, an NVRAM settings file, a virtual disk file, and a configuration file.

Server virtualization is another area where virtual machines are extremely useful. Server virtualization divides a physical server into numerous isolated and unique servers, allowing each to run its own operating system independently. Each virtual machine has its own virtual hardware, including CPUs, RAM, network ports, hard drives, and other components.

On the other hand, Docker is a software development tool and virtualization technology that allows you to easily create, deploy, and manage applications using containers. A container is a lightweight, standalone, executable bundle of software that includes all of the libraries, configuration files, dependencies, and other components required to run the application. In other words, applications run the same way regardless of where they are or what machine they are running on, since the container provides a consistent environment for the application throughout its software development life cycle. Because containers are isolated, they offer security while allowing numerous containers to run concurrently on the same host. Furthermore, containers are lightweight because they do not require the additional load of a hypervisor.
A hypervisor, such as VMware or VirtualBox, runs guest operating systems on top of the host, whereas containers run directly on the host machine's kernel.

Should I Choose Docker or a Virtual Machine (VM)?

It would be unfair to compare Docker and virtual machines directly, because they are intended for different purposes. Docker is undoubtedly gaining popularity, but it cannot be considered a replacement for virtual machines. Despite Docker's popularity, a virtual machine is the superior option in some circumstances. Virtual machines are preferred over Docker containers in production environments where workloads need to run their own operating system and pose no threat to the host computer. However, for testing purposes, Docker is often the better option because it provides multiple OS platforms for thorough testing of software or applications.

Additionally, a Docker container employs the Docker Engine rather than a hypervisor, as in a virtual machine. Because containers share the host kernel, the Docker Engine makes them compact, isolated, compatible, high-performing, and quick to respond. Docker containers have little overhead since they share a single kernel and application libraries.

Organizations often adopt a hybrid approach, since the choice between virtual machines and Docker containers is determined by the type of workload being delivered. Furthermore, fewer digital businesses now rely on virtual machines as their primary choice, opting for containers instead, because VM deployment is time-consuming and running microservices on them is one of the biggest obstacles they face. However, some businesses still prefer virtual machines to Docker, primarily those that want enterprise-grade security for their infrastructure.

Components of Docker Architecture

The key components of the Docker architecture are the Docker Engine, Docker registries, and Docker objects (images, containers, networks, storage). Let's discuss each of them to get a better understanding of how the different components of the Docker architecture interact with each other.

Docker Engine

Docker Engine is the foundation of the Docker platform, facilitating every part of the container lifecycle. It consists of three basic components: a command-line interface, a REST API, and a daemon (which does the actual work).

The Docker daemon, commonly known as "dockerd", continually listens for Docker API requests. It performs all of the heavy lifting, such as creating and managing Docker objects like containers, volumes, images, and networks. A Docker daemon can communicate with other daemons on the same or separate host machines. For example, in a swarm cluster, the host machine's daemon can connect with daemons on other nodes to complete tasks.

The Docker API allows applications to control the Docker Engine. They can use it to look up details on containers or images, manage or upload images, and take actions such as creating new containers. This functionality is exposed over HTTP, so any HTTP client can use it (see the sketch at the end of this section).

Docker Registries

Docker registries are storage facilities or services that enable you to store and distribute Docker images.
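As referenced above, a small illustration of the Docker Engine REST API; this is only a sketch, and the exact endpoint path may vary with the API version on your installation −

# List running containers via the Docker Engine API over the local Unix socket
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# The same information through the CLI, which talks to the same API
$ docker ps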
Docker – Daemon Configuration

The Docker Daemon, commonly referred to as "dockerd", is the core element of the Docker platform. It is in charge of overseeing Docker objects such as volumes, networks, images, and containers. It handles requests from the Docker client and other Docker components while running continuously in the background on a host computer. With features like resource isolation, networking, and container lifecycle management, the daemon is indispensable for developing, executing, and managing containerized apps with Docker.

Among the main responsibilities of the Docker Daemon is carrying out user commands sent through the Docker client, converting them into actions such as pulling container images from registries, creating and maintaining containers, and networking containers together. It also controls how containers communicate with the host system, which guarantees effective resource use and isolation. The daemon is the central component of the Docker ecosystem, abstracting away the complexity of containerization so that developers and system administrators can concentrate on creating and deploying applications easily.

Key Components of Docker Daemon

The Docker daemon comprises several key components that work together to enable containerization −

Docker Engine

It is the central component of the platform, handling the tasks of creating, executing, and overseeing containers. It is made up of several smaller parts −

containerd − Manages the lifecycle of containers, including creating, running, pausing, and stopping them.

runc − Implements the container runtime in compliance with the OCI (Open Container Initiative) specifications.

libnetwork − Provides networking support for containers, allowing them to communicate with one another and with external networks.

SwarmKit − Provides orchestration functionality to oversee a cluster of Docker hosts, enabling robust and scalable container deployments.

Docker REST API

Provides a collection of endpoints for connecting to the Docker daemon. Users can programmatically manage containers, images, networks, and volumes by interacting with Docker through the API.

Docker CLI

The Docker daemon can be easily interacted with through the command-line interface (CLI). The CLI allows users to build, run, inspect, and manage Docker objects, including containers, by issuing commands.

Docker Registry

Docker images are packaged, portable units that contain the application code along with its libraries, dependencies, and runtime. They are stored in a Docker registry, which acts as a repository from which images can be pushed and pulled, making it easier to share and distribute containerized applications.

How to Configure Docker Daemon?

Configuring the Docker Daemon is essential to controlling how your containerization environment behaves and performs. Knowing how to start, stop, and configure the Docker Daemon helps you optimize resource utilization, security, and scalability. To help you navigate Docker Daemon configuration successfully, this guide walks through each step of the process, complete with commands and explanations.

Starting Docker Daemon

Before configuring the Docker Daemon, you should ensure that the daemon is running on your host machine. The process to start the Docker Daemon might vary slightly depending on the host machine OS. To check whether the Docker Daemon is running on your system, you can use the systemctl status command.
$ sudo systemctl status docker

Starting Docker Daemon on Linux

To start the Docker Daemon manually on Linux, you can use the following command −

$ sudo systemctl start docker

Starting Docker Daemon on Windows / macOS

The easiest way to automatically start and use Docker is by installing Docker Desktop on Windows and Mac host systems. It provides a user-friendly interface to manage Docker. To start the Docker Daemon, you can simply launch Docker Desktop.

Configuring Docker Daemon

You can customize the Docker containerization environment by setting custom options in the daemon configuration file. This is a JSON file, commonly located at "/etc/docker/daemon.json" on Linux.

How to edit the Daemon Configuration File?

You can open the Docker Daemon configuration file using a text editor. On Linux, you can use the vi or nano commands. For example −

$ sudo nano /etc/docker/daemon.json
$ sudo vi /etc/docker/daemon.json

How to Set Daemon Options?

You can make changes in the above-mentioned Docker Daemon JSON file to set daemon options. For example, if you want to update the logging driver and log level, you can use the following lines −

{ "log-driver": "json-file", "log-level": "debug" }

How to Configure Network Settings in Docker?

You can use the Docker Daemon configuration to set network settings for container communication. For example, if you want to specify a custom subnet for Docker's default bridge network, you can use the lines below.

{ "bip": "172.20.0.1/16" }

How to Limit CPU and Memory for Containers?

To prevent resource contention, you should enforce resource constraints on containers. Docker does not set per-container CPU and memory defaults in daemon.json; these limits are applied when a container is started. For example, the following command restricts a container to two CPUs and 2 GB of memory −

$ docker run -d --cpus="2" --memory="2g" my-image

How to Secure the Docker Daemon?

You can enhance the security of the Docker Daemon by enabling TLS authentication, restricting access to the Docker API, and configuring user namespaces. You can do so by using configurations like the one below −

{ "tls": true, "tlscacert": "/path/to/ca.pem", "tlscert": "/path/to/cert.pem", "tlskey": "/path/to/key.pem" }

Common Issues Faced While Using Docker Daemon

Let's address and troubleshoot a few common issues faced by Docker users and the steps to resolve them.

Issue 1. Docker Daemon Not Starting or Crashing

Errors such as "Cannot connect to the Docker daemon" are frequently encountered when the Docker Daemon unexpectedly crashes or fails to start. The first step in fixing this is to look through the Docker Daemon logs ("journalctl -u docker.service" on Linux) to find the specific error messages that occurred during startup. Try restarting the Docker service, using "systemctl restart docker" on Linux or Docker Desktop on Windows/macOS. Make sure that no other services are in conflict with Docker over the same ports or resources. Reinstalling Docker might help to fix any remaining conflicts if the issue continues.

Issue 2. Resource Exhaustion

Exhaustion of resources such as CPU, memory, or disk space can cause system hangs, container crashes, or sluggish performance. Use tools such as docker stats and docker system df to monitor resource usage, apply per-container limits with flags like --cpus and --memory, and reclaim unused disk space with docker system prune.
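For reference, the daemon.json settings shown above can be combined into a single file; this is only a sketch, and the certificate paths are placeholders −

{
  "log-driver": "json-file",
  "log-level": "debug",
  "bip": "172.20.0.1/16",
  "tls": true,
  "tlscacert": "/path/to/ca.pem",
  "tlscert": "/path/to/cert.pem",
  "tlskey": "/path/to/key.pem"
}

After editing the file, restart the daemon (for example, with sudo systemctl restart docker on Linux) for the changes to take effect.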