Docker – Setting Up ASP.Net

ASP.Net is the standard web development framework provided by Microsoft for developing server-side applications. Since ASP.Net has been around for a long time, Docker has ensured that it has support for ASP.Net. In this chapter, we will see the various steps for getting a Docker container for ASP.Net up and running.

Prerequisites
The following steps need to be carried out first for running ASP.Net.

Step 1 − Since this can only run on Windows systems, you first need to ensure that you have either Windows 10 or Windows Server 2016.

Step 2 − Next, ensure that Hyper-V and Containers are installed on the Windows system. To install Hyper-V and Containers, you can go to Turn Windows Features ON or OFF. Then ensure that the Hyper-V and Containers options are checked and click the OK button. The system might require a restart after this operation.

Step 3 − Next, you need to use the following PowerShell command to install the 1.13.0-rc4 version of Docker. The following command will download it and store it in the temp location.

Invoke-WebRequest "https://test.docker.com/builds/Windows/x86_64/docker-1.13.0-rc4.zip" -OutFile "$env:TEMP\docker-1.13.0-rc4.zip" -UseBasicParsing

Step 4 − Next, you need to expand the archive using the following PowerShell command.

Expand-Archive -Path "$env:TEMP\docker-1.13.0-rc4.zip" -DestinationPath $env:ProgramFiles

Step 5 − Next, you need to add the Docker files to the PATH environment variable using the following PowerShell command.

$env:path += ";$env:ProgramFiles\Docker"

Step 6 − Next, you need to register the Docker daemon service using the following PowerShell command.

dockerd --register-service

Step 7 − Finally, you can start the Docker daemon using the following command.

Start-Service Docker

Use the docker version command in PowerShell to verify that the Docker daemon is working.

Installing the ASP.Net Container
Let's see how to install the ASP.Net container.

Step 1 − The first step is to pull the image from Docker Hub. When you log into Docker Hub, you will be able to search for and see the image for microsoft/aspnet as shown below. Just type asp in the search box and click on the microsoft/aspnet link which comes up in the search results.

Step 2 − You will see the Docker pull command for ASP.Net in the details of the repository in Docker Hub.

Step 3 − Go to the Docker host and run the Docker pull command for the microsoft/aspnet image. Note that the image is pretty large, somewhere close to 4.2 GB.

Step 4 − Now go to the following location and download the entire Git repository.

Step 5 − Create a folder called App in your C drive. Then copy the contents of the 4.6.2/sample folder to your C drive. Go to the Docker File in the sample directory and issue the following command −

docker build -t aspnet-site-new --build-arg site_root=/

The following points need to be noted about the above command −
It builds a new image called aspnet-site-new from the Docker File.
The root path is set to the local path folder.

Step 6 − Now it's time to run the container. It can be done using the following command −

docker run -d -p 8000:80 --name my-running-site-new aspnet-site-new

Step 7 − You will now have IIS running in the Docker container. To find the IP address of the Docker container, you can issue the Docker inspect command as shown below.
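As a sketch of what that inspect command can look like, the following uses a Go-template filter to print only the container IP. It assumes the default Windows container network is named nat and reuses the container name from Step 6; adjust both to your environment.

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" my-running-site-new

You can then browse to http://<that-IP-address> to see the IIS welcome page served from inside the container.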
Docker – Data Storage

By design, data should not generally be persisted directly in a Docker container, for a few reasons. First, containers were always intended to be transient. In other words, they can be stopped, started, or, in theory, destroyed at any time. Data stored inside a container is consequently lost whenever the container ceases to exist, which makes data persistence and recovery hard. Second, the writable layer of a container is tightly coupled to the host machine on which it is running, often making it hard to move the data to another machine or to extract it. Furthermore, writing to this layer goes through a storage driver and a union file system, which may cause performance overhead compared to writing directly to the host's file system. Storing data inside a container can also lead to problems with scaling and sharing, as more than one container may need to access the same data, making management and synchronization of that data complex. That is why it is much better to use Docker volumes or bind mounts for storing data outside the container, which provides persistence, portability, and easy access. In this chapter, let's discuss how volumes and bind mounts can be used to persist data in Docker containers.

Different Ways to Persist Data in Docker Containers
Whether you use a volume, a bind mount, or a tmpfs mount, the data inside the container is presented as a directory or file within the container's filesystem. Here is the crucial difference: the location on the Docker host where the persistent data resides. Volumes live in a Docker-managed part of the host filesystem, usually /var/lib/docker/volumes/ on Linux. Non-Docker processes should not modify this part of the filesystem, and volumes are the preferred mechanism for holding data persistently in Docker. Bind mounts, on the other hand, can be located anywhere on the host system, even among crucial system files, and can therefore be changed by processes not managed by Docker. This makes them more flexible but less isolated. Finally, tmpfs mounts exist only in the host system's memory and never touch the underlying filesystem – perfect for ephemeral, non-persistent data.

The -v or --volume flag allows specifying a mount point for volumes or bind mounts. The syntax for tmpfs mounts is slightly different: use the --tmpfs flag. For maximum readability and clarity, whenever possible, use --mount, which spells all the options out as explicit key-value pairs.

Docker Volumes
Volumes are the preferred way of persisting data generated by and used in Docker containers. Docker manages them, and they are independent of the host machine's filesystem layout. They also have several benefits over other storage strategies like bind mounts.

Key Features of Docker Volumes
Persistence − Data stored in volumes outlives the lifecycle of a stopped, removed, or replaced container.
Portability − Volumes are easy to back up, migrate, or share among multiple containers.
Management − Docker volumes can be controlled and managed with Docker CLI commands or via the Docker API.
Cross-platform compatibility − Volumes work on Linux and Windows containers with remarkable consistency.
Performance − With Docker Desktop, volumes offer better performance than bind mounts from Mac and Windows hosts.
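As an illustration of the three mount types discussed above, the following commands show one container per type. The image name my-image, the container names, and the host path /srv/app-config are placeholders used only for this sketch.

$ docker run -d --name vol-demo --mount type=volume,source=my-vol,target=/app/data my-image
$ docker run -d --name bind-demo --mount type=bind,source=/srv/app-config,target=/app/config my-image
$ docker run -d --name tmpfs-demo --tmpfs /app/cache my-image

The first command stores data in a Docker-managed volume, the second maps an existing host directory into the container, and the third keeps /app/cache purely in memory.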
Creating a Volume
This is the basic command to create a new volume with the name "my-vol".

$ docker volume create my-vol

Attaching a Volume to a Container
The below command attaches the "my-vol" volume to the "/app/data" directory within the container. Any data written to this directory is stored persistently in the volume.

$ docker run -d --name my-container -v my-vol:/app/data my-image

Listing Volumes
This command lists all the volumes that are available in your Docker environment.

$ docker volume ls

Inspecting a Volume
This command gives detailed information about the volume, including the mount point, driver, and other details.

$ docker volume inspect my-vol

Removing a Volume
This command removes the "my-vol" volume. Warning: the data in the volume is destroyed irreversibly.

$ docker volume rm my-vol

Real-World Use Cases of Docker Volumes
Databases − Store database files in a volume so that they persist across container restarts.
Web Server Content − Store website files or user uploads in a volume, so that even when the web server container is replaced, they remain accessible.
Application Logs − Store logs in a volume for easy analysis and persistence.

Docker volumes bring strong and flexible management of persistent data to containerized applications. With volumes, data remains secure and accessible even in dynamic container environments.

Bind Mounts
Bind mounting in Docker is a way to share files or directories from the host machine directly with a Docker container. Bind mounts map a file or directory on the host machine to a path in the container; unlike volumes, they are not managed by Docker.

Key Features of Bind Mounts
Direct Access − Any changes made to the files on the host are immediately reflected within the container, and vice versa.
Flexibility − You can mount any location on your host system, including system files, configuration files, or your project's source code.
Development Workflow − In development, bind mounts prove to be a boon, as you can edit code on your host drive and see the changes in the running container almost immediately.

Mounting a Host Directory
The below command mounts the current directory on your machine to the container's "/app" directory. Any changes to the files inside the current directory will be reflected inside the container and vice versa.

$ docker run -d --name my-container -v $(pwd):/app my-image

Mounting a Single File
This mounts the host file "file.txt" to the path "/etc/config.txt" in the container.

$ docker run -d --name my-container -v /path/to/file.txt:/etc/config.txt my-image

Using the --mount Flag
The --mount flag allows for a more verbose and explicit specification of the mount, with the type, source, and target given as key-value pairs.
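A minimal sketch of the same current-directory bind mount expressed with --mount is shown below; the container and image names are the same placeholders used earlier.

$ docker run -d --name my-container --mount type=bind,source="$(pwd)",target=/app my-image

The behavior is identical to the -v form, but the long syntax makes the mount type and each option explicit, which is easier to read and less error-prone in scripts.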
Docker – Toolbox

In the introductory chapters, we have seen the installation of Docker Toolbox on Windows. Docker Toolbox was developed so that Docker containers can be run on Windows and macOS. The site for the toolbox on Windows is

For Windows, you need to have Windows 10 or Windows Server 2016 with Hyper-V enabled.

The toolbox consists of the following components −
Docker Engine − This is the base engine, or Docker daemon, used to run Docker containers.
Docker Machine − For running Docker Machine commands.
Docker Compose − For running Docker Compose commands.
Kitematic − This is the Docker GUI built for Windows and Mac OS.
Oracle VirtualBox

Let's now discuss the different types of activities that are possible with Docker Toolbox.

Running in PowerShell
With Docker Toolbox on Windows 10, you can now run Docker commands from PowerShell. If you open PowerShell on Windows and type in the docker version command, you will get all the required details about the Docker version installed.

Pulling Images and Running Containers
You can also pull images from Docker Hub and run containers in PowerShell as you would do in Linux. The following example shows in brief the downloading of the Ubuntu image and the running of a container from that image. The first step is to use the Docker pull command to pull the Ubuntu image from Docker Hub. The next step is to run the Docker image using the following run command −

docker run -it ubuntu /bin/bash

You will notice that the command is the same as it was in Linux.

Kitematic
This is the GUI equivalent of Docker on Windows. To open this GUI, go to the taskbar, right-click the Docker icon, and choose to open Kitematic. It will prompt you to download the Kitematic GUI. Once downloaded, just unzip the contents. There will be a file called Kitematic.exe. Double-click this exe file to open the GUI interface. You will then be requested to log into Docker Hub through the GUI. Just enter the required username and password and then click the Login button. Once logged in, you will be able to see all the images downloaded on the system on the left-hand side of the interface. On the right-hand side, you will find all the images available on Docker Hub.

Let's take an example to understand how to download the Node image from Docker Hub using Kitematic.

Step 1 − Enter the keyword node in the search criteria.
Step 2 − Click the create button on the official Node image. You will then see the image being downloaded. Once the image has been downloaded, it will start running the Node container.
Step 3 − If you go to the settings tab, you can drill down to further settings options, as shown below.
General settings − In this tab, you can name the container, change the path settings, and delete the container.
Ports − Here you can see the different port mappings. If you want, you can create your own port mappings.
Volumes − Here you can see the different volume mappings.
Advanced − It contains the advanced settings for the container.
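To illustrate the Docker Machine component mentioned above, a few commonly used docker-machine commands are sketched below. The machine name default is the name Toolbox typically assigns to its VirtualBox VM; treat it as an assumption and substitute your own machine name.

docker-machine ls
docker-machine ip default
docker-machine env default

The first command lists the Docker machines on the system, the second prints the IP address of the VM (useful for reaching published container ports), and the third prints the environment variables needed to point your Docker client at that machine.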
Docker – Security

Docker security is crucial to ensure that containerized applications remain fully functional and reliable. One of the primary concerns in Docker security is effective container isolation, so that malicious activity cannot propagate. Docker utilizes Linux kernel namespaces and control groups (cgroups) to isolate processes and resources. By establishing namespaces, each container gets an exclusive environment and has no direct access to the resources held by other containers. Cgroups, on the other hand, limit the resources a container can consume, be it CPU, memory, disk I/O, etc., to keep distribution fair and the system from getting exhausted. Using just these features of the Linux kernel, Docker maintains a solid baseline security model that helps mitigate common threats.

Image management and proper handling of the Docker daemon are another critical area in ensuring Docker security. This includes ensuring that images come from trusted repositories and are regularly scanned for vulnerabilities, so that compromised containers are not deployed. Docker Content Trust and image scanning services are examples of tools that assist in verifying images for both integrity and security.

There are four major areas to consider when reviewing Docker security −
The intrinsic security of the kernel and its support for namespaces and cgroups
The attack surface exposed by the Docker daemon itself
Loopholes in container configuration profiles, either by default or when the user customizes them
The "hardening" security features of the kernel and how these security features interact with containers

Let's discuss more aspects of Docker container security in this chapter.

Kernel Namespaces
Namespaces are basically what Docker uses to run containers in isolation. Namespaces partition kernel resources so that one set of processes sees one set of resources, and another set of processes sees a different set. Docker uses the following kinds of namespaces −
PID Namespace − It isolates process IDs, which means a process ID inside a container will be different from that on the host.
Mount Namespace − This isolates mount points in the file system to ensure that the file systems seen inside the container are isolated from those on the host.
Network Namespace − Isolates networking: interfaces, IP addresses, and routing tables.
UTS Namespace − Isolates the hostname and domain name identifiers.
IPC Namespace − Isolates IPC resources such as message queues, semaphores, and shared memory.
User Namespace − Isolates user and group IDs, allowing a process to run as root inside the container while being mapped to an unprivileged user on the host.

Docker achieves the isolation of containers from each other and from the host by using these namespaces.

Control Groups
Another essential security feature of Docker that provides resource isolation and management is control groups. Cgroups control the amount of system resources a container can consume, preventing a single container from exhausting system resources on the host at the expense of other containers. Some examples of crucial resource controls offered by cgroups are −
CPU − Shares the CPU among containers and sets a container's CPU usage limit.
Memory − Constrains the memory usage of a container, including swap memory, to prevent a container from using more memory than what is allocated.
Disk I/O − Determines how quickly a container can read from and write to disk.
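For instance, several of these cgroup limits can be applied directly from the CLI when starting a container. The values below are arbitrary examples chosen only for illustration.

$ docker run -d --name limited-nginx --memory=512m --cpus=1.5 --pids-limit=100 nginx

Here the container is capped at 512 MB of memory, one and a half CPU cores, and 100 processes, so a runaway workload inside it cannot starve the rest of the host.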
Network − Manages bandwidth allocation for the network.

By distributing resources fairly among containers, Docker avoids resource contention and thus enhances overall system stability and security.

Docker Daemon Attack Surface
The Docker daemon runs with root-level privileges, which is a serious security concern: an attacker who compromises it can gain control over the whole host system. To reduce the attack surface, the following best practices should be followed −
Limit Access − Allow access to the Docker daemon only for specific users and control who can run Docker commands – restrict access to secured communication using Unix socket permissions or TCP with TLS.
Use Rootless Mode − Use Docker's rootless mode as much as possible; in this mode, the daemon and containers run without root privileges. It is regarded as one way to reduce the possibility of privilege escalation.
Network Security − Ensure that the Docker daemon API is not exposed on the public Internet. If Docker daemon API access is required from remote places, secure it through firewall rules and a VPN.
Regular Updates − Keep Docker and the base OS updated to safeguard against known vulnerabilities.

Capabilities of the Linux Kernel
Linux kernel capabilities provide fine-grained control over the privileges given to processes. Docker uses capabilities to reduce the privileges a container is allocated, so that a container is granted only the capabilities necessary for its operation. The following capabilities are commonly involved −
CAP_NET_BIND_SERVICE − Allows binding to ports below 1024.
CAP_SYS_ADMIN − This capability allows many different system administration operations.
CAP_SYS_PTRACE − This capability allows a process to trace other processes.
Docker drops many capabilities by default to lessen the potential for privilege escalation. Users can use the --cap-add and --cap-drop options to, respectively, add or drop specific capabilities when launching containers, enabling fine-tuning of the security profile to the particular needs of their applications.

Docker Content Trust Signature Verification
Docker Content Trust (DCT) provides image signing and verification. This guarantees that images have not been tampered with and come from whom they appear to come from. When DCT is enabled, Docker checks the digital signatures of images before pulling or running them, ensuring that only trusted images are used. Here are some of the key features that make DCT such an essential part of your secure supply chain −
Image Signing − Developers can sign images using their private keys.
Signature Validation − Docker verifies these signatures against the publishers' public keys to ensure that the image is unchanged and tamper-free.
Immutable Tags − Protect against accidental overwrites of signed images.
Enabling DCT strengthens the security of the overall system, whereby only verified images are used.
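As a sketch of how the capability options and DCT are used in practice, consider the commands below. The image my-web-image is a placeholder, and the exact set of capabilities a real image needs varies, so treat the --cap-add choice as an assumption.

$ docker run -d --name locked-down --cap-drop ALL alpine sleep 3600
$ docker run -d --name web --cap-drop ALL --cap-add NET_BIND_SERVICE my-web-image
$ export DOCKER_CONTENT_TRUST=1
$ docker pull alpine:latest

The first two commands start containers with every capability dropped, adding back only what the workload needs; the last two enable Docker Content Trust for the current shell so that subsequent pulls require signed images.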
Docker – Dockerfile

A Dockerfile is a text document in which you lay down all the instructions needed to create an image. The first entry in the file specifies the base image, a pre-made image containing the dependencies you need for your application. Then there are commands you can add to the Dockerfile to install additional software, copy files, or run scripts. The result is a Docker image: a self-sufficient, executable package with all the information needed to run an application.

Dockerfiles are a compelling way to create and deploy applications. They help in creating an environment consistently, reproducibly, and more easily, and they automate the deployment process. A Dockerfile is used to create new custom images tailored to specific needs. For instance, a Docker image can contain a particular version of a web server or a database server.

Important Instructions used in Dockerfile
A Dockerfile is a text document that includes all the different steps and instructions on how to build a Docker image. The main elements described in the Dockerfile are the base image, required dependencies, and commands to execute application deployment within a container. The essential instructions of a Dockerfile are illustrated below −

FROM
This instruction sets the base image on which the new image is going to be built. It is usually the first instruction in a Dockerfile.
FROM ubuntu:22.04

RUN
This instruction executes commands inside the container while the image is being built. It is typically used to install applications, update libraries, or do general setup.
RUN apt-get update && apt-get install -y python3

COPY
This instruction copies files and directories from the host machine into the container image.
COPY ./app /app

ADD
Like COPY but with more advanced features: it auto-decompresses local archives and can fetch files from URLs.
ADD https://example.com/file.tar.gz /app

WORKDIR
This instruction sets the working directory in which the subsequent instructions in the Dockerfile will be executed.
WORKDIR /app

ENV
The ENV instruction defines environment variables within the container.
ENV FLASK_APP=main.py

EXPOSE
This instruction tells Docker that the container listens on the declared network ports at runtime.
EXPOSE 8000

CMD
Defines defaults for an executing container. Only one CMD instruction takes effect in a Dockerfile; if you list more than one CMD, only the last one is used.
CMD ["python3", "main.py"]

ENTRYPOINT
This instruction configures a container to run as an executable.
ENTRYPOINT ["python3", "main.py"]

LABEL
This instruction provides metadata for an image, such as details of the maintainer, version, or description.
LABEL maintainer="[email protected]"

ARG
This instruction defines a variable that users can pass to the builder at build time using the --build-arg flag on the docker build command.
ARG version=1

VOLUME
It creates a mount point with the given name, indicating that it will hold externally mounted volumes from the native host or other containers.
VOLUME /app/data

USER
This instruction sets the username (or UID) and optionally the group (or GID) to be used when running the image and for any RUN, CMD, and ENTRYPOINT instructions that follow it in the Dockerfile.
USER johndoe

These are probably the most common and vital instructions used in a Dockerfile.
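Putting several of these instructions together, a minimal illustrative Dockerfile for the Flask-style application referenced in the snippets above might look like the sketch below. The file names main.py and requirements.txt, and the presence of the source under ./app, are assumptions made only for this example.

# Base image with Python installed on top of Ubuntu
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3 python3-pip
# Set the working directory for the following instructions
WORKDIR /app
# Copy the application source from the build context (assumed to live in ./app)
COPY ./app /app
# Install Python dependencies (requirements.txt is assumed to exist in ./app)
RUN pip3 install -r requirements.txt
# Configure the app via an environment variable and document the listening port
ENV FLASK_APP=main.py
EXPOSE 8000
# Default command when the container starts
CMD ["python3", "main.py"]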
However, the instructions and their order will, of course, vary according to the specific application to be containerized.

Best Practices for Dockerfile
A well-written Dockerfile is central to efficient and secure containerized applications. A Dockerfile is a blueprint for building Docker images and details the environment, dependencies, and configurations needed to run your application smoothly. By following best practices, you can create leaner, faster, and more reliable Docker images, which in turn streamlines development workflows and increases application efficiency. Given below is a set of 10 fundamental Dockerfile best practices −

Use Official Base Images − Build on top of the official Docker Hub images. They tend to be minimal, well-maintained, and usually optimized for security and size, laying a solid foundation for a custom image.
Use Multi-Stage Builds − Use multi-stage builds to slash your final image size by dropping unwanted build tools and dependencies. This way, you separate the build and runtime environments for peak efficiency (a sketch is given at the end of this section).
Minimize the Number of Layers − As you learned earlier, each instruction in a Dockerfile creates a layer. Whenever possible, combine related commands into a single RUN instruction. This reduces the number of layers created per build and makes builds more cacheable.
Leverage the Build Cache − Place Dockerfile instructions that change more frequently, such as the COPY of your source code, towards the end. This enables faster rebuilds when changes are made at later stages.
Install Only Necessary Packages − Install only the packages and dependencies your application needs, to reduce the image size and the potential vulnerabilities in it.
Use ".dockerignore" − To exclude unnecessary files and directories from the build context, add a ".dockerignore" file. This speeds up builds and prevents sensitive information from leaking into your image.
Use a Non-Root User − Run containers as a non-root user to enhance security. Specifying a dedicated user and group in the Dockerfile adds another isolation layer.
Image Scanning − Scan your Docker images often for vulnerabilities; tools such as Trivy and Clair can be used for this kind of scanning. Keep your base images and dependencies up to date at all times to minimize the potential risk.
Document your Dockerfile − Comment and explain your Dockerfile; you'll thank yourself later. This helps others, and your future self, understand the build process.
Pin Versions − Pin versions for base images and dependencies; this ensures reproducibility and avoids unintended issues caused by updates.

By following these practices, you can optimize your container builds for speed, security, and maintainability with robust and efficient Dockerfiles.
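The multi-stage build sketch referred to above is shown below. It assumes a Go module with a main package at the root of the build context; the image tags, output path, and binary name are placeholders chosen for illustration.

# Build stage: compile the application with the full toolchain
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
# Build a statically linked binary so it runs on a minimal base image
RUN CGO_ENABLED=0 go build -o /app-binary .

# Runtime stage: ship only the compiled binary, not the toolchain
FROM alpine:3.19
COPY --from=builder /app-binary /usr/local/bin/app
# Run as an unprivileged user, per the non-root best practice above
USER nobody
ENTRYPOINT ["/usr/local/bin/app"]

Only the final stage ends up in the published image, so the Go compiler and the source tree are left behind, keeping the runtime image small.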
Docker – Kubernetes Architecture

Kubernetes is an orchestration framework for Docker containers which helps expose containers as services to the outside world. For example, you can have two services − one service could contain nginx and mongoDB, and another service could contain nginx and redis. Each service can have an IP or service point which other applications can connect to. Kubernetes is then used to manage these services.

The following diagram shows, in a simplistic format, how Kubernetes works from an architecture point of view. The minion is the node on which all the services run. You can have many minions running at any point in time. Each minion hosts one or more PODs. Each POD is like hosting a service. Each POD then contains the Docker containers. Each POD can host a different set of Docker containers. The proxy is then used to control the exposing of these services to the outside world.

Kubernetes has several components in its architecture. The role of each component is explained below −
etcd − This component is a highly available key-value store that is used for storing shared configuration and service discovery. Here the various applications will be able to connect to the services via the discovery service.
Flannel − This is a backend network which is required for the containers.
kube-apiserver − This is an API which can be used to orchestrate the Docker containers.
kube-controller-manager − This is used to control the Kubernetes services.
kube-scheduler − This is used to schedule the containers on hosts.
Kubelet − This is used to control the launching of containers via manifest files.
kube-proxy − This is used to provide network proxy services to the outside world.
Docker – Image Layering and Caching

Docker image layers are fundamental components of the Docker architecture, serving as the building blocks for Docker images. Each image layer is a read-only layer that contributes to the final image and represents a distinct instruction from a Dockerfile. On top of a base layer – typically an operating system like Ubuntu – further layers are added: application code, environment settings, and software installations are examples of these layers. In order to keep each layer isolated and immutable while letting them stack and appear as a single file system, Docker employs a union file system.

The efficiency and reusability benefits of layering are substantial. Docker ensures that common layers shared by various images are reused through layer caching, which reduces build time and storage requirements. This layer reuse also makes image distribution more efficient, as only the newly added layers need to be transferred during updates. Furthermore, the layers' immutability ensures that once a layer is created, it never changes, simplifying version control and guaranteeing consistency across various environments.

Components of Docker Image Layers
Every layer in a Docker image represents a set of instructions taken from the Dockerfile. These layers are divided into three groups: base, intermediate, and top layers. Each group has a specific function in the process of creating an image.

Base Layer
The minimal operating system or runtime environment required to support the application is usually found in the base layer, which forms the basis of a Docker image. Most of the time, it is created from an already-existing image, such as node, alpine, or ubuntu. This layer is essential since it establishes the framework within which all subsequent layers operate. To provide a standardized starting point, the base layer frequently contains necessary libraries and dependencies shared by numerous applications. By ensuring that their applications have a dependable and consistent base image, developers can simplify the development and deployment process across various environments.

Intermediate Layer
The layers added on top of the base layer are called intermediate layers. Each intermediate layer corresponds to a single Dockerfile instruction, such as RUN, COPY, or ADD. These layers contain application dependencies, configuration files, and other essential elements that supplement the base layer. Installing software packages, copying source code into the image, or configuring environment variables are a few examples of tasks that could be done in an intermediate layer. Intermediate layers are what allow the application environment to be built up gradually. Since each layer is immutable, adding or modifying one causes the creation of new layers rather than changes to existing ones. Because each layer is immutable, consistent, and reusable across various images, efficiency is increased and redundancy is decreased.

Top Layer
The last layer in the Docker image is the top layer, also known as the application layer. This layer contains the actual code of the application as well as any final configuration required for it to function.
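For instance, the layers that make up an image can be inspected with the docker history command; ubuntu:22.04 is used below purely as an example image.

$ docker pull ubuntu:22.04
$ docker history ubuntu:22.04

The output lists one row per layer, showing the instruction that created it and the size it contributes, which makes it easy to see where an image's bulk comes from.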
The top layer is the result of all the work done by the layers that came before it: the base environment and the incremental adjustments made by the intermediate layers combine into a finished, executable application. The top layer is what differentiates one image from another, since it is unique to the containerized application. The contents of this top layer are what is most directly interacted with at runtime, when the image is executed to create a container.

What are Cache Layers in Docker Images?
Cache layers are an essential part of the Docker image build process, designed to maximize speed by reusing previously built layers whenever possible. This mechanism reduces the time and computational power needed to build Docker images repeatedly and improves efficiency.

When you build a Docker image, Docker executes every instruction in the Dockerfile one after the other. For each instruction, Docker checks whether it has already been executed with the same context. If so, Docker does not need to create a new layer – it can reuse the one that was already created. This procedure is called "layer caching". Because the cache contains all intermediate layers created during previous builds, Docker can skip steps that haven't changed, which accelerates the build process considerably.

How do Cache Layers Work?
Instruction Matching − Docker evaluates each instruction in the Dockerfile and searches for a cached layer that matches it. Whether two layers match is determined by the instruction itself and its context, such as the files included in a COPY instruction or the precise command in a RUN instruction.
Layer Reuse − If Docker finds a match in its cache, it reuses the existing layer rather than building a new one. As a result, Docker avoids repeating the instruction, saving both time and resources.
Cache Invalidation − A cached layer is invalidated when the context of its instruction changes. For instance, if a file used in a COPY instruction is changed and no matching cached layer is found, Docker has to rebuild that layer and all subsequent layers.

Benefits of Cache Layers
Build Speed − The main advantage is shorter build time. By reusing existing layers, Docker can speed up the build process considerably, particularly for large images with numerous layers.
Resource Efficiency − Reusing layers minimizes the amount of data that needs to be processed and stored, and conserves computational resources.
Consistency − By reusing layers that have already been tested and validated, cache layers guarantee consistent builds and lower the risk of introducing new errors during rebuilds.

Cache Layers: Limitations and Considerations
While cache layers provide many benefits, they also have some limitations −
Cache Size − The cache can take up a lot of disk space and can be difficult to manage over time.
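A small sketch of cache-friendly instruction ordering is shown below for a hypothetical Python application; requirements.txt and app.py are placeholder names. Because the dependency list changes less often than the source code, the pip install layer stays cached across most rebuilds, and only the final COPY is invalidated when the code changes.

FROM python:3.12-slim
WORKDIR /app
# Copy only the dependency list first so this layer, and the install
# layer that follows, stay cached until requirements.txt itself changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# Source code changes frequently, so copy it last
COPY . .
CMD ["python", "app.py"]

When the cache itself becomes a disk-space concern, unused build cache can be reclaimed on recent Docker versions with docker builder prune.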
Docker – Building Files

We created our Docker File in the last chapter. It's now time to build the Docker File. The Docker File can be built with the following command −

docker build

Let's learn more about this command.

docker build
This command allows users to build their own Docker images.

Syntax
docker build -t ImageName:TagName dir

Options
-t − is used to give a tag to the image
ImageName − This is the name you want to give to your image.
TagName − This is the tag you want to give to your image.
dir − The directory where the Docker File is present.

Return Value
None

Example
sudo docker build -t myimage:0.1 .

Here, myimage is the name we are giving to the image and 0.1 is the tag we are giving to our image. Since the Docker File is in the present working directory, we used "." at the end of the command to signify the present working directory.

Output
From the output, you will first see that the Ubuntu image is downloaded from Docker Hub, because there is no image available locally on the machine. Finally, when the build is complete, all the necessary commands will have run against the image. You will then see the successfully built message and the ID of the new image. When you run the docker images command, you will be able to see your new image. You can now build containers from your new image.
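For instance, the new image can be verified and a container started from it as follows. The tag myimage:0.1 comes from the example above, while the container name is a placeholder chosen for this sketch.

sudo docker images myimage
sudo docker run -it --name my-first-container myimage:0.1

The first command lists only the images in the myimage repository, confirming the build succeeded; the second starts an interactive container from the freshly built image.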
Docker – Compose

Docker Compose is used to run multiple containers as a single service. For example, suppose you had an application which required NGINX and MySQL; you could create one file which would start both containers as a service, without the need to start each one separately. In this chapter, we will see how to get started with Docker Compose. Then, we will look at how to get a simple service with MySQL and NGINX up and running using Docker Compose.

Docker Compose ─ Installation
The following steps need to be followed to get Docker Compose up and running.

Step 1 − Download the necessary files from GitHub using the following command −

curl -L "https://github.com/docker/compose/releases/download/1.10.0-rc2/docker-compose-$(uname -s)-$(uname -m)" -o /home/demo/docker-compose

The above command downloads the latest version of Docker Compose, which at the time of writing this article is 1.10.0-rc2, and stores it in the directory /home/demo/.

Step 2 − Next, we need to give execute privileges to the downloaded Docker Compose file, using the following command −

chmod +x /home/demo/docker-compose

We can then use the following command to see the Compose version.

Syntax
docker-compose version

Parameters
version − This is used to specify that we want the details of the version of Docker Compose.

Output
The version details of Docker Compose will be displayed.

Example
The following example shows how to get the docker-compose version.

sudo ./docker-compose version

Output
You will then get the following output −

Creating Your First Docker-Compose File
Now let's go ahead and create our first Docker Compose file. All Docker Compose files are YAML files. You can create one using the vim editor. So execute the following command to create the compose file −

sudo vim docker-compose.yml

Let's take a close look at the various details of this file (a minimal sample compose file is sketched at the end of this chapter) −
The database and web keywords are used to define two separate services. One will run our MySQL database and the other will be our NGINX web server.
The image keyword is used to specify the image from Docker Hub for our MySQL and NGINX containers.
For the database, we use the ports keyword to mention the ports that need to be exposed for MySQL. And then we also specify the environment variables required to run MySQL.

Now let's run our Docker Compose file using the following command −

sudo ./docker-compose up

This command will take the docker-compose.yml file in your local directory and start building the containers. Once executed, all the images will start downloading and the containers will start automatically. When you do a docker ps, you can see that the containers are indeed up and running.
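The sample compose file mentioned above could look like the following minimal sketch, written in the legacy version-1 compose format that matches the description of top-level database and web keys. The image tags, the published ports, and the MYSQL_ROOT_PASSWORD value are placeholders chosen only for illustration.

database:
  image: mysql:5.7
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: "password"
web:
  image: nginx:latest
  ports:
    - "8080:80"

Running sudo ./docker-compose up against a file like this starts both containers together, and sudo ./docker-compose down (or stopping with Ctrl+C) tears the whole service down again.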
Docker – Images

What are Docker Images?
Docker images are self-contained templates that are used to build containers. They make use of a layered file system to store data efficiently. Each layer, which contains instructions such as downloading software packages or copying configuration files, represents a particular phase in the image creation process. Only the updated layers need to be rebuilt and delivered, making layering an effective way to share and update images.

A text file known as a Dockerfile forms the basis of a Docker image. The instructions for creating the image layer by layer are contained in this file. In most cases, the file begins with an instruction such as FROM to identify the base image, which is usually a minimal Linux distribution. Commands such as RUN are then used to carry out particular operations within a layer. As a result, the environment inside the container can be managed precisely.

Docker images are read-only templates, so any changes you make to the running program happen inside a container, not to the image itself. This maintains a clear division between the runtime state (container) and the application definition (image). In addition, since new versions can be made with targeted modifications without affecting already-existing containers, image versioning and maintenance are made simpler.

Key Components and Concepts of Docker Images
Here are a few key components that make up Docker images.

Layers
Docker images consist of several layers. Every layer denotes a collection of filesystem modifications. Each Dockerfile instruction adds a layer on top of the previous one while building a Docker image. Layers are unchangeable once they are produced, which makes them immutable. Because of this immutability, Docker can efficiently reuse layers during image builds and deploys, which speeds up build times and uses less disk space.

Base Image
The foundation upon which your customized Docker image is built is a base image. Usually, it has the bare minimum runtime environment and operating system needed to run your application. Base images from CentOS, Ubuntu, Debian, and Alpine Linux are frequently used. Selecting the appropriate base image is crucial for compatibility and for minimizing image size.

Dockerfile
A Dockerfile is a text document with a set of instructions for creating a Docker image. These instructions describe how to select the base image, add files and directories, install dependencies, adjust settings, and define the container's entry point. By specifying the build process in a Dockerfile, you can automate and replicate the image creation process, ensuring consistency across environments.

Image Registry
Docker images can be stored in either public or private registries, such as Azure Container Registry (ACR), Google Container Registry (GCR), Amazon Elastic Container Registry (ECR), and Docker Hub. Registries offer a centralized place for managing, sharing, and distributing Docker images. They also provide image scanning for security flaws, versioning, and access control.

Tagging
A repository name and a tag combine to form a unique identifier for a Docker image. Tags are used to distinguish between various image versions. When no tag is given, Docker uses the "latest" tag by default. To maintain reproducibility and track image versions, it is recommended to use semantic versioning or other meaningful tags.
Image Pulling and Pushing
The docker pull command can be used to download Docker images from a registry to a local system. Similarly, the docker push command can be used to push images from the local machine to a registry. This enables you to distribute your images to various environments or share them with others.

Layer Caching
For performance optimization, Docker uses layer caching while building images. When you build an image, Docker leverages previously built cached layers if the associated Dockerfile instructions haven't changed. This drastically cuts down on build times, particularly for big projects with intricate dependencies.

Useful Docker Image Commands
Now that we have discussed what Docker images are, let's have a look at the basic and most useful Docker image commands that you will use very frequently.

Listing all Docker Images
To see a list of all the Docker images that are present on your local computer, you can use the "docker images" command. It gives important details like the size, creation time, image ID, tag, and repository name. Using this command, you can quickly see which images are available to run containers on your system.

$ docker images

If you want to display just the image IDs, you can use the "--quiet" flag.

$ docker image ls -q

Pulling Docker Images
To download Docker images to your local computer from a registry, use the docker pull command. Docker will automatically pull the "latest" version of the image if no tag is specified. This command is necessary for fetching images from public or private registries before launching containers based on them.

$ docker pull ubuntu:20.04

Building Docker Images from a Dockerfile
The docker build command creates a Docker image from a Dockerfile located at the provided path. During the build process, Docker follows the instructions in the Dockerfile to generate layers and assemble the final image. This command is essential for creating customized images that are tailored to your application's specific needs.

Dockerfile
# Use a base image from Docker Hub
FROM alpine:3.14
# Set the working directory inside the container
WORKDIR /app
# Copy the application files from the host machine to the container
COPY . .
# Expose a port for the application (optional)
EXPOSE 8080
# Define the command to run when the container starts
CMD ["./myapp"]

For the above Dockerfile, you can build an image using the below command.

$ docker build -t myapp:latest .

Tagging Docker Images
The docker tag command creates a new tag for an existing Docker image. Tags allow you to label and reference multiple versions of an image. This command is frequently used before uploading an image to a registry under a different name or tag.

$ docker tag myapp:latest myrepo/myapp:v1.0

Pushing Docker Images
The docker push command transfers a Docker image from your local machine to a registry, such as Docker Hub.
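For example, pushing the image tagged above might look like the following; myrepo stands in for your own Docker Hub username or repository name, so treat it as a placeholder.

$ docker login
$ docker push myrepo/myapp:v1.0

The docker login step authenticates you against the registry (Docker Hub by default), after which the push uploads only the layers the registry does not already have.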