Docker – Commands

Docker has a host of instruction commands. These are the commands that are put in the Dockerfile. Let's look at the ones which are available.

CMD Instruction

This instruction is used to execute a command at runtime when the container is launched.

Syntax
   CMD command param1

Options
   command − This is the command to run when the container is launched.
   param1 − This is the parameter passed to the command.

Return Value
The command will execute accordingly.

Example
In our example, we will enter a simple Hello World echo in our Docker File, create an image, and launch a container from it.

Step 1 − Build the Docker File with the following instructions −

   FROM ubuntu
   MAINTAINER [email protected]
   CMD ["echo", "hello world"]

Here, CMD is just used to print hello world.

Step 2 − Build the image using the Docker build command.

Step 3 − Run a container from the image.

ENTRYPOINT

This instruction can also be used to execute commands at runtime for the container, but it is more flexible than CMD: arguments passed on the docker run command line are appended to the ENTRYPOINT instead of replacing it.

Syntax
   ENTRYPOINT command param1

Options
   command − This is the command to run when the container is launched.
   param1 − This is the parameter passed to the command.

Return Value
The command will execute accordingly.

Example
Let's take a look at an example to understand more about ENTRYPOINT. In our example, we will enter a simple echo command in our Docker File, create an image, and launch a container from it.

Step 1 − Build the Docker File with the following instructions −

   FROM ubuntu
   MAINTAINER [email protected]
   ENTRYPOINT ["echo"]

Step 2 − Build the image using the Docker build command.

Step 3 − Run a container from the image.

ENV

This instruction is used to set environment variables in the container.

Syntax
   ENV key value

Options
   key − This is the key for the environment variable.
   value − This is the value for the environment variable.

Return Value
The command will execute accordingly.
Example
In our example, we will set two environment variables in our Docker File, create an image, and launch a container from it.

Step 1 − Build the Docker File with the following instructions −

   FROM ubuntu
   MAINTAINER [email protected]
   ENV var1=Tutorial var2=point

Step 2 − Build the image using the Docker build command.

Step 3 − Run a container from the image.

Step 4 − Finally, execute the env command inside the container to see the environment variables.

WORKDIR

This instruction is used to set the working directory of the container.

Syntax
   WORKDIR dirname

Options
   dirname − The new working directory. If the directory does not exist, it will be created.

Return Value
The command will execute accordingly.

Example
In our example, we will set the working directory in our Docker File, create an image, and launch a container from it.

Step 1 − Build the Docker File with the following instructions −

   FROM ubuntu
   MAINTAINER [email protected]
   WORKDIR /newtemp
   CMD pwd

Step 2 − Build the image using the Docker build command.

Step 3 − Run a container from the image.
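The instructions above can be combined in a single Docker File. The sketch below is illustrative (not one of the examples above): ENTRYPOINT fixes the executable, while CMD supplies a default argument that can be overridden on the docker run command line.

```dockerfile
# Sketch: combining ENV, WORKDIR, ENTRYPOINT and CMD in one Docker File
FROM ubuntu

# Environment variables, available to later instructions and at runtime
ENV var1=Tutorial var2=point

# Working directory; created automatically if it does not exist
WORKDIR /newtemp

# ENTRYPOINT fixes the executable; CMD provides its default argument
ENTRYPOINT ["echo"]
CMD ["hello world"]
```

Running a container from this image with no arguments prints hello world, while passing an argument (for example, docker run <image> goodbye) overrides only the CMD part, so echo goodbye runs instead.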

Docker Setting – Redis

How to Setup and Run Redis in Docker?

Redis is an open-source, in-memory data structure store that is widely known for its excellent performance in handling various data types such as strings, hashes, lists, sets, etc. Redis was originally developed as a caching solution, but it has evolved into a powerful tool for real-time analytics, message brokering, session management, and queuing systems. However, deploying and managing Redis instances can be difficult, especially in environments concerned with scalability, consistency, and resource utilization. This is where Docker's lightweight containerization capabilities come into the picture. When you run Redis in Docker containers, you can consistently deploy, scale, and manage Redis instances across multiple environments.

Here are the major ways to set up and run Redis inside Docker containers −

Pulling the Redis Docker base image from the official Docker Hub repository and customizing Redis container settings via a Dockerfile.
Creating a Docker Compose file to define and manage the Redis container configuration.
Utilizing Kubernetes for orchestrating Redis containers in a clustered environment.

In this chapter, let's discuss how to set up and run Redis inside Docker containers using these approaches, in a detailed, step-by-step manner with examples and Docker commands.

How to Setup and Run Redis in Docker using a Dockerfile?

Here's a step-by-step guide on how to set up and run Redis using a Dockerfile −

Step 1: Create a Dockerfile

Start by creating a `Dockerfile` in your project directory. This file will contain the instructions and commands to build the Docker image with Redis pre-installed.

# Use the official Redis image as the base image
FROM redis:latest

# Set metadata for the container
LABEL maintainer="Your Name <[email protected]>"

# Expose Redis default port
EXPOSE 6379

Explanation

The `FROM` instruction specifies the base image that we will use.
In this case, we will use the latest version of the official Redis image from Docker Hub. The `LABEL` instruction adds metadata to the image. Here, we have added the maintainer's name and email. The `EXPOSE` instruction exposes Redis's default port `6379`. This allows it to accept connections from other containers or the host machine.

Step 2: Build the Docker Image

Navigate to the directory where you have created your Dockerfile and run the docker build command below to create the Docker image.

docker build -t my-redis-image .

Explanation

`docker build` is the command used to build a Docker image.
`-t my-redis-image` adds a tag to the image for easy reference.
`.` specifies the build context. It indicates that the `Dockerfile` is located in the current directory.

Step 3: Run the Redis Container

Now that you have your Docker image built, you can run a container from that image using the docker run command.

docker run --name my-redis-container -d my-redis-image

`docker run` is the command that we have used to run a Docker container.
`--name my-redis-container` assigns a name to the running container for easy identification.
The `-d` flag runs the container in detached mode, meaning it runs in the background.
Then we specify the name `my-redis-image` of the Docker image to be used for creating the container.

Step 4: Verify the Container

If you want to ensure that the Redis container is running successfully, you can use the below command to list all running containers −

docker ps

This will display information about the running Redis container, including its container ID, name, status, and ports.

Step 5: Access Redis

After verifying that the Redis container is running, you can now access it using Redis client tools like RedisInsight or connect to it from other applications and services. By default, Redis will be accessible on port `6379`, which we exposed in the Dockerfile.
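If you need to change Redis settings, the Dockerfile above can be extended to bake in a configuration file. This is a sketch: the file name `redis.conf` is an assumption, standing in for any valid Redis configuration file present in your build context.

```dockerfile
# Sketch: Redis image with a custom configuration baked in
FROM redis:latest

LABEL maintainer="Your Name <[email protected]>"

# Copy a custom configuration file from the build context
# (redis.conf is assumed to exist next to the Dockerfile)
COPY redis.conf /usr/local/etc/redis/redis.conf

EXPOSE 6379

# Start the server with the custom configuration instead of the defaults
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
```

Build and run it exactly as in Steps 2 and 3 above; the only difference is that the server now reads your settings at startup.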
How to Run Redis in Docker using Docker Compose?

Docker Compose simplifies the process of defining and managing multi-container Docker applications. Here's how to run Redis in Docker using Docker Compose −

Step 1: Create a Docker Compose File

You can start by creating a new file named `docker-compose.yml` in your project directory.

version: '3.8'
services:
  redis:
    image: redis:latest
    container_name: my-redis-container
    ports:
      - "6379:6379"

In this file, `version: '3.8'` specifies the version of the Docker Compose syntax being used. Then, under `services`, we have defined the Redis service:

`image: redis:latest` specifies the Redis image to be pulled and used from Docker Hub.
Next, we have defined the container name using the property `container_name: my-redis-container`.
Finally, the port mapping has been specified using `ports`, which maps port `6379` on the host machine to port `6379` in the container, allowing access to Redis.

Step 2: Run Docker Compose

Next, you can run the Docker Compose command to start the container which has Redis installed in it. Navigate to the directory where you have created the compose file and run the below command.

docker-compose up -d

Explanation

The `docker-compose up` command creates and starts Docker containers using the configurations defined in the `docker-compose.yml` file.
The `-d` flag runs the containers in detached mode, meaning they run in the background.

Step 3: Verify and Access Redis from the Container

You can list all the running containers using the below command to verify whether the Redis Docker container is running.

docker ps

With the Redis container running, you can now access it using Redis client tools like RedisInsight or connect to it from other applications. By default, Redis is accessible on port `6379`.

Step 4: Stop and Remove Containers

You can use the docker-compose down command to stop and remove the containers defined in the Compose file.
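In practice you will often want Redis data to survive container restarts. The sketch below extends the Compose file above with a named volume; the volume name `redis-data` is an assumption, not part of the original example.

```yaml
version: '3.8'
services:
  redis:
    image: redis:latest
    container_name: my-redis-container
    ports:
      - "6379:6379"
    volumes:
      # Persist the Redis data directory across container restarts
      - redis-data:/data
volumes:
  redis-data:
```

With this in place, `docker-compose down` removes the container but keeps the `redis-data` volume, so a subsequent `docker-compose up -d` starts Redis with its previous data intact.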

Docker – Discussion

Discuss Docker

This tutorial explains the various aspects of the Docker Container service. Starting with the basics of Docker, which focus on the installation and configuration of Docker, it gradually moves on to advanced topics such as Networking and Registries. The last few chapters of this tutorial cover the development aspects of Docker and how you can get up and running on development environments using Docker Containers.

Docker – Logging

Docker logs are vital in maintaining and troubleshooting applications running within containers. They provide real-time insight into container behavior and performance, helping to identify problems and optimize performance. They capture a wide range of information, covering errors, warnings, and informative messages produced by an application and the Docker engine. This data is invaluable for debugging: with it, developers can trace the events that led up to a specific issue, understand the context, and apply fixes.

Moreover, Docker logs have great significance in monitoring and auditing. Logs should be collected and continuously analyzed to ensure the applications run smoothly and securely. This helps detect anomalies, unauthorized access, and other security threats, so that potential breaches can be responded to promptly. Docker logs provide the visibility and control essential for proper stewardship of systems, maintaining their resilience and robustness in production environments where uptime and reliability are of critical importance. In this chapter, let's learn more about Docker logs and logging drivers.

How is Docker Logging Different?

Docker logs differ from traditional logs due to the containerized nature of Docker. Let's have a look at the basic differences.

Centralization and Aggregation

Traditional logging often involves collecting logs from individual servers or applications, which becomes cumbersome as the number of servers and applications rises. In contrast, Docker logging often involves centralizing and aggregating logs from multiple containers running across various hosts. This centralized approach simplifies log management and makes it easy for logs to be watched or analyzed from a single point, even in a complex, distributed environment.
Log Drivers and Plugins

Docker provides a broad set of log drivers and plugins for tailoring how logs are collected, stored, and processed. These log drivers allow sending logs to different destinations − JSON files, Syslog, Fluentd, AWS CloudWatch, and others. This provides numerous options to tune the logging setup to specific needs and preferences, and to integrate seamlessly with existing logging and monitoring tools.

Ephemeral Nature of Containers

Containers are ephemeral; they are meant to have a short life span and to be paused, stopped, or removed. This transient characteristic poses a challenge for traditional logging methods, which rely primarily on persistent storage native to the host system. Docker solves this problem by storing logs outside the container life cycle. Persistence is essential for maintaining a complete chronology of events, so that critical diagnostic information can be accessed even after containers are removed or replaced.

These differences drive home one thing: Docker logging is designed for the dynamic and scalable nature of containerized environments. It is reflected in solutions built around centralized log management that keep log data available.

Docker Logging Strategies and Best Practices

Active logging is an essential process in managing and supporting your Dockerized applications. Log entries give incredible insight into the behavior of your applications, their performance, and their issues, allowing proactive management and quick troubleshooting. Docker provides several ways to manage logging, each with its benefits and suitable use cases. Let's discuss each of them one by one.

Logging Through the Application

The simplest way to log from Dockerized applications is from the application itself. This can be done by setting it up so that it logs to standard output (stdout) and standard error (stderr).
Docker collects these outputs, so you can retrieve the logs easily using the docker logs command.

Advantages of Logging through the Application

Ease of Implementation − It is easy to carry out and requires no additional configuration.
Portability − Logs can easily be accessed natively by means of Docker's logging.
Compatibility − Works well with any app that can be configured to log to stdout and stderr.

Best Practices

Structured Logging − Use a structured format, such as JSON logs, for better parsing and analysis.
Log Rotation − Use log rotation in the application to limit log file bloat.
Log Levels − Set up appropriate log levels (e.g., debug, info, warning, error) to control the verbosity of the logs.

Data Volumes Logging

Another approach is to use Docker data volumes to store logs. If you attach a volume to the directory inside the container that logs are written to, you can be confident that the logs will withstand the removal or restarting of the container.

Advantages of Data Volumes Logging

Persistence − The logs do not get lost when the container is destroyed and recreated.
Separation of Concerns − Keeps log storage separate from the containerized application.
Flexibility − External log management tools can access and process logs directly from the volume.

Best Practices

Volume Management − Monitor and manage log volume size to ensure disk space is not a problem.
Backup and Retention − Implement your organization's logging backup and retention policies.
Access Controls − Protect the log volume from unauthorized access.

Logging with the Docker Logging Driver

Docker includes numerous built-in logging drivers, which give flexible options for sending container logs to various destinations: syslog, journald, Fluentd, and AWS CloudWatch, among others. The driver is configurable at both the daemon and container levels.
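At the daemon level, the default logging driver and its options are set in `/etc/docker/daemon.json`. The sketch below uses the built-in `json-file` driver with rotation options; the size and file-count values are illustrative, not recommendations from the original text.

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

After editing this file, the Docker daemon must be restarted for the change to take effect; the same driver can also be chosen for a single container with `docker run --log-driver json-file ...`.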
Advantages of Logging with the Docker Logging Driver

Centralized Logging − Easily collect logs from several hosts and containers.
CI/CD Integration − Integrates with existing logging infrastructure and tools.
Scalability − Supports multiple log storage back-ends and services.

Best Practices

Picking a

Docker Setting – BusyBox

How to Setup BusyBox in Docker Containers?

BusyBox is a single executable binary that incorporates numerous standard Unix utilities. By incorporating BusyBox into Docker images, developers can minimize the overall image size, optimize resource utilization, and expedite container provisioning. Moreover, BusyBox's comprehensive utility set provides containers with essential functionalities that empower them to fulfill diverse roles within complex microservices architectures. By integrating multiple tools into a single binary, BusyBox conserves disk space and also simplifies system administration and management tasks.

Below are the different ways to create and run a Docker container with a BusyBox base image −

Pulling the BusyBox Docker image from Docker Hub and running the container.
Creating custom Docker images with a BusyBox base image using a Dockerfile.
Using Docker Compose to run BusyBox Docker containers.

In this guide, let's look at the step-by-step processes to run BusyBox Docker containers with the help of commands and examples.

How to Pull and Run the BusyBox Docker Image from Docker Hub?

Here's a step-by-step guide on how to pull and run the BusyBox Docker image from Docker Hub −

Step 1: Pull the BusyBox Docker Image

You can start by using the `docker pull` command to fetch the BusyBox image from Docker Hub.

docker pull busybox

When you run this command, it will retrieve the latest version of the BusyBox image by default. If you want to retrieve a specific version, you can specify the tag like this −

docker pull busybox:<version>

Step 2: Run a Container from the BusyBox Image

Once you have pulled the image, you can create and start a container using the `docker run` command.

docker run -itd --name my_busybox busybox

This command will create a new container named "my_busybox" using the BusyBox image. The -itd flags keep the default sh process running in the background, so the container stays up.
Step 3: Verify the Container

To verify that the container is running, you can use the `docker ps` command −

docker ps

This command will list all the running containers on your system. You should see the "my_busybox" container in the list.

Step 4: Access the Container's Shell

You can access the shell of the BusyBox container using the `docker exec` command −

docker exec -it my_busybox sh

This command opens an interactive shell session within the "my_busybox" container. You can now execute commands within the BusyBox environment. To verify the proper installation of BusyBox, you can run the following command in the shell.

busybox --help

Step 5: Stop and Remove the Container

You can stop and remove the container using the following commands once you are done with it.

docker stop my_busybox
docker rm my_busybox

How to Run a BusyBox Container using a Dockerfile?

Below are the step-by-step instructions on how to run a BusyBox container using a Dockerfile −

Step 1: Create a Dockerfile

Create a file called Dockerfile with the following instructions inside it. This will be used to create the Docker image with the BusyBox base image.

# Use BusyBox as the base image
FROM busybox

# Set a default command to run when the container starts
CMD ["sh"]

Step 2: Build the Docker Image

Navigate to the directory where you created the Dockerfile and use the `docker build` command to build the Docker image based on that Dockerfile.

docker build -t my_busybox .

This command will build the Docker image with the tag `my_busybox`. The dot `.` at the end specifies the current directory as the build context.

Step 3: Run the BusyBox Container

Now that you have built the image, you can create and start a container from it using the following commands −

docker run -itd --name my_busybox_container my_busybox
docker ps
docker exec -it my_busybox_container sh
busybox --help

The docker run command is used to create and run a container from the my_busybox image.
It also provides a name for the container. The docker ps command lists all the active containers running on your system. If you find a container called my_busybox_container, it means your container is running. You can then access the shell of the container by running the docker exec command in interactive mode. Once you have access to the shell, you can verify whether BusyBox is working using the --help flag.

How to Run BusyBox Docker Containers using Docker Compose?

Here are the steps to run BusyBox containers using Docker Compose.

Step 1: Create a Docker Compose File

Create a file named `docker-compose.yml`. Add the following properties to define the services.

version: '3.8'
services:
  busybox:
    image: busybox
    command: sh

Step 2: Run Docker Compose

Navigate to the directory containing the `docker-compose.yml` file in your terminal and execute the following command −

docker-compose up -d

This command starts the BusyBox container in detached mode (`-d`), running in the background.

Step 3: Verify the Container

To ensure that the container is running, you can list the containers and access the shell −

docker ps
docker exec -it <container_id_or_name> sh
busybox --help

If you're done with the container, you can stop and remove it using the following command −

docker-compose down

Conclusion

To sum up, using BusyBox within Docker containers offers an advantage in deployment practices. Its lightweight and robust toolkit allows developers and system administrators to streamline operations, optimize resource utilization, and enhance overall efficiency within containerized environments.

Frequently Asked Questions

Q1. Can I use BusyBox in Docker for production environments?

BusyBox is a lightweight and effective solution for containerized settings; however, depending on your use cases and needs, it may not be suitable for production. BusyBox might

Docker – Networking

Docker takes care of the networking aspects so that the containers can communicate with other containers and also with the Docker Host. If you do an ifconfig on the Docker Host, you will see the Docker Ethernet adapter. This adapter is created when Docker is installed on the Docker Host. This is a bridge between the Docker Host and the Linux Host. Now let's look at some commands associated with networking in Docker.

Listing All Docker Networks

This command can be used to list all the networks associated with Docker on the host.

Syntax
   docker network ls

Options
None

Return Value
The command will output all the networks on the Docker Host.

Example
   sudo docker network ls

Output
The output of the above command is shown below.

Inspecting a Docker Network

If you want to see more details on a network associated with Docker, you can use the docker network inspect command.

Syntax
   docker network inspect networkname

Options
   networkname − This is the name of the network you need to inspect.

Return Value
The command will output all the details about the network.

Example
   sudo docker network inspect bridge

Output
The output of the above command is shown below −

Now let's run a container and see what happens when we inspect the network again. Let's spin up an Ubuntu container with the following command −

   sudo docker run -it ubuntu:latest /bin/bash

Now if we inspect our network via the following command, you will see that the container is attached to the bridge.

   sudo docker network inspect bridge

Creating Your Own New Network

One can create a network in Docker before launching containers. This can be done with the following command −

Syntax
   docker network create --driver drivername name

Options
   drivername − This is the name of the network driver.
   name − This is the name given to the network.

Return Value
The command will output the long ID for the new network.
Example
   sudo docker network create --driver bridge new_nw

Output
The output of the above command is shown below −

You can now attach the new network when launching a container. So let's spin up an Ubuntu container with the following command −

   sudo docker run -it --network=new_nw ubuntu:latest /bin/bash

And now when you inspect the network via the following command, you will see the container attached to the network.

   sudo docker network inspect new_nw
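The same setup can also be expressed declaratively with Docker Compose. The sketch below defines a user-defined bridge network equivalent to new_nw and attaches two services to it; the service names and the sleep command are illustrative assumptions, not part of the original example.

```yaml
version: '3.8'
services:
  web:
    image: ubuntu:latest
    command: sleep infinity   # keep the container alive for experimentation
    networks:
      - new_nw
  db:
    image: ubuntu:latest
    command: sleep infinity
    networks:
      - new_nw
networks:
  new_nw:
    driver: bridge
```

A useful property of user-defined bridge networks is built-in name resolution: inside the web container, the db container is reachable simply by the hostname db, with no extra configuration.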

Docker – Setting MongoDB

MongoDB is a famous document-oriented database that is used by many modern-day web applications. Since MongoDB is a popular database for development, Docker has also ensured it has support for MongoDB. We will now see the various steps for getting the Docker container for MongoDB up and running.

Step 1 − The first step is to pull the image from Docker Hub. When you log into Docker Hub, you will be able to search for and see the image for Mongo as shown below. Just type in Mongo in the search box and click on the Mongo (official) link which comes up in the search results.

Step 2 − You will see the Docker pull command for Mongo in the details of the repository in Docker Hub.

Step 3 − On the Docker Host, use the Docker pull command as shown above to download the latest Mongo image from Docker Hub.

Step 4 − Now that we have the image for Mongo, let's first run a MongoDB container which will be our instance for MongoDB. For this, we will issue the following command −

   sudo docker run -it -d mongo

The following points can be noted about the above command −

The -it option is used to run the container in interactive mode.
The -d option is used to run the container as a daemon process.
And finally we are creating a container from the Mongo image.

You can then issue the docker ps command to see the running containers.

Take note of the following points −

The name of the container is tender_poitras. This name will be different, since container names keep changing when you spin up a container; just make a note of the container which you have launched. Next, also notice the port number it is running on. It is listening on TCP port 27017.

Step 5 − Now let's spin up another container which will act as our client and will be used to connect to the MongoDB database.
Let's issue the following command for this −

   sudo docker run -it --link=tender_poitras:mongo mongo /bin/bash

The following points can be noted about the above command −

The -it option is used to run the container in interactive mode.
We are linking our new container to the already launched MongoDB server container. Here, you need to mention the name of the already launched container.
We are then specifying that we want to launch the Mongo container as our client and run the /bin/bash shell in our new container.

You will now be in the new container.

Step 6 − Run the env command in the new container to see the details of how to connect to the MongoDB server container.

Step 7 − Now it's time to connect to the MongoDB server from the client container. We can do this via the following command −

   mongo 172.17.0.2:27017

The following points need to be noted about the above command −

The mongo command is the mongo client command that is used to connect to a MongoDB database.
The IP and port number are what you get when you use the env command.

Once you run the command, you will be connected to the MongoDB database. You can then run any MongoDB command in the command prompt. In our example, we are running the following command −

   use demo

This is a MongoDB command which is used to switch to a database named demo. If the database is not available, it will be created. Now you have successfully created a client and server MongoDB container.
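Note that --link is a legacy Docker feature. On modern setups, the same client/server pairing is usually expressed with Docker Compose, where containers on the same network reach each other by service name. The sketch below is an assumption-laden illustration, not the tutorial's own method; service names are arbitrary, and depending on the image version the client binary may be mongo or mongosh.

```yaml
version: '3.8'
services:
  mongo:
    image: mongo
    ports:
      - "27017:27017"
  client:
    image: mongo
    # Keep the client container alive; exec into it and connect with:
    #   mongo mongo:27017   (or: mongosh mongo:27017 on newer images)
    command: sleep infinity
    depends_on:
      - mongo
```

Using the service name mongo as the hostname avoids hard-coding container IPs such as 172.17.0.2, which can change between runs.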

Docker – Working of Kubernetes

In this chapter, we will see how to install Kubernetes via kubeadm. This is a tool which helps in the installation of Kubernetes. Let's go step by step and learn how to install Kubernetes.

Step 1 − Ensure that the Ubuntu server version you are working on is 16.04.

Step 2 − Ensure that you generate an ssh key which can be used for ssh login. You can do this using the following command.

   ssh-keygen

This will generate a key in your home folder as shown below.

Step 3 − Next, depending on the version of Ubuntu you have, you will need to add the relevant site to the docker.list for the apt package manager, so that it will be able to detect the Kubernetes packages from the Kubernetes site and download them accordingly. We can do this using the following commands.

   curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
   echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list

Step 4 − We then issue an apt-get update to ensure all packages are downloaded on the Ubuntu server.

Step 5 − Install the Docker package as detailed in the earlier chapters.

Step 6 − Now it's time to install Kubernetes by installing the following packages −

   apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Step 7 − Once all Kubernetes packages are downloaded, it's time to start the Kubernetes controller using the following command −

   kubeadm init

Once done, you will get a successful message that the master is up and running and nodes can now join the cluster.
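Once kubeadm init reports success, a quick way to exercise the new cluster is to apply a minimal Deployment. This is a hedged sketch: the name nginx-test and the use of the nginx image are illustrative choices, not part of the installation steps above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```

Save this as deployment.yaml, apply it with kubectl apply -f deployment.yaml, and check that the two pods reach the Running state with kubectl get pods.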

Docker – Web Server

Docker – Building a Web Server Docker File

We have already learnt how to use a Docker File to build our own custom images. Now let's see how we can build a web server image which can be used to launch containers. In our example, we are going to use the Apache Web Server on Ubuntu to build our image. Let's follow the steps given below to build our web server Docker File.

Step 1 − The first step is to build our Docker File. Let's use vim and create a Docker File with the following information.

   FROM ubuntu
   RUN apt-get update
   RUN apt-get install -y apache2
   RUN apt-get install -y apache2-utils
   RUN apt-get clean
   EXPOSE 80
   CMD ["apache2ctl", "-D", "FOREGROUND"]

The following points need to be noted about the above statements −

We are first creating our image from the Ubuntu base image.
Next, we are going to use the RUN command to update all the packages on the Ubuntu system.
Next, we use the RUN command to install apache2 on our image.
Next, we use the RUN command to install the necessary apache2 utility packages on our image.
Next, we use the RUN command to clean any unnecessary files from the system.
The EXPOSE command is used to expose port 80 of Apache in the container to the Docker host.
Finally, the CMD command is used to run apache2 in the foreground, which keeps the container running.

Now that the file details have been entered, just save the file.

Step 2 − Run the Docker build command to build the Docker File. It can be done using the following command −

   sudo docker build -t="mywebserver" .

We are tagging our image as mywebserver. Once the image is built, you will get a successful message that the file has been built.

Step 3 − Now that the web server file has been built, it's now time to create a container from the image. We can do this with the Docker run command.

   sudo docker run -d -p 80:80 mywebserver

The following points need to be noted about the above command −

The port number exposed by the container is 80.
Hence with the -p option, we are mapping the container's port 80 to port 80 on our localhost.
The -d option is used to run the container in detached mode. This is so that the container can run in the background.

If you go to port 80 of the Docker host in your web browser, you will now see that Apache is up and running.
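As an aside, the same image can be built with fewer layers by chaining the apt-get steps into a single RUN instruction — a common Dockerfile optimization. This sketch installs exactly the packages used above; only the layer layout changes.

```dockerfile
FROM ubuntu

# One RUN layer: update, install and clean together, so no layer
# carries the stale apt package lists or temporary files
RUN apt-get update && \
    apt-get install -y apache2 apache2-utils && \
    apt-get clean

EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]
```

Combining update and install in one layer also avoids the classic caching pitfall where a cached `RUN apt-get update` layer serves outdated package indexes to a later install step.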

Docker – Useful Resources

The following resources contain additional information on Docker. Please use them to get more in-depth knowledge on this topic.

Useful Video Courses

Docker for the Absolute Beginners − 34 Lectures, 4 hours − Mumshad Mannambeth
Angular Essentials – Admin App, Typescript, Docker, c3.js − 46 Lectures, 4 hours − Antonio Papa
Laravel RESTful APIs – Admin App, Docker, Open API (Swagger) − 36 Lectures, 3.5 hours − Antonio Papa
Docker Course for .Net and Angular Developers − 75 Lectures, 5.5 hours − Rahul Sahay
Apache Airflow 2.0 using Docker, Docker Swarm − 20 Lectures, 1 hour − Ganesh Dhareshwar
Scaling Docker for AWS − 64 Lectures, 6 hours − Stone River ELearning