Docker – Continuous Integration

Docker integrates with many Continuous Integration tools, including the popular CI tool Jenkins. Within Jenkins, plugins are available that can be used to work with containers. So let's quickly look at the Docker plugin available for Jenkins, and go step by step to see what it offers for Docker containers.

Step 1 − Go to your Jenkins dashboard and click Manage Jenkins.
Step 2 − Go to Manage Plugins.
Step 3 − Search for Docker plugins. Choose the Docker plugin and click the Install without restart button.
Step 4 − Once the installation is completed, go to your job in the Jenkins dashboard. In our example, we have a job called Demo.
Step 5 − In the job, when you go to the Build step, you can now see options to start and stop containers.
Step 6 − As a simple example, you can choose the option to stop containers when the build is completed. Then, click the Save button.

Now, just run your job in Jenkins. In the Console Output, you will be able to see that the command to stop all containers has run.
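Under the hood, the plugin's "Stop All containers" build step boils down to plain Docker CLI calls. A rough equivalent that you could also run yourself in an "Execute shell" build step (a sketch — the exact command the plugin issues depends on the plugin version):

```shell
# Stop every running container on the build agent's Docker host.
# $(docker ps -q) expands to the IDs of all running containers,
# so this is a no-op when nothing is running.
running=$(docker ps -q)
if [ -n "$running" ]; then
    docker stop $running
fi
```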
Docker – Working with Containers

In this chapter, we will explore in detail what we can do with containers.

docker top
With this command, you can see the top processes within a container.
Syntax: docker top ContainerID
Options: ContainerID − the ID of the container whose top processes you want to see.
Return Value: the top-level processes running within the container.
Example: sudo docker top 9f215ed0b0d3
The above command shows the top-level processes within the container 9f215ed0b0d3.

docker stop
This command is used to stop a running container.
Syntax: docker stop ContainerID
Options: ContainerID − the ID of the container which needs to be stopped.
Return Value: the ID of the stopped container.
Example: sudo docker stop 9f215ed0b0d3
The above command stops the Docker container 9f215ed0b0d3.

docker rm
This command is used to delete a container.
Syntax: docker rm ContainerID
Options: ContainerID − the ID of the container which needs to be removed.
Return Value: the ID of the removed container.
Example: sudo docker rm 9f215ed0b0d3
The above command removes the Docker container 9f215ed0b0d3.

docker stats
This command provides the statistics of a running container.
Syntax: docker stats ContainerID
Options: ContainerID − the ID of the container for which the stats are needed.
Return Value: the CPU and memory utilization of the container.
Example: sudo docker stats 9f215ed0b0d3
The above command provides the CPU and memory utilization of the container 9f215ed0b0d3.

docker attach
This command is used to attach to a running container.
Syntax: docker attach ContainerID
Options: ContainerID − the ID of the container to attach to.
Return Value: none.
Example: sudo docker attach 07b0b6f434fe
The above command attaches your terminal to the Docker container 07b0b6f434fe, so you can observe the activity of its main process directly.

docker pause
This command is used to pause the processes in a running container.
Syntax: docker pause ContainerID
Options: ContainerID − the ID of the container whose processes you want to pause.
Return Value: the ID of the paused container.
Example: sudo docker pause 07b0b6f434fe
The above command pauses the processes in the running container 07b0b6f434fe.

docker unpause
This command is used to unpause the processes in a running container.
Syntax: docker unpause ContainerID
Options: ContainerID − the ID of the container whose processes you want to unpause.
Return Value: the ID of the running container.
Example: sudo docker unpause 07b0b6f434fe
The above command unpauses the processes in the paused container 07b0b6f434fe.

docker kill
This command is used to kill the processes in a running container.
Syntax: docker kill ContainerID
Options: ContainerID − the ID of the container whose processes you want to kill.
Return Value: the ID of the killed container.
Example: sudo docker kill 07b0b6f434fe
The above command kills the processes in the running container 07b0b6f434fe.

Docker – Container Lifecycle
The following stages describe the entire lifecycle of a Docker container.
Initially, the Docker container is in the created state. The container then goes into the running state when the docker run command is used. The docker kill command is used to kill an existing Docker container, while the docker pause command is used to pause it. The docker stop command is used to stop an existing Docker container, and the docker start command is used to put a container back from the stopped state into the running state.
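The lifecycle described above can be walked through end to end with the commands from this chapter. A sketch (the nginx image and the container name demo are arbitrary choices for illustration):

```shell
docker create --name demo nginx     # created state
docker start demo                   # running state
docker pause demo                   # paused state
docker unpause demo                 # running again
docker stop demo                    # stopped (exited) state
docker start demo                   # back from stopped to running
docker stop demo && docker rm demo  # stop, then delete the container
```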
Docker – Containers

A Docker container is a runtime instance of a Docker image; containers are created by instantiating an image. Docker containers are completely changing software development, deployment, and management. In essence, a Docker container bundles an application along with all of its dependencies into a compact, light package. Containers can operate reliably in a range of computing environments by using virtualization at the operating system level. This encapsulation is accomplished through the use of Docker images. Images are essentially blueprints that contain all the files, libraries, and configurations required to run a particular application. Since containers isolate the application and its dependencies from the underlying system, they offer consistency and predictability across a range of environments. Docker containers function as independent processes with their own filesystem, network interface, and resources, but they are lightweight and efficient because they share the kernel of the host operating system. They rely on key components of the Docker ecosystem to work, including the Docker Engine, which builds, launches, and manages containers, and the Docker Registry, which serves as a repository for Docker images. In this chapter, let's understand how containers work and the important Docker container commands that you will use most frequently.

Key Concepts of Docker Containers
Here are the key concepts and principles that work behind Docker containers.

Containerization
Essentially, containers function based on the concept of containerization, which is packing an application together with all of its dependencies into a single package. This package, referred to as a container image, includes all of the necessary runtime environments, libraries, and other components needed to run the application.

Isolation
Operating system-level virtualization is used by Docker containers to offer application isolation.
With its own filesystem, network interface, and process space, each container operates independently of the host system as a separate process. This isolation keeps containers independent of one another and prevents them from interfering with one another's operations.

Docker Engine
The Docker Engine is the brains behind Docker containers; it builds, launches, and maintains them. The Docker Engine is made up of two parts: the Docker daemon, which operates in the background, and the Docker client, which lets users communicate with the daemon via commands.

Image and Container Lifecycle
The creation of a container image is the first step in the lifecycle of a Docker container. A Dockerfile, which outlines the application's dependencies and configuration, is used to build this image. Once the image has been created, it can be used to instantiate containers, which are running instances of the image. Containers can be started, stopped, paused, and restarted as needed.

Resource Management
Docker containers provide effective resource management because of their shared-kernel architecture and lightweight design. Since containers share the operating system kernel of the host system, overhead is decreased and startup times are accelerated. To ensure maximum performance and scalability, Docker also offers tools for monitoring and controlling resource usage.

Portability
One of the main benefits of Docker containers is their portability. Container images are self-contained units that are easily deployed and distributed across various environments, from development and testing to production. This portability streamlines the deployment process and lowers the possibility of compatibility problems by enabling "build once, run anywhere".

Docker Container Lifecycle
There are five essential phases in the Docker container lifecycle: created, started, paused, exited, and dead.
The lifecycle of a container is represented by its stages, which range from creation and execution to termination and possible recovery. Comprehending these phases is crucial for proficiently overseeing Docker containers and guaranteeing their appropriate operation in a containerized setting. Let's explore the stages of the Docker container lifecycle.

The Created State
The "created" state is the first stage. When a container is created with the docker create command or a comparable API call, it reaches this phase. The container is not yet running when it is in the "created" state, but it does exist as a static entity with all of its configuration settings defined. At this point, Docker reserves the storage volumes and network interfaces that the container needs, but the processes inside the container have not yet begun.

The Started State
The "started" or "running" state is the next stage of the lifecycle. When a container is started with the docker start command or an equivalent API call, it enters this stage. When a container is in the "started" state, its processes are launched and it starts running the service or application specified in its image. While they carry out their assigned tasks, containers in this state actively use CPU, memory, and other system resources.

The Paused State
Throughout their lifecycle, containers may also go into a "paused" state. When a container is paused with the docker pause command, its processes are suspended, thereby stopping its execution. A container that is paused keeps its resource allotments and configuration settings but is not in use. This state helps with resource conservation and debugging by momentarily stopping container execution without completely stopping it.

The Exited State
A container in the "exited" state has finished executing and has left its primary process.
Containers can enter this state when they finish the tasks they are intended to complete or when they run into errors that force them to terminate. A container that has been “exited” stays stopped, keeping its resources and configuration settings but ceasing to run any processes. In this condition, containers can be completely deleted with the docker rm command or restarted with the docker start command. The Dead State A container that is in the “dead” state has either experienced an irreversible error or been abruptly terminated. Critical errors in the containerized application,
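The state a given container is currently in can be read directly from its metadata. A sketch, assuming a container named demo exists on the host:

```shell
# .State.Status reports one of: created, running, paused, exited, dead
docker inspect --format '{{.State.Status}}' demo
```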
Docker – Public Repositories

Public repositories can be used to host Docker images that can be used by everyone else. An example is the images available on Docker Hub. Most images, such as CentOS, Ubuntu, and Jenkins, are publicly available for all. We can also make our own images available by publishing them to a public repository on Docker Hub. For our example, we will use the myimage repository built in the "Building Docker Files" chapter and upload that image to Docker Hub. Let's first review the images on our Docker host to see what we can push to the Docker registry. Here, we have our myimage:0.1 image, which was created as part of the "Building Docker Files" chapter. Let's upload it to the Docker public repository. The following steps explain how you can upload an image to a public repository.

Step 1 − Log in to Docker Hub and create your repository. This is the repository where your image will be stored. Go to https://hub.docker.com/ and log in with your credentials.
Step 2 − Click the "Create Repository" button and create a repository with the name demorep. Make sure that the visibility of the repository is public. Once the repository is created, make a note of the pull command which is attached to it. The pull command for our repository is as follows −

docker pull demousr/demorep

Step 3 − Now go back to the Docker host. Here we need to tag our myimage to the new repository created in Docker Hub. We can do this via the docker tag command. We will learn more about this tag command later in this chapter.
Step 4 − Issue the docker login command to log in to the Docker Hub repository from the command prompt. The docker login command will prompt you for the username and password of your Docker Hub account.
Step 5 − Once the image has been tagged, it's time to push the image to the Docker Hub repository. We can do this via the docker push command.
We will learn more about this command later in this chapter.

docker tag
This method allows one to tag an image to the relevant repository.
Syntax: docker tag imageID Repositoryname
Options:
imageID − the ID of the image which needs to be tagged to the repository.
Repositoryname − the name of the repository to which the image needs to be tagged.
Return Value: none.
Example: sudo docker tag ab0c1d3744dd demousr/demorep:1.0

docker push
This method allows one to push images to Docker Hub.
Syntax: docker push Repositoryname
Options:
Repositoryname − the name of the repository which needs to be pushed to Docker Hub.
Return Value: the long ID of the repository pushed to Docker Hub.
Example: sudo docker push demousr/demorep:1.0

If you go back to the Docker Hub page and open your repository, you will see the tag name in the repository. Now let's try to pull the repository we uploaded back onto our Docker host. First, delete the local images myimage:0.1 and demousr/demorep:1.0 from the Docker host, and then use the docker pull command to pull the repository from Docker Hub. The docker pull command takes our new repository from Docker Hub and places it on our machine.
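Putting the steps of this chapter together, the whole round trip looks roughly like this (a sketch using the image ID and repository names from the example above; your image ID and Docker Hub username will differ):

```shell
docker login                                  # prompts for your Docker Hub credentials
docker tag ab0c1d3744dd demousr/demorep:1.0   # tag the local image for the new repository
docker push demousr/demorep:1.0               # upload it to Docker Hub

# Verify the round trip: remove the local copies, then pull from Docker Hub
docker rmi myimage:0.1 demousr/demorep:1.0
docker pull demousr/demorep:1.0
```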
How to Run Python in a Docker Container?

Python has revolutionized the software development industry because of its simplicity, extensive set of libraries, and versatility. As projects scale and development and deployment environments grow more complex, it becomes very difficult to manage Python dependencies, and significant challenges arise in keeping the runtime consistent across multiple environments. This is where running Python in Docker comes into the picture. Docker is a leading containerization platform that offers a streamlined approach to packaging, distributing, and running applications across different environments. Running Python in Docker comes with a lot of benefits − it enhances portability, dependency management, isolation, and scalability. Docker encapsulates Python applications with their dependencies in lightweight containers, ensuring consistent behavior across development, testing, and production environments.

The major ways to run Python inside Docker containers are −
Use Dockerfiles with official Python Docker base images.
Leverage Docker Compose to define and run multi-container Python Docker applications.
Create a virtual environment within the Docker container to isolate Python dependencies.

In this chapter, let's discuss how to run Python in Docker containers using these approaches, with the help of step-by-step instructions, Docker commands, and examples.

How to run Python inside Docker using Dockerfiles?
Here's a step-by-step process of running Python inside Docker with the help of Dockerfiles.

Step 1: Create a Dockerfile
Start by creating a Dockerfile in the project directory. The Dockerfile should contain the instructions to build the custom Docker image on top of the base Python image. Here's an example Python Dockerfile.
# Use the official Python image as the base image
FROM python:3.9

# Set the working directory within the container
WORKDIR /app

# Copy the requirements.txt file into the container
COPY requirements.txt /app/

# Install Python dependencies listed in requirements.txt
RUN pip install -r requirements.txt

# Copy the rest of the application code into the container
COPY . /app

# Specify the command to run when the container starts
CMD ["python", "app.py"]

Step 2: Define Python Dependencies
You can create a requirements.txt file if your Python application relies on external dependencies. This file should contain a list of all the dependencies, along with their versions, that the Dockerfile will install while building the image.

Flask==2.1.0
requests==2.26.0

Step 3: Build the Docker Image
Next, navigate to the Dockerfile location in the terminal and run the following docker build command to build the Python Docker image.

docker build -t my-python-app .

-t my-python-app − The -t flag tags the Docker image with the name my-python-app.

Step 4: Run the Docker Container
Once you have successfully built the Docker image, you can run a container from that image using the docker run command.

docker run -d -p 5000:5000 my-python-app

-d − This flag detaches the container and runs it in the background.
-p 5000:5000 − The -p flag maps port 5000 on the host machine to port 5000 inside the Docker container. You can adjust the port numbers as per your requirements.
my-python-app − The name of the Docker image to be used for creating the container.

Step 5: Access the Python Application
If your Python application runs a web server, you can open a web browser and navigate to http://localhost:5000 to access it.

How to run Python using Docker Compose?
Next, let's understand how to run Python using Docker Compose.
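The Dockerfile above expects an app.py, which this chapter does not show. Below is a minimal sketch of what such a file could look like. It is a hypothetical example that uses only the Python standard library, so it runs with no extra dependencies; a Flask app (as pinned in requirements.txt) would be wired up the same way, listening on port 5000 to match the -p 5000:5000 mapping.

```python
# app.py — a minimal application for the Dockerfile's CMD ["python", "app.py"].
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET request with a small plain-text body
        body = b"Hello from Python in Docker!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Silence per-request logging to keep container output clean
        pass

def make_server(port=5000):
    # Bind to 0.0.0.0 so the server is reachable through the container's
    # -p 5000:5000 port mapping, not just from inside the container.
    return HTTPServer(("0.0.0.0", port), HelloHandler)

# In the real app.py, finish the file with:
#     make_server().serve_forever()
```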
Docker Compose helps you simplify multi-container Docker application management using a single YAML file. It lets you orchestrate services and streamline development workflows, ensuring consistency across environments.

Step 1: Create Docker Compose Configuration
Start by creating a docker-compose.yml in the project directory. In this file, you have to mention the services and their configurations.

version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"

version: "3" − Specifies the version of the Docker Compose file format.
services − Defines the services to be run by Docker Compose.
web − The name of the service.
build: . − Specifies the build context for the service, indicating that the Dockerfile is located in the current directory.
ports − Maps ports between the host and the container.

Step 2: Create a Dockerfile
Next, create a Dockerfile in the project directory containing the instructions to build the Docker image.

FROM python:3.9
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app
CMD ["python", "app.py"]

Step 3: Define Python Dependencies
Mention your external dependencies in the requirements.txt file.

Flask==2.1.0
requests==2.26.0

Step 4: Build and Run with Docker Compose
The next step is to build and run using Docker Compose. Navigate to the directory containing the docker-compose.yml file and execute the following command to build and run the services defined in the Compose file −

docker-compose up -d

-d − Detaches the containers and runs them in the background.

Step 5: Access the Python Application
You can access your Python application's web server by opening a web browser and navigating to http://localhost:5000.
Step 6: Stopping the Services
If you want to stop the services defined in the docker-compose.yml file, you can run the following command −

docker-compose down

This command stops and removes the containers, their networks, and the volumes associated with the services.

How to run Python in a virtual environment within Docker?
Next, if you want to run Python in a virtual environment within Docker, you can follow the steps below. Virtual environments
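As a preview, such a virtual-environment setup can be sketched in a Dockerfile like the one below (the /opt/venv path is a hypothetical choice; the key idea is creating the environment with python -m venv and putting its bin directory first on PATH so that python and pip resolve inside it):

```dockerfile
FROM python:3.9
WORKDIR /app

# Create an isolated virtual environment inside the image
RUN python -m venv /opt/venv
# Putting the venv first on PATH makes its python and pip the defaults
ENV PATH="/opt/venv/bin:$PATH"

COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app
CMD ["python", "app.py"]
```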
Docker – Setting Node.js

Node.js is a JavaScript framework that is used for developing server-side applications. It is an open-source framework that is developed to run on a variety of operating systems. Since Node.js is a popular framework for development, Docker has also ensured it has support for Node.js applications. We will now see the various steps for getting a Docker container for Node.js up and running.

Step 1 − The first step is to pull the image from Docker Hub. When you log in to Docker Hub, you will be able to search for and see the image for Node.js. Just type node in the search box and click on the node (official) link which comes up in the search results.
Step 2 − You will see the docker pull command for node in the details of the repository on Docker Hub.
Step 3 − On the Docker host, use the docker pull command to download the latest node image from Docker Hub. Once the pull is complete, we can proceed with the next step.
Step 4 − On the Docker host, use the vim editor and create one Node.js example file. In this file, we will add a simple command to display "Hello World" on the command prompt. In the Node.js file, add the following statement −

console.log('Hello World');

This will output the "Hello World" phrase when we run it through Node.js. Ensure that you save the file and then proceed to the next step.
Step 5 − To run our Node.js script using the node Docker container, we need to execute the following statement −

sudo docker run -it --rm --name=HelloWorld -v "$PWD":/usr/src/app -w /usr/src/app node node HelloWorld.js

The following points need to be noted about the above command −
The --rm option is used to remove the container after it is run.
We are giving the container a name, "HelloWorld".
The -v option maps the volume /usr/src/app in the container to our current working directory.
This is done so that the node container will pick up our HelloWorld.js script, which is present in our working directory on the Docker host.
The -w option is used to specify the working directory used by Node.js.
The first node option specifies the node image to run.
The second node option specifies the node command to run inside the container.
And finally, we mention the name of our script.

From the output, we can clearly see that the node container ran and executed the HelloWorld.js script.
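Steps 4 and 5 can be condensed into the following session (a sketch; the file name and container name follow the chapter's example, and the docker command requires a running Docker daemon):

```shell
# Create the one-line Node.js script in the current directory
echo "console.log('Hello World');" > HelloWorld.js

# Run it with the official node image; --rm removes the container afterwards
docker run -it --rm --name=HelloWorld \
    -v "$PWD":/usr/src/app -w /usr/src/app \
    node node HelloWorld.js
```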
Docker – Managing Ports

By design, Docker containers are isolated: they keep their internal ports to themselves and do not respond to external traffic. Ports on the Docker host can be configured while creating a container using the -p or --publish flag, which publishes a container port on the host. This mapping makes applications running within containers accessible, as they can then receive traffic from outside sources. Multiple port mappings can be enabled for one container, which caters to scenarios where various services are running within the same container. Additionally, Docker Compose abstracts the port-mapping complexity for multi-container applications. With a docker-compose.yml file defining all services and their port mappings, Docker Compose can create and wire the containers, assigning ports in a way that avoids conflicts and makes communication between the containers of an application stress-free. The ability to avoid conflicts and make communication seamless makes effective port management a valuable tool for enhancing the workflow from development to deployment in complex applications. In this chapter, let's learn about managing Docker ports in detail.

EXPOSE vs. PUBLISH: Understanding the Differences
Both EXPOSE and PUBLISH (or -p) deal with ports in Docker, but they are two different things −

EXPOSE
EXPOSE acts as documentation regarding which ports a containerized application intends to use for communication. It is a directive in a Dockerfile that lets anyone building or running the container know what services it can potentially offer. But remember that EXPOSE alone does not make those container ports accessible outside the container; the directive acts more or less like a note for developers or system administrators.
PUBLISH
This is the actual port mapping. When you publish a port, that is, when you include the -p flag in docker run or the ports section in docker-compose.yml, you create an association between a port in the Docker container and a port on the Docker host. That is what enables external traffic to reach an application running inside a container, i.e., where the "intention" that you EXPOSE is made real.

How to Expose a Port in Docker using PUBLISH?
Docker provides several ways to do this, but the most straightforward and widely known is using the -p flag when running a container. Below is an example −

Basic Syntax
The basic syntax for publishing a port when running a Docker container is −

$ docker run -p <host_port>:<container_port> <image_name>

<host_port> − The port number on the Docker host where you want to expose the application.
<container_port> − The port number in the container on which your application listens for traffic.
<image_name> − The name of the Docker image you want to run.

Example: Publish a Web Server Port
For example, suppose you have an application configured to run a web server on port 80 in the container. You can map this to port 8080 on your local machine by doing −

$ docker run -p 8080:80 <your_web_server_image>

Now, you can open http://localhost:8080 in your favorite web browser and see your application being served!

Publish Multiple Ports in Docker
If your application requires multiple ports to be open, you can simply repeat the -p flag.

$ docker run -p 8080:80 -p 4433:443 <your_app_image>

This maps host port 8080 to container port 80 (for HTTP) and host port 4433 to container port 443 (for HTTPS).

Publish Ports Using Docker Compose
It's pretty simple to maintain port mappings with Docker Compose for multi-container applications.
You do this inside your docker-compose.yml file, under each service, in the ports section −

services:
  web:
    image: <your_web_server_image>
    ports:
      - "8080:80"
  db:
    image: <your_database_image>
    # ... other configurations

Key Considerations
Port Conflicts − Ensure that the host port you select is not already being used by another application or service on your system.
Firewall − If Docker runs on a remote server, you may need to configure your firewall to allow traffic to the published ports.
Security − Every published port widens the attack surface of a container and gives attackers a potential way in, so publish only the ports you need and consider using reverse proxies or other security measures to protect your containers.

How to Expose a Port in a Dockerfile?
While the EXPOSE instruction in a Dockerfile does not publish the port, it provides information about the port that the container is expected to listen on at runtime. In practice, it documents the ports used by your Docker image so that users know which ports they may want to publish when running the container. Here's how to define it in your Dockerfile −

The EXPOSE Instruction
The syntax is simple −

EXPOSE <port> [<port>/<protocol>]

<port> − The port that you wish to expose.
<protocol> − Optional, with a default of TCP. May be tcp or udp.

Example: Exposing a Web Server Port
In a Dockerfile for a web server image, you would have −

# ... other Dockerfile instructions
EXPOSE 80

This informs anyone looking at your image that the application inside most probably listens for incoming connections on port 80, the standard HTTP port.

Exposing multiple ports and protocols
You can have more than one EXPOSE instruction in your Dockerfile −

EXPOSE 80
EXPOSE 443/tcp
EXPOSE 443/udp

This would mean your application uses TCP port 80 by default and TCP/UDP port 443.

Key Points to Note
EXPOSE is not necessary; however, it is good practice to document your container's network usage. It doesn't publish
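Once a container is running, the mappings that were actually published can be inspected from the CLI. A sketch, assuming a web server container started with -p 8080:80 (the nginx image and the name web are arbitrary choices for illustration):

```shell
docker run -d --name web -p 8080:80 nginx
docker port web        # lists all mappings, e.g. 80/tcp -> 0.0.0.0:8080
docker port web 80     # queries the mapping for a single container port
docker rm -f web       # clean up the example container
```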
Docker – Containers and Shells

By default, when you launch a container, you also use a shell command while launching it, as shown below. This is what we have seen in the earlier chapters when we were working with containers. We issued the following command −

sudo docker run -it centos /bin/bash

We used this command to create a new container and then used the Ctrl+P+Q key sequence to exit out of the container. It ensures that the container still exists even after we exit from it, which we can verify with the docker ps command. If we had exited the container directly, the container itself would have stopped. Now there is an easier way to attach to containers and exit them cleanly without destroying them. One way of achieving this is by using the nsenter command.

Before we run the nsenter command, you need to first install the nsenter image. It can be done by using the following command −

docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter

Before we use the nsenter command, we need to get the process ID of the container, because this is required by nsenter. We can get the process ID via the docker inspect command, filtering it for the Pid. We first use the docker ps command to see the running containers; here there is one running container with the ID ef42a4c5e663. We then use the docker inspect command to inspect the configuration of this container and the grep command to filter out the process ID. From the output, we can see that the process ID is 2978. Now that we have the process ID, we can proceed and use the nsenter command to attach to the Docker container.

nsenter
This method allows one to attach to a container without destroying it on exit.
Syntax
nsenter -m -u -n -p -i -t targetPID command

Options
-m is used to enter the mount namespace.
-u is used to enter the UTS namespace.
-n is used to enter the network namespace.
-p is used to enter the process (PID) namespace.
-i is used to enter the IPC namespace.
-t is used to specify the target process ID on the host.
targetPID − The process ID of the container's main process, obtained earlier via docker inspect.
command − The command to run within the container.

Return Value
None

Example
sudo nsenter -m -u -n -p -i -t 2978 /bin/bash

Output
From the output, we can observe the following points −
The prompt changes to the bash shell directly when we issue the nsenter command.
We then issue the exit command. Normally, exiting a container's shell directly would stop the container; but because we entered via the nsenter command, the container is still up and running after we exit.
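The two steps above — finding the process ID and entering the container's namespaces — can be combined into a short session (a sketch; the container ID is the one from this chapter's example, and --format is a cleaner alternative to the grep filtering described above):

```shell
# Read the PID of the container's main process from its metadata
PID=$(docker inspect --format '{{.State.Pid}}' ef42a4c5e663)

# Enter the container's mount, UTS, network, PID, and IPC namespaces
sudo nsenter -m -u -n -p -i -t "$PID" /bin/bash
```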
Docker – Container and Hosts

The good thing about the Docker Engine is that it is designed to work on various operating systems. We have already seen the installation on Windows and all the Docker commands on Linux systems. Now let's run the same Docker commands on the Windows OS.

Docker Images
Let's run the docker images command on the Windows host. From the output, we can see that we have two images − ubuntu and hello-world.

Running a Container
Now let's run a container on the Windows Docker host. We can see that the Ubuntu container runs on a Windows host just as it does on Linux.

Listing All Containers
Let's list all the containers on the Windows host.

Stopping a Container
Let's now stop a running container on the Windows host.

So you can see that the Docker Engine is consistent across different Docker hosts: it works on Windows the same way it works on Linux.
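The screenshots referenced above correspond to a command sequence like the following, which is identical on Windows and Linux (replace the container ID placeholder with a real ID reported by docker ps):

```shell
docker images                     # list local images (ubuntu, hello-world)
docker run -it ubuntu /bin/bash   # run an Ubuntu container interactively
docker ps -a                      # list all containers, running or stopped
docker stop <ContainerID>         # stop a running container by its ID
```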
Docker – Container Linking

Container Linking allows multiple containers to be linked with each other. It is a better option than exposing ports. Let's go step by step and learn how it works.

Step 1 − Download the Jenkins image, if it is not already present, using the docker pull command.
Step 2 − Once the image is available, run the container, but this time you can assign a name to the container using the --name option. This will be our source container.
Step 3 − Next, launch the destination container, this time linking it with the source container. For the destination container, we will use the standard Ubuntu image. When you run docker ps, you will see both containers running.
Step 4 − Now, attach to the receiving container and run the env command. You will notice new environment variables for linking with the source container.
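The four steps can be sketched as the following session (the container names source and dest are arbitrary; note that --link is a legacy Docker feature, and user-defined networks are now the preferred way to connect containers):

```shell
docker pull jenkins                             # Step 1: get the image
docker run -d --name=source jenkins             # Step 2: named source container
docker run -it --name=dest --link source:jenkins ubuntu /bin/bash   # Step 3

# Step 4: inside the receiving container, the link injects environment
# variables derived from the alias, such as JENKINS_PORT
env | grep JENKINS
```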