Docker – Containers and Shells

By default, when you launch a container, you also pass a shell command as part of the launch, as shown below. This is what we did in the earlier chapters when we were working with containers. In the above screenshot, you can observe that we issued the following command −

sudo docker run -it centos /bin/bash

We used this command to create a new container and then used the Ctrl+P+Q key sequence to detach from the container. This ensures that the container keeps running even after we leave it, which we can verify with the docker ps command. If we had exited the container's shell directly, the container itself would have stopped.

There is an easier way to attach to containers and leave them cleanly without stopping them: the nsenter command. Before we run the nsenter command, we first need to install it, which can be done with the following command −

docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter

Before we use the nsenter command, we need to get the process ID of the container, because nsenter requires it. We can get the process ID via the docker inspect command and filter it for the Pid field.

As seen in the above screenshot, we first used the docker ps command to see the running containers. There is one running container with the ID ef42a4c5e663. We then used the docker inspect command to inspect the configuration of this container and the grep command to filter out just the process ID. From the output, we can see that the process ID is 2978.

Now that we have the process ID, we can proceed and use the nsenter command to attach to the Docker container.

nsenter

This command allows one to attach to a container without exiting it.
Syntax
nsenter -m -u -n -p -i -t containerID command

Options
-m − enter the mount namespace of the target process.
-u − enter the UTS namespace of the target process.
-n − enter the network namespace of the target process.
-p − enter the PID namespace of the target process.
-i − enter the IPC namespace of the target process.
-t − the target process ID; here, the process ID of the container.
command − the command to run within the container.

Return Value
None

Example
sudo nsenter -m -u -n -p -i -t 2978 /bin/bash

Output
From the output, we can observe the following points −

The prompt changes to the container's bash shell directly when we issue the nsenter command.
We then issue the exit command. Normally, exiting a shell that is the container's main process would stop the container. But you will notice that after we leave the nsenter session, the container is still up and running.
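The Pid-filtering step above can be scripted. The following sketch parses the Pid field out of a captured docker inspect line (the sample line uses the PID 2978 from the example above, so no Docker daemon is needed to run it); with a live container you would feed in the real docker inspect output and pass the result to nsenter.

```shell
# Sample line as produced by:  sudo docker inspect <container> | grep Pid
# (the value 2978 mirrors the example above and is illustrative).
inspect_line='            "Pid": 2978,'

# Extract just the numeric PID with sed.
pid=$(printf '%s\n' "$inspect_line" | sed -n 's/.*"Pid": *\([0-9]*\).*/\1/p')
echo "Container PID: $pid"

# With a running container, you would now attach (requires root):
#   sudo nsenter -m -u -n -p -i -t "$pid" /bin/bash
```

The same extraction works against the full docker inspect JSON, since the Pid key appears on its own line in the State section.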
Docker – Container & Hosts
The good thing about the Docker engine is that it is designed to work on various operating systems. We have already seen the installation on Windows and run all the Docker commands on Linux systems. Now let's run the various Docker commands on the Windows OS.

Docker Images

Let's run the docker images command on the Windows host. From here, we can see that we have two images − ubuntu and hello-world.

Running a Container

Now let's run a container on the Windows Docker host. We can see that the Ubuntu container runs on a Windows host just as it does on Linux.

Listing All Containers

Let's list all the containers on the Windows host.

Stopping a Container

Let's now stop a running container on the Windows host.

So you can see that the Docker engine is consistent across different Docker hosts: it works on Windows in the same way it works on Linux.
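The session above is the same on either host. A minimal sketch of the sequence (the container name demo-ubuntu is hypothetical; the block just prints a note when no Docker daemon is available):

```shell
# These commands behave identically on Windows and Linux Docker hosts.
if command -v docker >/dev/null 2>&1; then
  docker images                                       # list local images
  docker run -d --name demo-ubuntu ubuntu sleep 60    # run a container
  docker ps -a                                        # list all containers
  docker stop demo-ubuntu                             # stop the running container
else
  echo "docker CLI not available; commands shown for reference only"
fi
ran=yes
```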
Docker – Container Linking
Container Linking allows multiple containers to link with each other. It is a better option than exposing ports. Let's go step by step and learn how it works.

Step 1 − Download the Jenkins image, if it is not already present, using the docker pull command.

Step 2 − Once the image is available, run the container, but this time give the container a name using the --name option. This will be our source container.

Step 3 − Next, launch the destination container, this time linking it with our source container. For our destination container, we will use the standard Ubuntu image. When you do a docker ps, you will see both containers running.

Step 4 − Now, attach to the receiving container and run the env command. You will notice new environment variables for linking with the source container.
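Put together, the steps look like the following sketch (the container names and the link alias are hypothetical, and jenkins/jenkins is assumed as the image name; the block just prints a note when no Docker daemon is available):

```shell
if command -v docker >/dev/null 2>&1; then
  docker pull jenkins/jenkins                           # Step 1: get the image
  docker run -d --name source-jenkins jenkins/jenkins   # Step 2: named source container
  # Step 3: destination container linked to the source under the alias "jenkins".
  # Step 4: env shows the JENKINS_* variables injected by the link.
  docker run --rm --link source-jenkins:jenkins ubuntu env
else
  echo "docker CLI not available; commands shown for reference only"
fi
linked=yes
```

With a live daemon, the env output includes link variables such as JENKINS_NAME and JENKINS_PORT, derived from the alias given to --link.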
Docker Setting – Java
How to Run Java in a Docker Container?

Docker allows you to set up Java environments that you can use in production and development servers alike. It enhances the efficiency and manageability of executing Java programs. Irrespective of the underlying configuration, environment, and dependencies, Docker allows you to run Java programs reliably and consistently across all platforms. It also greatly simplifies the deployment procedure, resolving the classic "it works only on my machine" problem.

You can run Java in Docker containers using the two main approaches discussed below −

You can use the official Java Docker base images provided by Oracle or AdoptOpenJDK.
You can create your own Docker images with custom dependencies tailored specifically for Java applications by using Dockerfiles.

In this chapter, we will explain both of these approaches to create and run Java applications inside a Docker container with the help of step-by-step instructions, commands, and examples.

Benefits of Using Docker Containers to Run Java Applications

There are several benefits associated with running Java applications inside Docker containers. They enhance development and deployment workflows and improve scalability and reliability. Here are some of the key advantages −

Isolation − Ensures independent operation.
Consistency − Maintains uniform runtime environments.
Portability − Facilitates easy migration between environments.
Resource Efficiency − Maximizes resource utilization.
Scalability − Allows seamless adjustment to workload demands.

How to Run Java in Docker Using Java Base Images?

One of the easiest ways to run Java in Docker is by using the existing Java base images provided by trusted organizations like Oracle or AdoptOpenJDK. To do so, here are the steps and commands.

Step 1: Pull the Java Base Image

You can start by pulling the Java base image from Docker Hub using the docker pull command.
For example, if you want to pull the OpenJDK 11 image from AdoptOpenJDK, you can use the following command.

docker pull adoptopenjdk/openjdk11

Step 2: Run the Docker Container

Now that you have the base image pulled locally, you can run a Docker container from it. You can mount the Java application JAR that you want to run into the container. To do so, you can use the following command.

docker run -d --name my-java-container -v /path/to/your/jar:/usr/src/app/my-java-app.jar adoptopenjdk/openjdk11

In this command −

-d − This flag detaches the container so that it runs in the background.
--name my-java-container − Assigns a name to the running container for your reference.
-v /path/to/your/jar:/usr/src/app/my-java-app.jar − Mounts your Java application JAR into the container at /usr/src/app/my-java-app.jar.
adoptopenjdk/openjdk11 − The name of the base image to run.

Step 3: Access the Container's Bash

If you want to check whether Java is installed in the container you created, you can access the container's bash shell. To do so, you can use the following command.

docker exec -it my-java-container /bin/bash

In this command −

docker exec − Executes a command inside a running container.
-it − Allocates a pseudo-TTY and keeps stdin open, which lets you interact with the container's bash shell.
my-java-container − The name of the running container.
/bin/bash − The command to execute inside the container; this opens a bash shell.

Step 4: Check Java Installation

Now that you have access to the container's bash shell, you can check whether Java is installed by running the command below.
java -version

This command displays the version of Java and the JDK installed in the container. If you see the Java version information, Java is installed properly in the container. Once you have verified the installation, you can exit the container's shell by typing "exit" and pressing Enter.

How to Use Dockerfile to Create Custom Java Images?

You can define a specific environment and run configuration for your Java applications by using a Dockerfile to build a custom Docker image. Here are the steps you can follow.

Step 1: Create a Dockerfile

First, create a Dockerfile in the directory of your Java application. In the Dockerfile, we will mention the instructions to build the image layers. Here's a Dockerfile for a Docker image that has Java pre-installed −

# Use a base Java image
FROM adoptopenjdk/openjdk11:latest

# Set the working directory inside the container
WORKDIR /usr/src/app

# Copy the Java application JAR file into the container
COPY target/my-java-app.jar .

# Expose the port on which your Java application runs (if applicable)
EXPOSE 8080

# Command to run the Java application
CMD ["java", "-jar", "my-java-app.jar"]

In this Dockerfile −

FROM − Specifies the base image to be used. In this case, we have used OpenJDK 11 from AdoptOpenJDK.
WORKDIR − Sets the default working directory inside the container where all subsequent commands will run.
COPY − Copies the Java application JAR file from your local directory into the container.
EXPOSE − Exposes a particular port of the Docker container.
CMD − Specifies the command to run when the container starts; here, it launches the application JAR with java -jar.
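Assuming such a Dockerfile, building and running the image might look like the following sketch (the image and container names are hypothetical; the block just prints a note when no Docker daemon is available):

```shell
if command -v docker >/dev/null 2>&1; then
  docker build -t my-java-image .                               # build from the Dockerfile
  docker run -d -p 8080:8080 --name my-java-app my-java-image   # run, publishing port 8080
  docker logs my-java-app                                       # check the application output
else
  echo "docker CLI not available; commands shown for reference only"
fi
built=yes
```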
Docker – Cloud
The Docker Cloud is a service provided by Docker in which you can carry out the following operations −

Nodes − Connect the Docker Cloud to your existing cloud providers such as Azure and AWS to spin up containers on those environments.
Cloud Repository − Provides a place where you can store your own repositories.
Continuous Integration − Connect with GitHub and build a continuous integration pipeline.
Application Deployment − Deploy and scale infrastructure and containers.
Continuous Deployment − Automate deployments.

Getting started

You can go to the following link to get started with Docker Cloud − https://cloud.docker.com/

Once logged in, you will be provided with the following basic interface −

Connecting to the Cloud Provider

The first step is to connect to an existing cloud provider. The following steps show how to connect with the Amazon cloud provider.

Step 1 − First, ensure that you have the right AWS keys. These can be obtained from the AWS console. Log into your AWS account using the following link − https://aws.amazon.com/console/

Step 2 − Once logged in, go to the Security Credentials section. Make a note of the access keys, which will be used from Docker Hub.

Step 3 − Next, you need to create a policy in AWS that will allow Docker to view EC2 instances. Go to the profiles section in AWS and click the Create Policy button.

Step 4 − Click 'Create Your Own Policy', give the policy name as dockercloudpolicy, and enter the policy definition shown below.

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Action": [
            "ec2:*",
            "iam:ListInstanceProfiles"
         ],
         "Effect": "Allow",
         "Resource": "*"
      }
   ]
}

Next, click the Create Policy button.

Step 5 − Next, you need to create a role which will be used by Docker to spin up nodes on AWS. For this, go to the Roles section in AWS and click the Create New Role option.

Step 6 − Give the name for the role as dockercloud-role.
Step 7 − On the next screen, go to 'Role for Cross Account Access' and select "Provide access between your account and a 3rd party AWS account".

Step 8 − On the next screen, enter the following details −

In the Account ID field, enter the ID for the Docker Cloud service: 689684103426.
In the External ID field, enter your Docker Cloud username.

Step 9 − Then, click the Next Step button and, on the next screen, attach the policy created in the earlier step.

Step 10 − Finally, when the role is created, make sure to copy the role ARN that is shown.

arn:aws:iam::085363624145:role/dockercloud-role

Step 11 − Now go back to Docker Cloud, select Cloud Providers, and click the plug symbol next to Amazon Web Services. Enter the role ARN and click the Save button. Once saved, the integration with AWS is complete.

Setting Up Nodes

Once the integration with AWS is complete, the next step is to set up a node. Go to the Nodes section in Docker Cloud. Note that setting up a node automatically sets up a node cluster first.

Step 1 − Go to the Nodes section in Docker Cloud.

Step 2 − Next, give the details of the nodes which will be set up in AWS. You can then click the Launch Node Cluster button at the bottom of the screen. Once the node is deployed, you will get a notification in the Node Cluster screen.

Deploying a Service

The next step after deploying a node is to deploy a service. To do this, we need to perform the following steps.

Step 1 − Go to the Services section in Docker Cloud. Click the Create button.

Step 2 − Choose the service which is required. In our case, let's choose mongo.

Step 3 − On the next screen, choose the Create & Deploy option. This will start deploying the mongo container on your node cluster. Once deployed, you will be able to see the container in a running state.
Docker – Continuous Integration

Docker has integrations with many Continuous Integration tools, including the popular CI tool Jenkins. Within Jenkins, plugins are available that can be used to work with containers. So let's quickly look at a Docker plugin available for Jenkins, and see step by step what is available in Jenkins for Docker containers.

Step 1 − Go to your Jenkins dashboard and click Manage Jenkins.

Step 2 − Go to Manage Plugins.

Step 3 − Search for Docker plugins. Choose the Docker plugin and click the Install without restart button.

Step 4 − Once the installation is complete, go to your job in the Jenkins dashboard. In our example, we have a job called Demo.

Step 5 − In the job, when you go to the Build step, you can now see options to start and stop containers.

Step 6 − As a simple example, choose the option to stop containers when the build is completed. Then, click the Save button.

Now, just run your job in Jenkins. In the console output, you will be able to see that the command to stop all containers has run.
Docker – Working with Containers

In this chapter, we will explore in detail what we can do with containers.

docker top

With this command, you can see the top processes within a container.

Syntax
docker top ContainerID

Options
ContainerID − The container ID for which you want to see the top processes.

Return Value
The output shows the top-level processes within the container.

Example
sudo docker top 9f215ed0b0d3

The above command shows the top-level processes within the container.

Output
When we run the above command, it will produce the following result −

docker stop

This command is used to stop a running container.

Syntax
docker stop ContainerID

Options
ContainerID − The container ID of the container to be stopped.

Return Value
The output gives the ID of the stopped container.

Example
sudo docker stop 9f215ed0b0d3

The above command stops the Docker container 9f215ed0b0d3.

Output
When we run the above command, it will produce the following result −

docker rm

This command is used to delete a container.

Syntax
docker rm ContainerID

Options
ContainerID − The container ID of the container to be removed.

Return Value
The output gives the ID of the removed container.

Example
sudo docker rm 9f215ed0b0d3

The above command removes the Docker container 9f215ed0b0d3.

Output
When we run the above command, it will produce the following result −

docker stats

This command provides the statistics of a running container.

Syntax
docker stats ContainerID

Options
ContainerID − The container ID for which the stats need to be provided.

Return Value
The output shows the CPU and memory utilization of the container.

Example
sudo docker stats 9f215ed0b0d3

The above command provides the CPU and memory utilization of the container 9f215ed0b0d3.

Output
When we run the above command, it will produce the following result −

docker attach

This command is used to attach to a running container.
Syntax
docker attach ContainerID

Options
ContainerID − The container ID to attach to.

Return Value
None

Example
sudo docker attach 07b0b6f434fe

The above command attaches to the Docker container 07b0b6f434fe.

Output
When we run the above command, it will produce the following result − Once you have attached to the Docker container, you can run commands such as top to see the process utilization in that container.

docker pause

This command is used to pause the processes in a running container.

Syntax
docker pause ContainerID

Options
ContainerID − The container ID whose processes are to be paused.

Return Value
The ContainerID of the paused container.

Example
sudo docker pause 07b0b6f434fe

The above command pauses the processes of the running container 07b0b6f434fe.

Output
When we run the above command, it will produce the following result −

docker unpause

This command is used to unpause the processes in a running container.

Syntax
docker unpause ContainerID

Options
ContainerID − The container ID whose processes are to be unpaused.

Return Value
The ContainerID of the running container.

Example
sudo docker unpause 07b0b6f434fe

The above command unpauses the processes of the running container 07b0b6f434fe.

Output
When we run the above command, it will produce the following result −

docker kill

This command is used to kill a running container's processes immediately.

Syntax
docker kill ContainerID

Options
ContainerID − The container ID whose processes are to be killed.

Return Value
The ContainerID of the killed container.

Example
sudo docker kill 07b0b6f434fe

The above command kills the processes of the running container 07b0b6f434fe.

Output
When we run the above command, it will produce the following result −

Docker – Container Lifecycle

The following illustration explains the entire lifecycle of a Docker container.
Initially, the Docker container is in the created state. The container then moves into the running state when the docker run (or docker start) command is used. The docker kill command terminates a running container abruptly, while the docker stop command stops it gracefully. The docker pause command suspends the processes of a running container, and the docker start command puts a stopped container back into the running state.
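The commands above can be chained into a single lifecycle walk-through (the container name is hypothetical; the block just prints a note when no Docker daemon is available):

```shell
if command -v docker >/dev/null 2>&1; then
  docker create --name lifecycle-demo ubuntu sleep 300   # created state
  docker start lifecycle-demo                            # running state
  docker pause lifecycle-demo                            # paused state
  docker unpause lifecycle-demo                          # running again
  docker stop lifecycle-demo                             # exited state
  docker rm lifecycle-demo                               # removed
else
  echo "docker CLI not available; commands shown for reference only"
fi
cycle=done
```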
Docker – Containers
A Docker container is a runtime instance of a Docker image; containers are created by instantiating images. Docker containers are completely changing software development, deployment, and management. In essence, a Docker container bundles an application along with all of its dependencies into a compact, light package. Containers can operate reliably in a range of computing environments by using virtualization at the operating-system level.

This encapsulation is accomplished through the use of Docker images. Images are essentially blueprints that contain all the files, libraries, and configurations required to run a particular application. Since containers isolate the application and its dependencies from the underlying system, they offer consistency and predictability across a range of environments.

Docker containers function as independent processes with their own filesystem, network interface, and resources, yet they are lightweight and efficient because they share the kernel of the host operating system. They rely on key components of the Docker ecosystem to work, including the Docker Engine, which builds, launches, and manages containers, and the Docker Registry, which serves as a repository for Docker images.

In this chapter, let's understand how containers work and the important Docker container commands that you will use most frequently.

Key Concepts of Docker Containers

Here are the key concepts and principles behind Docker containers.

Containerization

Containers are based on the concept of containerization: packing an application together with all of its dependencies into a single package. This package, referred to as a container image, includes all of the runtime environments, libraries, and other components needed to run the application.

Isolation

Operating-system-level virtualization is used by Docker containers to offer application isolation.
With its own filesystem, network interface, and process space, each container operates independently of the host system as a separate process. This isolation keeps containers independent of one another and prevents them from interfering with one another's operations.

Docker Engine

The Docker Engine is the brains behind Docker containers; it builds, launches, and maintains them. The Docker Engine is made up of the Docker daemon, which operates in the background, and the Docker client, which lets users communicate with the daemon via commands.

Image and Container Lifecycle

The creation of a container image is the first step in the lifecycle of a Docker container. A Dockerfile, which outlines the application's dependencies and configuration, is used to build this image. Once the image has been created, it can be used to instantiate containers, which are running instances of the image. Containers can be started, stopped, paused, and restarted as needed.

Resource Management

Docker containers provide effective resource management because of their shared-kernel architecture and lightweight design. Since containers share the operating-system kernel of the host, overhead is decreased and startup times are accelerated. To ensure maximum performance and scalability, Docker also offers tools for monitoring and controlling resource usage.

Portability

One of the main benefits of Docker containers is their portability. Container images are self-contained units that are easily deployable and distributable across various environments, from development and testing to production. This portability enables "build once, run anywhere", which streamlines the deployment process and lowers the possibility of compatibility problems.

Docker Container Lifecycle

There are five essential phases in the Docker container lifecycle: created, started, paused, exited, and dead.
These stages represent the lifecycle of a container, from creation and execution to termination and possible recovery. Understanding these phases is crucial for managing Docker containers proficiently and guaranteeing their proper operation in a containerized environment. Let's explore the stages of the Docker container lifecycle.

The Created State

The "created" state is the first stage. When a container is created with the docker create command or a comparable API call, it reaches this phase. The container is not yet running when it is in the "created" state, but it exists as a static entity with all of its configuration settings defined. At this point, Docker reserves the storage volumes and network interfaces that the container needs, but the processes inside the container have not yet begun.

The Started State

The "started" or "running" state is the next stage of the lifecycle. When a container is started with the docker start command or an equivalent API call, it enters this stage. When a container is in the "started" state, its processes are launched and it starts running the service or application specified in its image. Containers in this state actively use CPU, memory, and other system resources while they carry out their assigned tasks.

The Paused State

Throughout their lifecycle, containers may also go into a "paused" state. When a container is paused with the docker pause command, its processes are suspended, stopping its execution. A paused container keeps its resource allotments and configuration settings but is not doing any work. This state helps with resource conservation and debugging by momentarily halting container execution without stopping it completely.

The Exited State

A container in the "exited" state has finished executing and its primary process has terminated.
Containers enter this state when they finish the tasks they were meant to complete or when they run into errors that force them to terminate. A container that has "exited" stays stopped, keeping its configuration settings but no longer running any processes. In this condition, containers can be deleted entirely with the docker rm command or restarted with the docker start command.

The Dead State

A container in the "dead" state has either experienced an irreversible error or been abruptly terminated. Critical errors in the containerized application, such as an unrecoverable failure of its main process, can leave a container in this state; a dead container cannot be restarted and can only be removed.
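One way to observe these states is to query a container's .State.Status field with docker inspect after each transition (the container name is hypothetical; the comments describe the output with a live daemon, and the block just prints a note when none is available):

```shell
if command -v docker >/dev/null 2>&1; then
  docker create --name state-demo ubuntu sleep 300
  docker inspect --format '{{.State.Status}}' state-demo   # "created"
  docker start state-demo
  docker inspect --format '{{.State.Status}}' state-demo   # "running"
  docker pause state-demo
  docker inspect --format '{{.State.Status}}' state-demo   # "paused"
  docker unpause state-demo && docker stop state-demo
  docker inspect --format '{{.State.Status}}' state-demo   # "exited"
  docker rm state-demo
else
  echo "docker CLI not available; commands shown for reference only"
fi
checked=yes
```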
Docker – Public Repositories
Public repositories can be used to host Docker images which can be used by everyone else. An example is the images available on Docker Hub. Most images, such as CentOS, Ubuntu, and Jenkins, are publicly available for all. We can also make our own images available by publishing them to the public repository on Docker Hub.

For our example, we will use the myimage repository built in the "Building Docker Files" chapter and upload that image to Docker Hub. Let's first review the images on our Docker host to see what we can push to the Docker registry. Here, we have our myimage:0.1 image, which was created as part of the "Building Docker Files" chapter. Let's upload it to the Docker public repository.

The following steps explain how you can upload an image to a public repository.

Step 1 − Log into Docker Hub and create your repository. This is the repository where your image will be stored. Go to https://hub.docker.com/ and log in with your credentials.

Step 2 − Click the "Create Repository" button on the above screen and create a repository with the name demorep. Make sure that the visibility of the repository is public. Once the repository is created, make a note of the pull command attached to the repository. The pull command for our repository is as follows −

docker pull demousr/demorep

Step 3 − Now go back to the Docker host. Here we need to tag our myimage to the new repository created in Docker Hub. We can do this via the docker tag command, which we will cover in more detail later in this chapter.

Step 4 − Issue the docker login command to log into the Docker Hub repository from the command prompt. The docker login command will prompt you for the username and password of your Docker Hub account.

Step 5 − Once the image has been tagged, it's time to push the image to the Docker Hub repository. We can do this via the docker push command.
We will learn more about this command later in this chapter.

docker tag

This command allows one to tag an image with the relevant repository name.

Syntax
docker tag imageID Repositoryname

Options
imageID − The image ID that needs to be tagged.
Repositoryname − The repository name with which the image ID is to be tagged.

Return Value
None

Example
sudo docker tag ab0c1d3744dd demousr/demorep:1.0

Output
A sample output of the above example is given below.

docker push

This command allows one to push images to Docker Hub.

Syntax
docker push Repositoryname

Options
Repositoryname − The repository name that needs to be pushed to Docker Hub.

Return Value
The long ID of the repository pushed to Docker Hub.

Example
sudo docker push demousr/demorep:1.0

Output
If you go back to the Docker Hub page and open your repository, you will see the tag name in the repository.

Now let's try to pull the repository we uploaded back onto our Docker host. First, delete the local images myimage:0.1 and demousr/demorep:1.0 from the Docker host, then use the docker pull command to pull the repository from Docker Hub. From the above screenshot, you can see that the docker pull command has taken our new repository from Docker Hub and placed it on our machine.
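End to end, the tag-and-push flow from this chapter looks like the following sketch (the image ID and demousr/demorep names mirror the example above; Docker Hub credentials are required, and the block just prints a note when no Docker daemon is available):

```shell
if command -v docker >/dev/null 2>&1; then
  docker login                                   # prompts for Docker Hub credentials
  docker tag ab0c1d3744dd demousr/demorep:1.0    # tag the local image for the repository
  docker push demousr/demorep:1.0                # upload it to Docker Hub
  docker rmi myimage:0.1 demousr/demorep:1.0     # delete the local copies
  docker pull demousr/demorep:1.0                # pull it back from Docker Hub
else
  echo "docker CLI not available; commands shown for reference only"
fi
pushed=yes
```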
Docker Setting – Python
How to Run Python in a Docker Container?

Python has revolutionized the software development industry because of its simplicity, extensive set of libraries, and versatility. As projects scale and development and deployment environments grow more complex, it becomes very difficult to manage Python dependencies, and significant challenges arise in ensuring a consistent runtime across multiple environments. This is where running Python in Docker comes into the picture.

Docker is a leading containerization platform that offers a streamlined approach to packaging, distributing, and running applications across different environments. Running Python in Docker brings many benefits − it enhances portability, dependency management, isolation, and scalability. Docker encapsulates Python applications with their dependencies in lightweight containers, ensuring consistent behavior across development, testing, and production environments.

The major ways to run Python inside Docker containers are −

Use Dockerfiles with official Python Docker base images.
Leverage Docker Compose to define and run multi-container Python Docker applications.
Create a virtual environment within the Docker container to isolate Python dependencies.

In this chapter, let's discuss how to run Python in Docker containers in these different ways, with a step-by-step approach, Docker commands, and examples.

How to run Python inside Docker using Dockerfiles?

Here's a step-by-step process of running Python inside Docker with the help of Dockerfiles.

Step 1: Create a Dockerfile

Start by creating a Dockerfile in the project directory. The Dockerfile should contain the instructions to build a custom Docker image on top of the base Python image. Here's an example Python Dockerfile.
# Use the official Python image as the base image
FROM python:3.9

# Set the working directory within the container
WORKDIR /app

# Copy the requirements.txt file into the container
COPY requirements.txt /app/

# Install Python dependencies listed in requirements.txt
RUN pip install -r requirements.txt

# Copy the rest of the application code into the container
COPY . /app

# Specify the command to run when the container starts
CMD ["python", "app.py"]

Step 2: Define Python Dependencies

You can create a requirements.txt file if your Python application relies on external dependencies. This file should list all the dependencies, along with their versions, that the Dockerfile will install while building the image.

Flask==2.1.0
requests==2.26.0

Step 3: Build the Docker Image

Next, navigate to the Dockerfile's location in the terminal and run the following docker build command to build the Python Docker image.

docker build -t my-python-app .

-t my-python-app − The -t flag tags the Docker image with the name my-python-app.

Step 4: Run the Docker Container

Once you have successfully built the Docker image, you can run a container from it using the docker run command.

docker run -d -p 5000:5000 my-python-app

-d − This flag detaches the container and runs it in the background.
-p 5000:5000 − The -p flag maps port 5000 on the host machine to port 5000 inside the Docker container. You can adjust the port numbers as per your requirements.
my-python-app − The name of the Docker image to be used for creating the container.

Step 5: Access the Python Application

If your Python application runs a web server, you can open a web browser and navigate to http://localhost:5000 to access the application.

How to run Python using Docker Compose?

Next, let's understand how to run Python using Docker Compose.
Docker Compose helps you simplify multi-container Docker application management using a single YAML file. It lets you orchestrate services and streamline development workflows, ensuring consistency across environments.

Step 1: Create the Docker Compose Configuration

Start by creating a docker-compose.yml in the project directory. In this file, you mention the services and their configurations.

version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"

version: "3" − Specifies the version of the Docker Compose file format.
services − Defines the services to be run by Docker Compose.
web − The name of the service.
build: . − Specifies the build context for the service, indicating that the Dockerfile is located in the current directory.
ports − Maps ports between the host and the container.

Step 2: Create a Dockerfile

Next, create a Dockerfile in the project directory containing the instructions to build the Docker image.

FROM python:3.9
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app
CMD ["python", "app.py"]

Step 3: Define Python Dependencies

Mention your external dependencies in the requirements.txt file.

Flask==2.1.0
requests==2.26.0

Step 4: Build and Run with Docker Compose

The next step is to build and run using Docker Compose. Navigate to the directory containing the docker-compose.yml file and execute the following command to build and run the services defined in the Compose file −

docker-compose up -d

-d − Detaches the containers and runs them in the background.

Step 5: Access the Python Application

You can access your Python application's web server by opening a web browser and navigating to http://localhost:5000.
Step 6: Stopping the Services

If you want to stop the services defined in the docker-compose.yml file, you can run the following command −

docker-compose down

This command stops and removes the containers, along with the networks and volumes associated with the services.

How to run Python in a virtual environment within Docker?

Next, if you want to run Python in a virtual environment within Docker, you can follow the steps below. Virtual environments
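As a sketch of where this section is heading, a Dockerfile can create a venv and put it first on PATH so that dependencies are isolated even inside the container. The layout and paths below are assumptions for illustration, not the chapter's own example; the block writes the Dockerfile to a temporary path so its contents can be inspected without a Docker daemon.

```shell
# Write a sketch Dockerfile that installs dependencies into a venv.
cat > /tmp/Dockerfile.venv <<'EOF'
FROM python:3.9
WORKDIR /app
# Create the virtual environment and put it first on PATH,
# so every later pip/python call uses it automatically.
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF
echo "Dockerfile sketch written: $(wc -l < /tmp/Dockerfile.venv) lines"
```

You would then build and run it with the same docker build and docker run commands shown earlier in this chapter.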