Docker – Containers

A Docker container is a runtime instance of a Docker image; it is created by instantiating the image. Docker containers are changing how software is developed, deployed, and managed. In essence, a Docker container bundles an application along with all of its dependencies into a compact, lightweight package. Containers can operate reliably across a range of computing environments because they use virtualization at the operating-system level. This encapsulation is accomplished through Docker images, which are essentially blueprints containing all the files, libraries, and configurations required to run a particular application. Since containers isolate the application and its dependencies from the underlying system, they offer consistency and predictability across a range of environments.

Docker containers function as independent processes with their own filesystem, network interface, and resources, yet they remain lightweight and efficient because they share the kernel of the host operating system. They rely on key components of the Docker ecosystem, including the Docker Engine, which builds, launches, and manages containers, and the Docker Registry, which serves as a repository for Docker images. In this chapter, let's understand how containers work and the Docker container commands you will use most frequently.

Key Concepts of Docker Containers

Here are the key concepts and principles behind Docker containers.

Containerization
Containers are based on the concept of containerization, which is packing an application together with all of its dependencies into a single package. This package, referred to as a container image, includes the runtime environment, libraries, and every other component needed to run the application.

Isolation
Docker containers use operating-system-level virtualization to isolate applications. Each container runs as a separate process with its own filesystem, network interface, and process space, independent of the host system. This isolation keeps containers from interfering with one another's operation.

Docker Engine
The Docker Engine is the brain behind Docker containers; it builds, launches, and maintains them. It consists of the Docker daemon, which runs in the background, and the Docker client, which lets users communicate with the daemon through commands.

Image and Container Lifecycle
The lifecycle of a Docker container begins with the creation of a container image. A Dockerfile, which describes the application's dependencies and configuration, is used to build this image. Once the image has been created, it can be used to instantiate containers, which are running instances of the image. Containers can be started, stopped, paused, and restarted as needed.

Resource Management
Docker containers provide effective resource management thanks to their shared-kernel architecture and lightweight design. Because containers share the host's operating-system kernel, overhead is reduced and startup times are faster. Docker also offers tools for monitoring and controlling resource usage, to ensure performance and scalability.

Portability
One of the main benefits of Docker containers is their portability.
Container images are self-contained units that can be easily distributed and deployed across environments, from development and testing through to production. This portability streamlines the deployment process and lowers the risk of compatibility problems by enabling "build once, run anywhere".

Docker Container Lifecycle

There are five essential phases in the Docker container lifecycle: created, started, paused, exited, and dead. These stages cover the life of a container from creation and execution through termination and possible recovery. Understanding them is essential for managing Docker containers properly in a containerized setting. Let's explore the stages of the Docker container lifecycle.

The Created State
The "created" state is the first stage. A container reaches this phase when it is created with the docker create command or a comparable API call. In the "created" state the container is not yet running, but it exists as a static entity with all of its configuration settings defined. At this point Docker reserves the storage volumes and network interfaces that the container needs, but the processes inside the container have not yet begun.

The Started State
The "started" or "running" state is the next stage of the lifecycle. A container enters it when it is started with the docker start command or an equivalent API call. In the "started" state the container's processes are launched and it runs the service or application specified in its image. Containers in this state actively use CPU, memory, and other system resources while they carry out their tasks.

The Paused State
During their lifecycle, containers may also enter a "paused" state. When a container is paused with the docker pause command, its processes are suspended and execution stops. A paused container keeps its resource allocations and configuration settings but does no work. This state helps with resource conservation and debugging, because it halts container execution temporarily without stopping the container completely.

The Exited State
A container in the "exited" state has finished executing and its primary process has terminated. Containers enter this state when they complete the work they were designed to do or when they run into errors that force them to terminate. An "exited" container stays stopped, keeping its configuration settings but no longer running any processes. From this state, a container can be removed with the docker rm command or restarted with the docker start command.

The Dead State
A container in the "dead" state has experienced an irreversible error or was terminated abruptly. Critical errors in the containerized application or in the container runtime can leave a container in this state; a dead container can no longer be started and can only be removed.
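These lifecycle stages map directly onto Docker CLI commands. The following is a minimal sketch of walking a container through them; the nginx image and the container name lifecycle-demo are illustrative assumptions, not part of this chapter −

# Create a container without starting it ("created" state)
docker create --name lifecycle-demo nginx

# Start it ("started"/"running" state)
docker start lifecycle-demo

# Suspend and resume its processes ("paused" state and back)
docker pause lifecycle-demo
docker unpause lifecycle-demo

# Stop it ("exited" state), then remove it entirely
docker stop lifecycle-demo
docker rm lifecycle-demo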
Docker – Public Repositories

Public repositories can be used to host Docker images that can be used by everyone else. An example is the images available on Docker Hub. Most images, such as CentOS, Ubuntu, and Jenkins, are publicly available for all. We can also make our own images available by publishing them to a public repository on Docker Hub.

For our example, we will use the myimage repository built in the "Building Docker Files" chapter and upload that image to Docker Hub. Let's first review the images on our Docker host to see what we can push to the Docker registry. Here, we have our myimage:0.1 image, which was created as part of the "Building Docker Files" chapter. Let's upload it to the Docker public repository. The following steps explain how you can upload an image to a public repository.

Step 1 − Log into Docker Hub and create your repository. This is the repository where your image will be stored. Go to https://hub.docker.com/ and log in with your credentials.

Step 2 − Click the "Create Repository" button on the above screen and create a repository with the name demorep. Make sure that the visibility of the repository is public. Once the repository is created, make a note of the pull command which is attached to the repository. The pull command which will be used for our repository is as follows −

docker pull demousr/demorep

Step 3 − Now go back to the Docker host. Here we need to tag our myimage to the new repository created on Docker Hub. We can do this via the docker tag command. We will learn more about this tag command later in this chapter.

Step 4 − Issue the docker login command to log into the Docker Hub repository from the command prompt. The docker login command will prompt you for the username and password of your Docker Hub account.

Step 5 − Once the image has been tagged, it's now time to push the image to the Docker Hub repository. We can do this via the docker push command. We will learn more about this command later in this chapter.

docker tag

This command tags an image for a given repository.

Syntax
docker tag imageID repositoryName

Options
imageID − The ID of the image that needs to be tagged.
repositoryName − The repository name to which the image ID needs to be tagged.

Return Value
None

Example
sudo docker tag ab0c1d3744dd demousr/demorep:1.0

Output
A sample output of the above example is given below.

docker push

This command pushes images to Docker Hub.

Syntax
docker push repositoryName

Options
repositoryName − The repository name which needs to be pushed to Docker Hub.

Return Value
The long ID of the repository pushed to Docker Hub.

Example
sudo docker push demousr/demorep:1.0

Output
If you go back to the Docker Hub page and open your repository, you will see the tag name in the repository.

Now let's try to pull the repository we uploaded back onto our Docker host. Let's first delete the images myimage:0.1 and demousr/demorep:1.0 from the local Docker host, and then use the docker pull command to pull the repository from Docker Hub. From the above screenshot, you can see that the docker pull command has taken our new repository from Docker Hub and placed it on our machine.
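Putting the steps together, an end-to-end sketch of the workflow looks like the block below. It assumes the demousr account, the demorep repository, and the image ID used earlier in this chapter; substitute your own Docker Hub username, repository, and image ID −

# Log in to Docker Hub from the command prompt
sudo docker login

# Tag the local image against the new repository
sudo docker tag ab0c1d3744dd demousr/demorep:1.0

# Push the tagged image to Docker Hub
sudo docker push demousr/demorep:1.0

# Remove the local copies, then pull the image back from Docker Hub
sudo docker rmi myimage:0.1 demousr/demorep:1.0
sudo docker pull demousr/demorep:1.0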
Docker Setting – Python
How to Run Python in a Docker Container?

Python has revolutionized the software development industry because of its simplicity, extensive set of libraries, and versatility. As projects scale and development and deployment environments grow more complex, it becomes very difficult to manage Python dependencies, and significant challenges arise in keeping the runtime consistent across multiple environments. This is where running Python in Docker comes into the picture.

Docker is a leading containerization platform that offers a streamlined approach to packaging, distributing, and running applications across different environments. Running Python in Docker brings a lot of benefits − it enhances portability, dependency management, isolation, and scalability. Docker encapsulates Python applications with their dependencies in lightweight containers, ensuring consistent behavior across development, testing, and production environments.

The major ways to run Python inside Docker containers are −

Use Dockerfiles with official Python Docker base images.
Leverage Docker Compose to define and run multi-container Python Docker applications.
Create a virtual environment within the Docker container to isolate Python dependencies.

In this chapter, let's discuss how to run Python in Docker containers in these different ways, with a step-by-step approach, Docker commands, and examples.

How to run Python inside Docker using Dockerfiles?

Here's a step-by-step process for running Python inside Docker with the help of Dockerfiles.

Step 1: Create a Dockerfile

Start by creating a Dockerfile in the project directory. The Dockerfile should contain the instructions to build a custom Docker image on top of the base Python image. Here's an example Python Dockerfile.

# Use the official Python image as the base image
FROM python:3.9

# Set the working directory within the container
WORKDIR /app

# Copy the requirements.txt file into the container
COPY requirements.txt /app/

# Install Python dependencies listed in requirements.txt
RUN pip install -r requirements.txt

# Copy the rest of the application code into the container
COPY . /app

# Specify the command to run when the container starts
CMD ["python", "app.py"]

Step 2: Define Python Dependencies

You can create a requirements.txt file if your Python application relies on external dependencies. This file should list all the dependencies, along with their versions, that the Dockerfile will install while building the image.

Flask==2.1.0
requests==2.26.0

Step 3: Build the Docker Image

Next, navigate to the Dockerfile location in the terminal and run the following docker build command to build the Python Docker image.

docker build -t my-python-app .

`-t my-python-app` − The -t flag tags the Docker image with the name `my-python-app`.

Step 4: Run the Docker Container

Once you have successfully built the Docker image, you can run a Docker container from that image using the docker run command.

docker run -d -p 5000:5000 my-python-app

`-d` − This flag detaches the container so that it runs in the background.
`-p 5000:5000` − The -p flag maps port 5000 on the host machine to port 5000 inside the Docker container. You can adjust the port numbers as per your requirements.
`my-python-app` − Here, you specify the name of the Docker image to be used for creating the container.
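Before opening the browser, it can help to confirm that the container actually started; the following standard commands (the container ID placeholder is an assumption) do exactly that −

# Confirm the container is running and the port mapping is in place
docker ps

# Inspect the application logs (replace <container_id> with the ID shown by docker ps)
docker logs <container_id>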
Step 5: Access the Python Application

If your Python application runs a web server, you can open a web browser and navigate to `http://localhost:5000` to access the web application.

How to run Python using Docker Compose?

Next, let's understand how to run Python using Docker Compose. Docker Compose helps you simplify the management of multi-container Docker applications using a single YAML file. It lets you orchestrate services and streamline development workflows, ensuring consistency across environments.

Step 1: Create the Docker Compose Configuration

Start by creating a docker-compose.yml in the project directory. In this file, you describe the services and their configurations.

version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"

`version: "3"` − Specifies the version of the Docker Compose file format.
`services` − Defines the services to be run by Docker Compose.
`web` − Name of the service.
`build: .` − Specifies the build context for the service, indicating that the Dockerfile is located in the current directory.
`ports` − Maps ports between the host and the container.

Step 2: Create a Dockerfile

Next, create a Dockerfile in the project directory containing the instructions to build the Docker image.

FROM python:3.9
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app
CMD ["python", "app.py"]

Step 3: Define Python Dependencies

Mention your external dependencies in the requirements.txt file.

Flask==2.1.0
requests==2.26.0

Step 4: Build and Run with Docker Compose

The next step is to build and run the services using Docker Compose. Navigate to the directory containing the `docker-compose.yml` file and execute the following command to build and run the services defined in the Compose file −

docker-compose up -d

`-d` − It detaches the containers and runs them in the background.

Step 5: Access the Python Application

You can access your Python application's web server by opening a web browser and navigating to `http://localhost:5000`.

Step 6: Stopping the Services

If you want to stop the services defined in the `docker-compose.yml` file, you can run the following command −

docker-compose down

This command stops and removes the containers and the networks associated with the services (add the -v flag if you also want to remove the associated volumes).

How to run Python in a virtual environment within Docker?

Next, if you want to run Python in a virtual environment within Docker, you can follow the steps below. Virtual environments isolate a project's Python packages from the rest of the system, and the same idea can be applied inside a container to keep the application's dependencies separate from the system Python of the base image.
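As a sketch of that approach, assuming the /opt/venv location purely for illustration, a Dockerfile can create the virtual environment and put it first on PATH so that every later pip and python call uses it −

FROM python:3.9

WORKDIR /app

# Create a virtual environment inside the image
RUN python -m venv /opt/venv

# Put the virtual environment first on PATH so pip and python resolve to it
ENV PATH="/opt/venv/bin:$PATH"

# Install dependencies into the virtual environment
COPY requirements.txt /app/
RUN pip install -r requirements.txt

# Copy the application code and run it with the virtual environment's Python
COPY . /app
CMD ["python", "app.py"]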
Docker – Setting Node.js

Node.js is a JavaScript framework used for developing server-side applications. It is an open-source framework developed to run on a variety of operating systems. Since Node.js is a popular framework for development, Docker has also ensured it has support for Node.js applications. We will now see the various steps for getting a Docker container for Node.js up and running.

Step 1 − The first step is to pull the image from Docker Hub. When you log into Docker Hub, you will be able to search for and see the image for Node.js as shown below. Just type in "node" in the search box and click the node (official) link which comes up in the search results.

Step 2 − You will see the docker pull command for node in the details of the repository on Docker Hub.

Step 3 − On the Docker host, use the docker pull command as shown above to download the latest node image from Docker Hub. Once the pull is complete, we can proceed with the next step.

Step 4 − On the Docker host, let's use the vim editor and create a Node.js example file. In this file, we will add a simple command to print "Hello World" on the command prompt. In the Node.js file, let's add the following statement −

console.log('Hello World');

This will output the "Hello World" phrase when we run it through Node.js. Ensure that you save the file and then proceed to the next step.

Step 5 − To run our Node.js script using the node Docker container, we need to execute the following statement −

sudo docker run -it --rm --name=HelloWorld -v "$PWD":/usr/src/app -w /usr/src/app node node HelloWorld.js

The following points need to be noted about the above command −

The --rm option is used to remove the container after it has run.
We are giving the container the name "HelloWorld".
The -v option maps the /usr/src/app directory in the container to our current working directory. This is done so that the node container will pick up our HelloWorld.js script, which is present in our working directory on the Docker host.
The -w option is used to specify the working directory used by Node.js.
The first node argument specifies the image to run.
The second node argument is the command to run inside the container.
And finally, we mention the name of our script.

We will then get the following output, and from it we can clearly see that the node container ran and executed the HelloWorld.js script.
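Since the steps above refer to screenshots, here is a consolidated sketch of the commands they describe; the filename HelloWorld.js comes from this chapter, and the echo line is just one way to create the file −

# Steps 1-3: pull the official node image from Docker Hub
sudo docker pull node

# Step 4: create the example script in the current directory
echo "console.log('Hello World');" > HelloWorld.js

# Step 5: run the script with the node image, mounting the current directory
sudo docker run -it --rm --name=HelloWorld -v "$PWD":/usr/src/app -w /usr/src/app node node HelloWorld.js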
Docker – Managing Ports

By design, Docker containers are isolated: they keep their internal ports to themselves and do not respond to the outside world. Port configuration on the Docker host can be done while creating a container by using the -p or --publish flag, which publishes the port. This mapping makes applications running within containers accessible, since they can then receive traffic from outside sources. Multiple port mappings can be defined for one container, which caters to scenarios where several services run within the same container.

Additionally, Docker Compose abstracts the port-mapping complexity for multi-container applications. With a docker-compose.yml file defining all services and their port mappings, Docker Compose creates and wires the containers, assigning ports in a way that avoids conflicts and keeps communication between the containers of an application painless. Avoiding conflicts and making communication seamless is what makes port management such a valuable tool, from development all the way to deploying complex applications in containerized environments. In this chapter, let's learn about managing Docker ports in detail.

EXPOSE vs. PUBLISH: Understanding the Differences

Both EXPOSE and PUBLISH (or -p) deal with ports in Docker, but they are two different things −

EXPOSE

EXPOSE acts as documentation of which ports a containerized application intends to use for communication. It is a directive in a Dockerfile that lets anyone building or running the container know what services it can potentially offer. But remember that EXPOSE alone does not make those container ports accessible outside the container; the directive more or less acts as a note for developers or system administrators.

PUBLISH

This is the actual port mapping. When you publish a port − that is, when you include -p in docker run or a ports entry in docker-compose.yml − you create an association between a port inside the Docker container and a port on the Docker host. That is what enables external traffic to reach an application running inside a container, i.e., where the "intention" that you EXPOSE becomes real.

How to Expose a Port in Docker using PUBLISH?

Docker offers several ways to do this, but the most straightforward and widely used is the -p flag when running a container. Below is an example −

Basic Syntax

The basic syntax for exposing a port when running a Docker container is −

$ docker run -p <host_port>:<container_port> <image_name>

<host_port> − The port number on the Docker host where you want to expose the application.
<container_port> − The port number in the container on which your application listens for traffic.
<image_name> − The name of the Docker image you want to run.

Example: Publish a Web Server Port

For example, suppose you have an application configured to run a web server on port 80 in the container. You can map this to port 8080 on the local machine with −

$ docker run -p 8080:80 <your_web_server_image>

Now you can open http://localhost:8080 in your favorite web browser and see your application being served.

Publish Multiple Ports in Docker

If your application requires multiple ports to be open, you can simply repeat the -p flag.
$ docker run -p 8080:80 -p 4433:443 <your_app_image>

This maps host port 8080 to container port 80 (HTTP) and host port 4433 to container port 443 (HTTPS).

Publish Ports Using Docker Compose

It's pretty simple to maintain port mappings with Docker Compose for multi-container applications. You do this inside your docker-compose.yml file, under each service, in the ports section −

services:
  web:
    image: <your_web_server_image>
    ports:
      - "8080:80"
  db:
    image: <your_database_image>
    # ... other configurations

Key Considerations

Port Conflicts − Ensure that the host port you select is not already being used by another application or service on your system.
Firewall − If Docker runs on a remote server, you may need to configure your firewall to allow traffic to the published ports.
Security − Every published port widens the attack surface of your host: an exposed port can be probed by attackers trying to breach the container behind it. Consider using reverse proxies or other security measures to protect your containers.

How to Expose a Port in a Dockerfile?

While the `EXPOSE` instruction in a Dockerfile does not publish the port, it provides information about the port that the container is expected to listen on at runtime. In practice, it documents the ports used by your Docker image so that users know which ports they may want to publish when running the container. Here's how to define it in your Dockerfile −

The EXPOSE Instruction

The syntax is simple −

EXPOSE <port> [<port>/<protocol>]

`<port>` − Port that you wish to expose.
`<protocol>` − Optional, with a default of TCP. May be tcp or udp.

Example: Exposing a Web Server Port

In a Dockerfile for a web server image, you would have −

# ... other Dockerfile instructions
EXPOSE 80

This informs anyone looking at your image that the application inside is most likely listening for incoming connections on port 80, the standard HTTP port.

Opening Up Multiple Ports and Protocols

You can have more than one `EXPOSE` instruction in your Dockerfile −

EXPOSE 80
EXPOSE 443/tcp
EXPOSE 443/udp

This means your application uses TCP port 80 and port 443 over both TCP and UDP.

Key Points to Note

`EXPOSE` is not mandatory; however, it is good practice to document your container's network usage.
It doesn't publish the port by itself − you still need -p on docker run (or a ports entry in Docker Compose) to make the port reachable from outside the container.
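A related convenience, shown here as a small sketch with a placeholder image name: the -P flag publishes every port declared with EXPOSE onto randomly chosen high host ports, and docker port reports the resulting mappings −

# Publish all EXPOSEd ports onto random host ports
$ docker run -d -P --name web-demo <your_web_server_image>

# Inspect which host ports were assigned to the container's ports
$ docker port web-demo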
Docker – Containers & Shells
Docker – Containers and Shells

By default, when you launch a container, you often also pass a shell command, as shown below. This is what we did in earlier chapters when we were working with containers. In the above screenshot, you can observe that we issued the following command −

sudo docker run -it centos /bin/bash

We used this command to create a new container and then used the Ctrl+P+Q key sequence to detach from the container. It ensures that the container keeps running even after we leave it, which we can verify with the docker ps command. If we had exited the container directly, for example with the exit command, the container itself would have stopped.

Now there is an easier way to attach to containers and leave them cleanly without stopping them. One way of achieving this is by using the nsenter command.

Before we run the nsenter command, you need to first install the nsenter binary. This can be done with the following command −

docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter

Before we use the nsenter command, we need to get the process ID of the container, because nsenter requires it. We can get the process ID via the docker inspect command, filtering on Pid.

As seen in the above screenshot, we first used the docker ps command to see the running containers. There is one running container with the ID ef42a4c5e663. We then used the docker inspect command to inspect the configuration of this container and the grep command to filter out the process ID. From the output, we can see that the process ID is 2978.

Now that we have the process ID, we can proceed and use the nsenter command to attach to the Docker container.

nsenter

This command allows one to attach to a container's namespaces without needing to stop the container.

Syntax
nsenter -m -u -n -p -i -t <PID> <command>

Options
-u is used to enter the UTS namespace.
-m is used to enter the mount namespace.
-n is used to enter the network namespace.
-p is used to enter the process (PID) namespace.
-i is used to enter the IPC namespace.
-t is used to specify the target process ID − the PID of the container's main process, obtained above.
<command> − This is the command to run within the container.

Return Value
None

Example
sudo nsenter -m -u -n -p -i -t 2978 /bin/bash

Output
From the output, we can observe the following points −

The prompt changes to the bash shell directly when we issue the nsenter command.
We then issue the exit command. Normally, exiting the shell a container was launched with would stop the container, but you will notice that when we leave the shell started through nsenter, the container is still up and running.
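On current Docker versions the same effect is usually achieved with docker exec, which starts an extra shell in a running container without touching its main process; a brief sketch, reusing the container ID from above −

# Start an interactive bash shell inside the running container
sudo docker exec -it ef42a4c5e663 /bin/bash

# Typing "exit" here ends only this extra shell;
# the container's main process keeps running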
Docker – Container & Hosts
Docker – Container and Hosts

The good thing about the Docker Engine is that it is designed to work on various operating systems. We have already seen the installation on Windows and all of the Docker commands on Linux systems. Now let's look at the same Docker commands on the Windows OS.

Docker Images
Let's run the docker images command on the Windows host. From here, we can see that we have two images − ubuntu and hello-world.

Running a Container
Now let's run a container on the Windows Docker host. We can see that, by running the container, we can run the Ubuntu container on a Windows host just as we did on Linux.

Listing All Containers
Let's list all the containers on the Windows host.

Stopping a Container
Let's now stop a running container on the Windows host.

So you can see that the Docker Engine is consistent across different Docker hosts: it works on Windows in the same way it works on Linux.
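The screenshots referenced above correspond to the following standard commands, which behave the same in PowerShell on Windows as in a Linux shell; the ubuntu image comes from the chapter and the container ID is a placeholder −

# List the images on the host
docker images

# Run an Ubuntu container interactively
docker run -it ubuntu /bin/bash

# List all containers, including stopped ones
docker ps -a

# Stop a running container by ID or name
docker stop <container_id>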
Docker – Container Linking

Container linking allows multiple containers to be linked with each other. It is a better option than exposing ports. Let's go step by step and learn how it works.

Step 1 − Download the Jenkins image, if it is not already present, using the docker pull command.

Step 2 − Once the image is available, run the container, but this time give the container a name using the --name option. This will be our source container.

Step 3 − Next, launch the destination container, and this time link it with our source container using the --link option. For our destination container, we will use the standard Ubuntu image. When you run docker ps, you will see both containers running.

Step 4 − Now, attach to the receiving container and run the env command. You will notice new environment variables created for the link with the source container.
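A hedged sketch of those four steps is shown below; the container names and the link alias src are illustrative, and the jenkins image name is the one this chapter series uses (newer releases on Docker Hub publish it as jenkins/jenkins) −

# Step 1: pull the Jenkins image
sudo docker pull jenkins

# Step 2: run the source container and give it a name
sudo docker run -d --name=jenkins-source jenkins

# Step 3: run the destination container and link it to the source
sudo docker run -it --link jenkins-source:src --name=destination ubuntu /bin/bash

# Step 4: inside the destination container, list the injected link variables
env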
How to Run Java in a Docker Container?

Docker allows you to set up Java environments that you can use on production and development servers alike. It improves the efficiency and manageability of running Java programs. Irrespective of the underlying configuration, environment, and dependencies, Docker lets you run Java programs reliably and consistently across all platforms. It also greatly simplifies deployment and resolves the classic "it works only on my machine" problem.

You can run Java in Docker containers using the two main approaches discussed below −

You can use the official Java Docker base images provided by Oracle or AdoptOpenJDK.
You can create your own Docker images with custom dependencies tailored specifically for your Java applications by using Dockerfiles.

In this chapter, we will explain both of these approaches to create and run Java applications inside Docker containers with the help of step-by-step instructions, commands, and examples.

Benefits of Using Docker Containers to Run Java Applications

There are several benefits to running Java applications inside Docker containers. They enhance development and deployment workflows and improve scalability and reliability. Here are some of the key advantages of using Docker containers for Java applications −

Isolation − Ensures independent operation.
Consistency − Maintains uniform runtime environments.
Portability − Facilitates easy migration between environments.
Resource Efficiency − Maximizes resource utilization.
Scalability − Allows seamless adjustment to workload demands.

How to Run Java in Docker Using Java Base Images?

One of the easiest ways to run Java in Docker is by using the existing Java base images provided by trusted organizations like Oracle or AdoptOpenJDK. To do so, here are the steps and commands.

Step 1: Pull the Java Base Image

Start by pulling the Java base image from Docker Hub using the docker pull command. For example, if you want to pull the OpenJDK 11 image from AdoptOpenJDK, you can use the following command.

docker pull adoptopenjdk/openjdk11

Step 2: Run the Docker Container

Now that you have the base image pulled locally, you can run a Docker container from it. You can mount the JAR of the Java application that you want to run into the container and pass the command that starts it. To do so, you can use the following command.

docker run -d --name my-java-container -v /path/to/your/jar:/usr/src/app/my-java-app.jar adoptopenjdk/openjdk11 java -jar /usr/src/app/my-java-app.jar

In this command −

-d − This flag detaches the container so that it runs in the background.
--name my-java-container − You can assign a name to the running container for your reference using this flag.
-v /path/to/your/jar:/usr/src/app/my-java-app.jar − This flag mounts your Java application JAR into the container at /usr/src/app/my-java-app.jar.
adoptopenjdk/openjdk11 − This is the name of the base image to use.
java -jar /usr/src/app/my-java-app.jar − This is the command the container runs; it starts your application from the mounted JAR.

Step 3: Access the Container's Bash

If you want to check whether Java is installed in the container that you created from the Docker image, you can access the bash shell of the container. To do so, you can use the following command.

docker exec -it my-java-container /bin/bash

In this command −

docker exec − It lets you execute a command inside a running container.
-it − It allocates a pseudo-TTY and keeps stdin open, which allows you to interact with the container's bash shell.
my-java-container − This is the name of the running container.
/bin/bash − This specifies the command that you want to execute inside the container. It opens a bash shell inside the container.

Step 4: Check Java Installation

Now that you have access to the bash shell of the container, you can check whether Java is installed by running the command below.

java -version

This command displays the version of Java and the JDK installed in the container. If you see the Java version information, it means Java is installed properly in the container. Once you have verified the installation, you can leave the container's bash shell by typing "exit" and pressing the Enter key.

How to Use a Dockerfile to Create Custom Java Images?

You can define the environment and run configuration specific to your Java application by using a Dockerfile to build a custom Docker image. Here are the steps you can follow to create a custom Docker image using a Dockerfile.

Step 1: Create a Dockerfile

First, create a Dockerfile in the directory of your Java application. In the Dockerfile, we will write the instructions that build the image layers. Here's a Dockerfile for a Docker image that has Java pre-installed in it −

# Use a base Java image
FROM adoptopenjdk/openjdk11:latest

# Set the working directory inside the container
WORKDIR /usr/src/app

# Copy the Java application JAR file into the container
COPY target/my-java-app.jar .

# Expose the port on which your Java application runs (if applicable)
EXPOSE 8080

# Command to run the Java application
CMD ["java", "-jar", "my-java-app.jar"]

In this Dockerfile −

FROM − Specifies the base image to be used. In this case, we use OpenJDK 11 from AdoptOpenJDK.
WORKDIR − Sets the default working directory inside the Docker container where all subsequent instructions run.
COPY − Copies the Java application JAR file from your local directory into the container.
EXPOSE − Documents the port of the Docker container on which the application listens.
CMD − Specifies the command that runs when a container is started from the image; here it launches the application JAR with java -jar.
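To turn this Dockerfile into a running container, the usual build-and-run pair looks like the block below; the image and container names are assumptions for illustration −

# Build the custom image from the Dockerfile in the current directory
docker build -t my-java-app .

# Run it in the background, publishing the exposed port 8080
docker run -d -p 8080:8080 --name my-java-app-container my-java-app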
Docker – Cloud

Docker Cloud is a service provided by Docker in which you can carry out the following operations −

Nodes − You can connect Docker Cloud to your existing cloud providers such as Azure and AWS to spin up containers on those environments.
Cloud Repository − Provides a place where you can store your own repositories.
Continuous Integration − Connect with GitHub and build a continuous integration pipeline.
Application Deployment − Deploy and scale infrastructure and containers.
Continuous Deployment − Automate deployments.

Getting Started

You can go to the following link to get started with Docker Cloud − https://cloud.docker.com/

Once logged in, you will be provided with the following basic interface −

Connecting to the Cloud Provider

The first step is to connect to an existing cloud provider. The following steps show how to connect with the Amazon cloud provider.

Step 1 − The first step is to ensure that you have the right AWS keys. These can be taken from the AWS console. Log into your AWS account using the following link − https://aws.amazon.com/console/

Step 2 − Once logged in, go to the Security Credentials section. Make a note of the access keys, which will be used from Docker Hub.

Step 3 − Next, you need to create a policy in AWS that will allow Docker to view EC2 instances. Go to the Policies section in AWS and click the Create Policy button.

Step 4 − Click on "Create Your Own Policy", give the policy the name dockercloudpolicy, and enter the policy definition shown below.

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Action": [
            "ec2:*",
            "iam:ListInstanceProfiles"
         ],
         "Effect": "Allow",
         "Resource": "*"
      }
   ]
}

Next, click the Create Policy button.

Step 5 − Next, you need to create a role which will be used by Docker to spin up nodes on AWS. For this, go to the Roles section in AWS and click the Create New Role option.

Step 6 − Give the name for the role as dockercloud-role.

Step 7 − On the next screen, go to "Role for Cross-Account Access" and select "Provide access between your account and a 3rd party AWS account".

Step 8 − On the next screen, enter the following details −

In the Account ID field, enter the ID for the Docker Cloud service: 689684103426.
In the External ID field, enter your Docker Cloud username.

Step 9 − Then, click the Next Step button and, on the next screen, attach the policy which was created in the earlier step.

Step 10 − Finally, when the role is created on the last screen, make sure to copy the ARN of the role.

arn:aws:iam::085363624145:role/dockercloud-role

Step 11 − Now go back to Docker Cloud, select Cloud Providers, and click the plug symbol next to Amazon Web Services. Enter the role ARN and click the Save button. Once saved, the integration with AWS is complete.

Setting Up Nodes

Once the integration with AWS is complete, the next step is to set up a node. Go to the Nodes section in Docker Cloud. Note that setting up a node will automatically set up a node cluster first.

Step 1 − Go to the Nodes section in Docker Cloud.

Step 2 − Next, give the details of the nodes which will be set up in AWS. You can then click the Launch Node Cluster button, present at the bottom of the screen. Once the node is deployed, you will get a notification on the Node Cluster screen.

Deploying a Service

The next step after deploying a node is to deploy a service. To do this, we need to perform the following steps.

Step 1 − Go to the Services section in Docker Cloud.
Click the Create button.

Step 2 − Choose the service which is required. In our case, let's choose mongo.

Step 3 − On the next screen, choose the Create & Deploy option. This will start deploying the Mongo container on your node cluster. Once deployed, you will be able to see the container in a running state.