Docker – Setting Node.js

Node.js is an open-source JavaScript runtime used for developing server-side applications, and it runs on a variety of operating systems. Since Node.js is a popular development platform, Docker provides official support for Node.js applications. We will now walk through the steps for getting a Docker container for Node.js up and running.

Step 1 − The first step is to pull the image from Docker Hub. When you log into Docker Hub, you can search for the Node.js image. Just type "node" in the search box and click the node (official) link that appears in the search results.

Step 2 − You will find the docker pull command for node in the repository details on Docker Hub.

Step 3 − On the Docker host, use the docker pull command to download the latest node image from Docker Hub. Once the pull is complete, proceed to the next step.

Step 4 − On the Docker host, use the vim editor to create a Node.js example file named HelloWorld.js. In this file, add a simple statement that prints "Hello World" to the console −

console.log('Hello World');

This outputs the phrase "Hello World" when run through Node.js. Save the file and proceed to the next step.

Step 5 − To run our Node.js script using the node Docker container, execute the following command −

sudo docker run -it --rm --name HelloWorld -v "$PWD":/usr/src/app -w /usr/src/app node node HelloWorld.js

The following points need to be noted about the above command −

The --rm option removes the container after it has run.
We give the container the name "HelloWorld".
The -v option mounts our current working directory into the container at /usr/src/app. This is done so that the node container picks up the HelloWorld.js script present in our working directory on the Docker host.
The -w option specifies the working directory used inside the container.
The first "node" is the name of the image to run; the second "node" is the command to execute inside that container. Finally, we pass the name of our script.

From the resulting output, we can see that the node container ran and executed the HelloWorld.js script, printing "Hello World".
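For reference, Steps 3 to 5 can be reproduced from the shell as shown below. This is only a minimal sketch: it assumes the Docker host can reach Docker Hub, and it creates HelloWorld.js with a heredoc instead of vim.

# Pull the official node image (Step 3).
docker pull node

# Create the example script (Step 4), using a heredoc instead of vim.
cat > HelloWorld.js <<'EOF'
console.log('Hello World');
EOF

# Run the script in a disposable node container (Step 5); the current
# directory is mounted at /usr/src/app and used as the working directory.
docker run -it --rm --name HelloWorld -v "$PWD":/usr/src/app -w /usr/src/app node node HelloWorld.js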

Docker – Managing Ports

By design, Docker containers are isolated: their internal ports are kept to themselves and are not reachable from outside. Ports can be configured on the Docker host while creating a container using the -p (or --publish) flag, which publishes a container port. This mapping makes applications running inside containers accessible, since they can then receive traffic from outside sources. Multiple port mappings can be defined for one container, which caters to scenarios where several services run within the same container.

Additionally, Docker Compose abstracts the port-mapping complexity for multi-container applications. With a docker-compose.yml file defining all services and their port mappings, Docker Compose can create and wire the containers, assigning ports in a way that avoids conflicts and makes communication between the containers of an application painless. This ability to avoid conflicts and enable seamless communication makes Docker an effective tool for managing ports, from development all the way to the deployment of complex applications. In this chapter, let's learn about managing Docker ports in detail.

EXPOSE vs. PUBLISH: Understanding the Differences

Both EXPOSE and PUBLISH (or -p) deal with ports in Docker, but they are two different things −

EXPOSE

EXPOSE acts as documentation about which ports a containerized application intends to use for communication. It is a directive in a Dockerfile that lets anyone building or running the container know which services it can potentially offer. Remember that EXPOSE alone does not make those container ports accessible outside the container; the directive acts more or less as a note for developers and system administrators.

PUBLISH

This is the actual port mapping. When you publish a port, that is, when you include -p in docker run or an entry in the ports section of docker-compose.yml, you create an association between a port in the Docker container and a port on the Docker host. That is what enables external traffic to reach an application running inside a container; the "intention" that you EXPOSE is made real.

How to Expose a Port in Docker using PUBLISH?

Docker offers several ways to do this, but the most straightforward and widely used is the -p flag when running a container. Below is an example.

Basic Syntax

The basic syntax for publishing a port when running a Docker container is −

$ docker run -p <host_port>:<container_port> <image_name>

<host_port> − The port number on the Docker host on which you want to expose the application.
<container_port> − The port number inside the container on which your application listens for traffic.
<image_name> − The name of the Docker image you want to run.

Example: Publishing a Web Server Port

For example, suppose you have an application configured to run a web server on port 80 inside the container. You can map this to port 8080 on your local machine with −

$ docker run -p 8080:80 <your_web_server_image>

Now you can open http://localhost:8080 in your favorite web browser and see your application being served.
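As a hedged illustration of the mapping above, the commands below run the official nginx image (chosen here purely as an example web server) with container port 80 published on host port 8080, then use docker port and curl to confirm the mapping.

# Run an example web server with container port 80 published on host port 8080.
docker run -d --name web -p 8080:80 nginx

# List the port mappings of the running container.
docker port web
# Typically prints something like: 80/tcp -> 0.0.0.0:8080

# Request the page through the published host port.
curl http://localhost:8080

# Clean up the example container.
docker rm -f web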
Publish Multiple Ports in Docker

If your application requires multiple ports to be open, you can simply repeat the -p flag −

$ docker run -p 8080:80 -p 4433:443 <your_app_image>

This publishes container port 80 (HTTP) on host port 8080 and container port 443 (HTTPS) on host port 4433.

Publish Ports Using Docker Compose

It's simple to maintain port mappings with Docker Compose for multi-container applications. You do this inside your docker-compose.yml file, in the ports section of each service −

services:
  web:
    image: <your_web_server_image>
    ports:
      - "8080:80"
  db:
    image: <your_database_image>
    # ... other configurations

Key Considerations

Port Conflicts − Ensure that the host port you select is not already in use by another application or service on your system.
Firewall − If Docker runs on a remote server, you may need to configure your firewall to allow traffic on the published ports.
Security − Every published port is a potential entry point through which attackers can probe the container. Consider using reverse proxies or other security measures to protect your containers.

How to Expose a Port in a Dockerfile?

While the EXPOSE instruction in a Dockerfile does not publish a port, it provides information about the ports the container is expected to listen on at runtime. In practice, it documents the ports used by your Docker image so that users know which ports they might want to publish when running the container. Here's how to define it in your Dockerfile −

The EXPOSE Instruction

The syntax is simple −

EXPOSE <port> [<port>/<protocol>]

<port> − The port you wish to expose.
<protocol> − Optional; defaults to TCP. May be tcp or udp.

Example: Exposing a Web Server Port

In a Dockerfile for a web server image, you would have −

# ... other Dockerfile instructions
EXPOSE 80

This informs anyone looking at your image that the application inside it very probably listens for incoming connections on port 80, the standard HTTP port.

Exposing Multiple Ports and Protocols

You can have more than one EXPOSE instruction in your Dockerfile −

EXPOSE 80
EXPOSE 443/tcp
EXPOSE 443/udp

This means your application uses TCP port 80 (the default protocol) as well as port 443 over both TCP and UDP.

Key Points to Note

EXPOSE is not required, but it is good practice for documenting your container's network usage. It doesn't publish the port; you still need to publish it with -p (or --publish) at runtime.
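A related runtime option is -P (or --publish-all), which publishes every port the image EXPOSEs onto randomly chosen high ports of the host. The sketch below assumes an image tagged my-exposed-web built from a Dockerfile containing EXPOSE 80, like the example above; the image tag is hypothetical.

# Build an image whose Dockerfile includes "EXPOSE 80" (tag is hypothetical).
docker build -t my-exposed-web .

# -P / --publish-all maps every EXPOSEd port to a random high port on the host.
docker run -d --name web-auto -P my-exposed-web

# Show which host port was assigned to container port 80.
docker port web-auto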

Docker – Overview

Currently, Docker accounts for over 32 percent of the containerization technologies market, and this number is only expected to grow. In general, containerization software allows you to run applications without launching an entire virtual machine. Docker makes repetitive and time-consuming configuration tasks redundant, which allows for quick and efficient development of applications in both desktop and cloud environments. However, to get comfortable with Docker, it is important to have a clear understanding of its underlying architecture and other underpinnings. In this chapter, let's take an overview of Docker and understand how its various components work and interact with each other.

What is Docker?

Docker is an open-source platform for developing, delivering, and running applications. It makes it easier to detach applications from infrastructure, which enables quick software delivery. Docker shortens the time between writing code and deploying it by bringing infrastructure management in line with application development. Applications are packaged and run inside what are known as containers, loosely isolated environments in the Docker ecosystem. This isolation improves security and lets many containers run concurrently on a single host. Because they are lightweight and encapsulate everything an application needs to run, containers eliminate the need to configure the host for each application. Since containers behave consistently across shared environments, collaboration is smooth.

Docker provides comprehensive tooling and a platform for managing the container lifecycle −

You can develop applications and support their components using containers.
You can use containers as the unit for distributing and testing your applications.
Docker allows you to deploy applications into all environments seamlessly and consistently, whether on local data centers, cloud platforms, or hybrid infrastructures.

Why is Docker Used?

Rapid Application Development and Delivery

Docker speeds up application development cycles by providing standardized environments in the form of local containers. These containers are integral to CI/CD workflows, and they ensure fast and consistent application delivery. Consider the following example scenario (a command-line sketch of this flow is shown at the end of this section) −

The developers on your team write programs on their local systems and share their work with teammates using Docker containers.
They then use Docker to deploy their applications into a test environment, where they run automated or manual tests.
If a bug is found, they fix it in the development environment, verify the build, and redeploy it to the test environment for further testing.
Once testing is done, deploying the application to production and getting the feature to the customer is as simple as pushing the updated image to the production environment.

Responsive Deployment and Scaling

Since Docker is a container-based platform, it facilitates highly portable workloads, allowing you to run applications seamlessly across various environments. Its portability and lightweight nature allow for dynamic workload management, so businesses can scale applications in real time as demand changes.

Maximizing Hardware Utilization

Docker is a cost-effective alternative to traditional virtual machines and enables higher utilization of server capacity. It allows you to create high-density environments and perform smaller deployments, letting businesses achieve more with limited resources.
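The development-to-production flow described in the example scenario above roughly corresponds to the commands below. This is only a sketch: registry.example.com/myapp and the tag 1.0 are placeholder names, and it assumes a Dockerfile exists in the current directory and that you are logged in to the registry.

# Build the application image locally and push it to a shared registry
# (registry.example.com/myapp:1.0 is a placeholder image name).
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# On the test or production host, pull the same image and run it.
docker pull registry.example.com/myapp:1.0
docker run -d --name myapp registry.example.com/myapp:1.0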
Docker Containers vs Virtual Machines

Virtual machines (VMs) and Docker containers are two widely used technologies in modern computing environments, although they have different uses and benefits. Making an informed choice between them for a given use case requires an understanding of their differences.

Architecture

Docker Containers − Docker containers are lightweight and portable, and they share the host OS kernel. They run on top of the host OS and encapsulate the application and its dependencies.
Virtual Machines − Virtual machines, on the other hand, emulate full-fledged hardware, including a guest OS, on top of a hypervisor. Each VM runs its own OS instance, independent of the host OS.

Resource Efficiency

Docker Containers − In terms of resource utilization, containers are highly efficient since they share the host OS kernel and require fewer resources than VMs.
Virtual Machines − VMs consume more resources since each one runs an entire operating system, with its own memory, disk space, and CPU allocation.

Isolation

Docker Containers − Containers provide process-level isolation: they share the same OS kernel but have separate filesystems and networking, achieved through namespaces and control groups.
Virtual Machines − VMs offer stronger isolation since each VM runs its own kernel and has its own dedicated resources. This makes VMs more secure but also heavier.

Portability

Docker Containers − As long as Docker is installed in an environment, containers run consistently across different environments, whether development or production. This makes them highly portable.
Virtual Machines − VMs are less flexible than containers because of differences in underlying hardware and hypervisor configurations, although disk images make them portable to some extent.

Startup Time

Docker Containers − Containers start almost instantly since they use the host OS kernel, which makes them well suited for microservices architectures and rapid scaling.
Virtual Machines − VMs typically take longer to start because they need to boot an entire OS, resulting in slower startup times than containers.

Use Cases

Docker Containers − Docker containers are best suited for microservices architectures, CI/CD pipelines, and applications that require rapid deployment and scaling.
Virtual Machines − VMs are preferred for legacy applications and for workloads with strict security requirements where strong isolation is necessary.

Docker Architecture

Docker uses a client-server architecture. The Docker client communicates with the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The client and daemon can run on the same machine, or the client can connect to a remote Docker daemon. They communicate using a REST API, over UNIX sockets or a network interface.
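Because of this client-server split, the same docker CLI can talk to a daemon running elsewhere. The commands below are a hedged sketch: tcp://remote-host:2375 is a placeholder address, and it assumes the remote daemon has been configured to listen on that TCP socket (a real setup should protect it with TLS).

# Point the docker client at a remote daemon for a single command.
docker -H tcp://remote-host:2375 info

# Alternatively, set DOCKER_HOST so all subsequent commands use the remote daemon.
export DOCKER_HOST=tcp://remote-host:2375
docker ps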