Consul Tutorial

Consul is an important service discovery tool in the world of DevOps. This tutorial covers in-depth working knowledge of Consul, including its setup and deployment. It aims to help new users set up Consul, develop advanced knowledge of it, and explore some interesting projects built around it. By the end, readers should understand the material well enough to use Consul in their daily work. This tutorial will give you a quick start with Consul and make you comfortable with its various components.

Audience
This tutorial is prepared for students, beginners, and intermediate DevOps practitioners, to help them understand the basic to advanced concepts related to the Consul tool.

Prerequisites
Before you start practicing the examples given in this tutorial, it is assumed that you already have a basic knowledge of Linux, Git, Golang, Docker, and AWS (Amazon Web Services).
Consul – Working with Microservices

In this chapter, we will understand how microservices work with Consul. We will also learn how the following components affect Consul:

Using Docker
Building Registrator for Service Discovery
Using rkt and Nomad

Let us now discuss each of these in detail.

Using Docker
Before starting, please do not use this setup in production, as it is meant for demo purposes only. Docker is a container-based service with which we can easily deploy our applications. For using Consul, we are going to use the image at the following link: https://hub.docker.com/r/progrium/consul/. It is assumed that your system has Docker installed and properly configured.

Let us try pulling down the image from the Docker Hub by running the following command −

$ docker pull progrium/consul

We are going to publish some interfaces with their ports (using the -p option of Docker) in the following manner:

8400 (RPC)
8500 (HTTP)
8600 (DNS)

As per the pull made, we are going to set the hostname to node1. You can change it to anything you want by using the -h flag with a hostname of your own, as shown below.

$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap

You can also enable the UI mode for Consul using −

$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap -ui-dir /ui

You can then check the UI-based output at http://localhost:8500.

For using Consul over various Docker containers on different nodes, we can run the following commands on the different nodes −

On Node1
$ docker run -d --name node1 -h node1 progrium/consul -server -bootstrap-expect 3

Here, -bootstrap-expect 3 means that the Consul server will wait until there are 3 peers connected before self-bootstrapping and becoming a working cluster.

Before going any further, we need to get the container's internal IP by inspecting the container. For our use case, we are going to declare it in the variable JOIN_IP.

$ JOIN_IP="$(docker inspect -f '{{.NetworkSettings.IPAddress}}' node1)"

On Node2
Let us start Node2 and tell it to join Node1 using the variable declared above.

$ docker run -d --name node2 -h node2 progrium/consul -server -join $JOIN_IP

On Node3
$ docker run -d --name node3 -h node3 progrium/consul -server -join $JOIN_IP

Building Registrator for Service Discovery
Registrator automatically registers and deregisters services for any Docker container by inspecting containers as they come online. The Registrator we are about to use supports pluggable service registries, which currently include Consul, etcd, and SkyDNS2. Using Registrator is highly recommended when we are interacting with different services over the network.

$ docker pull gliderlabs/registrator:latest

$ docker run -d --name=registrator --net=host --volume=/var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://localhost:8500

The output you receive is the ID of the Docker container you have just started. You can check whether the container is running or not by using the following command −

$ docker ps -a
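To see Registrator at work, you can start a throwaway service container with published ports and then ask Consul's HTTP catalog whether it was registered. This is a minimal sketch for illustration: the container name redis-demo is our own choice, Registrator is assumed to derive the service name "redis" from the image name, and /v1/catalog/service/redis is Consul's standard catalog endpoint.

$ docker run -d -P --name redis-demo redis
$ curl http://localhost:8500/v1/catalog/service/redis

If the registration succeeded, the curl call returns a JSON array containing the node, address, and published port of the redis-demo container.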
You can also view the logs of Registrator by using the following command −

$ docker logs registrator

Using rkt and Nomad
rkt is another container-based service, which you can use in your environment. It is built by CoreOS. The main reason for building rkt was to improve security, which was one of the pressing issues for Docker back when it was still in development in 2013-14.

As for Consul, we can use the rkt Registrator for working on service discovery with Consul. This particular Registrator project for rkt is under development and is not recommended for production-level use. You can check whether rkt is installed or not by going to its path and running the following command −

$ ./rkt

You can check the output to see whether it is correctly installed or not. For trying out rkt and Consul, please check out https://github.com/r3boot/rkt-registrator.

Nomad Tool
One of the most commonly used and favorite options is the Nomad tool. Nomad is a tool for managing a cluster of machines and running applications on them. It is similar to Mesos or Kubernetes. By default, Nomad ships with the Docker and rkt drivers. So, if you are looking for a large-scale deployment of containers with Consul, Nomad might be a good solution. Check out https://www.nomadproject.io/docs/drivers/rkt.html for further information on Nomad.
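However services end up in Consul, whether via Registrator, Nomad, or manual registration, any program can discover them through Consul's official Go client, github.com/hashicorp/consul/api. The following is a minimal sketch, assuming a local agent listening on the default address and a service named "redis" such as the one registered in the Registrator example above.

package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (default: 127.0.0.1:8500).
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the catalog for every known instance of the "redis" service.
	services, _, err := client.Catalog().Service("redis", "", nil)
	if err != nil {
		log.Fatal(err)
	}

	for _, s := range services {
		fmt.Printf("%s -> %s:%d\n", s.ServiceName, s.ServiceAddress, s.ServicePort)
	}
}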
Consul – Architecture

The architecture of Consul working in one datacenter can be described as follows.

There are three different servers, which are managed by Consul. The working architecture uses the Raft algorithm, which helps in electing a leader out of the three different servers. These servers are then labelled according to tags such as Follower and Leader. As the name suggests, the follower is responsible for following the decisions of the leader. All three servers are further connected with each other for communication.

Each server interacts with its own clients using the concept of RPC. Communication between the clients is possible due to the Gossip Protocol, as mentioned below. Communication with the internet can be made available over TCP or via the gossip method of communication, and this communication is in direct contact with any of the three servers.

Raft Algorithm
Raft is a consensus algorithm for managing a replicated log. In terms of the CAP Theorem, which states that in the presence of a network partition one has to choose between consistency and availability, Raft chooses consistency. Not all three fundamentals of the CAP Theorem can be achieved at any given point of time; one has to trade off for two of them at best.

A Raft cluster contains several servers, usually an odd number of them. For example, if we have five servers, the system will tolerate two failures. At any given time, each server is in one of three states: Leader, Follower, or Candidate. In normal operation, there is exactly one leader and all of the other servers are followers. These followers are in a passive state, i.e. they issue no requests on their own, but simply respond to requests from the leader and the candidates.

Key Value Data
Since Consul's version 0.7.1, there has been an introduction of separate key-value data. The kv command is used to interact with Consul's key-value store via the command line. It exposes top-level commands for inserting, updating, reading, and deleting from the store. To get the key/value object store, we call the KV method available on the Consul client −

kv := consul.KV()

The KVPair structure is used to represent a single key/value entry. We can view the structure of the Consul KVPair in the following program.

type KVPair struct {
    Key         string
    CreateIndex uint64
    ModifyIndex uint64
    LockIndex   uint64
    Flags       uint64
    Value       []byte
    Session     string
}

Here, the various fields mentioned in the above code can be defined as follows −

Key − It is a slash-separated URL name. For example, sites/1/domain.
CreateIndex − Index number assigned when the key was first created.
ModifyIndex − Index number assigned when the key was last updated.
LockIndex − Index number created when a new lock is acquired on the key/value entry.
Flags − It can be used by the app to set a custom value.
Value − It is a byte array with a maximum size of 512 KB.
Session − It can be set after creating a session object.

Types of Protocol
There are two types of protocols in Consul, which are called −

Consensus Protocol and
Gossip Protocol

Let us now understand them in detail.

Consensus Protocol
The consensus protocol is used by Consul to provide consistency, as described by the CAP Theorem. This protocol is based on the Raft algorithm.
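To put the KVPair structure into practice, the following is a minimal sketch using the official Go client, github.com/hashicorp/consul/api. It assumes a local agent on the default address and reuses the sites/1/domain key from the example above; the stored value example.com is our own placeholder.

package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	// Insert (or update) a key; Value is a byte array of at most 512 KB.
	pair := &consul.KVPair{Key: "sites/1/domain", Value: []byte("example.com")}
	if _, err := kv.Put(pair, nil); err != nil {
		log.Fatal(err)
	}

	// Read the entry back; Get returns a nil pair when the key is absent.
	got, _, err := kv.Get("sites/1/domain", nil)
	if err != nil || got == nil {
		log.Fatal("key not found: ", err)
	}
	fmt.Printf("%s = %s (ModifyIndex %d)\n", got.Key, got.Value, got.ModifyIndex)
}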
When implementing the consensus protocol, the Raft algorithm is used, where the Raft nodes are always in one of the three states: Follower, Candidate, or Leader.

Gossip Protocol
The gossip protocol is used to manage membership and to send and receive messages across the cluster. In Consul, the usage of the gossip protocol occurs in two ways: over the WAN (Wide Area Network) and over the LAN (Local Area Network). There are three known libraries which can implement a gossip algorithm to discover nodes in a peer-to-peer network −

teknek-gossip − It works with UDP and is written in Java.
gossip-python − It utilizes the TCP stack, and it is possible to share data via the constructed network as well.
Smudge − It is written in Go and uses UDP to exchange status information.

Gossip protocols have also been used for achieving and maintaining distributed database consistency, or with other types of data in consistent states, for counting the number of nodes in a network of unknown size, spreading news robustly, organizing nodes, etc.

Remote Procedure Calls
RPC is the short form of Remote Procedure Calls. It is a protocol that one program uses to request a service from a program located on another computer on a network, without having to understand the networking details. The real beauty of using RPC in Consul is that it helps avoid the latency issues that most discovery service tools had some time ago. Plain TCP- and UDP-based connections were good enough for most systems, but not for distributed systems; RPC addresses this by reducing the time it takes to transfer information from one place to another. In this area, gRPC by Google is a great tool to look at in case one wishes to observe benchmarks and compare performance.
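To observe the membership that the gossip protocol maintains, you can ask the local agent for its view of the cluster. The following is a minimal sketch using the official Go client; Members(false) requests the LAN gossip pool (passing true would request the WAN pool), and a local agent on the default address is assumed.

package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// List the members of the LAN gossip pool as this agent sees them.
	members, err := client.Agent().Members(false)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range members {
		fmt.Printf("%s %s:%d (status %d)\n", m.Name, m.Addr, m.Port, m.Status)
	}
}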