Consul – Installation

For demonstration purposes, we are going to run the Consul agent in developer mode using the -dev flag. For a local machine setup, we will do a single-node Consul setup. Please do not use a single-node Consul cluster in production; as HashiCorp points out, data loss is inevitable in that scenario.

Installing Consul

Consul can be installed via the Downloads page at www.consul.io/downloads.html

Extract the binary package into the Downloads directory of your machine and move it onto your PATH.

$ cd Downloads
$ chmod +x consul
$ sudo mv consul /usr/bin/

Now let us start Consul using the -dev flag.

$ consul agent -dev -data-dir=/tmp/consul

You can then check the Consul members using the following command.

$ consul members

If you want to join other nodes to this node −

$ consul join <Node 2> <Node 3>

Alternatively, you can run the following command on Node 2 and Node 3 −

$ consul join <Node 1>

Using the Command Line

The Consul command line consists of several different options. Some of the most commonly used ones are as follows −

agent − runs a Consul agent.
configtest − validates a config file.
event − starts up a new event.
exec − executes a command on Consul nodes.
force-leave − forces a member of the cluster to leave the cluster.
info − provides debugging information for operators.
join − makes a Consul agent join the cluster.
keygen − generates a new encryption key.
keyring − manages gossip layer encryption keys.
kv − interacts with the key-value store.
leave − leaves the Consul cluster and shuts it down without force.
lock − executes a command while holding a lock.
maint − controls node or service maintenance mode.
members − lists the members of a Consul cluster.
monitor − streams logs from a Consul agent.
operator − provides cluster-level tools for Consul operators.
reload − triggers the agent to reload configuration files.
rtt − estimates the network round trip time between nodes.
snapshot − saves, restores and inspects snapshots of Consul server state.
version − prints the current Consul version.
watch − watches for changes in Consul.

Consul Template

The consul-template tool provides a daemon that queries a Consul instance and updates any number of specified templates on the file system. It can optionally run arbitrary commands when the update process completes, which helps us keep configuration in sync with Consul without doing everything manually. In this demo the template is placed at /tmp/<name-of-file>.conf.ctmpl. The consul-template configuration files are written in HashiCorp Configuration Language (HCL), while the templates themselves use Go's template syntax.

You can download consul-template from its GitHub releases page. Try it out by using the following command −

$ ./consul-template -h

If you wish to move this binary to a more prominent location so that it is always available to the user, you can type in the following commands −

$ chmod +x consul-template
$ sudo mv consul-template /usr/share/bin/
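To get a feel for what consul-template queries under the hood, the following is a minimal sketch using Consul's official Go client, github.com/hashicorp/consul/api. It assumes only a local agent on the default address (127.0.0.1:8500) and lists every service in the catalog along with its healthy instances − the same data the nginx template in the next section renders.

// Minimal sketch: list registered services and their healthy instances.
// Assumes a local agent on the default address and the official Go client.
package main

import (
    "fmt"
    "log"

    "github.com/hashicorp/consul/api"
)

func main() {
    client, err := api.NewClient(api.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }

    // All service names known to the catalog.
    services, _, err := client.Catalog().Services(nil)
    if err != nil {
        log.Fatal(err)
    }

    for name := range services {
        // Only instances whose health checks are passing (passingOnly = true).
        entries, _, err := client.Health().Service(name, "", true, nil)
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            // Service.Address may be empty when it defaults to the node address.
            fmt.Printf("%s -> %s:%d\n", name, e.Service.Address, e.Service.Port)
        }
    }
}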
For demo purposes, we are going to use a sample configuration of nginx as our service. You can try out more demos at https://github.com/hashicorp/consul-template/tree/master/examples or, better, write your own template.

$ vim /tmp/nginx.conf.ctmpl

The config file may look like −

{{range services}} {{$name := .Name}} {{$service := service .Name}}
upstream {{$name}} {
  zone upstream-{{$name}} 64k;
  {{range $service}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
  {{else}}server 127.0.0.1:65535; # force a 502{{end}}
}
{{end}}

server {
  listen 80 default_server;

  location / {
    root /usr/share/nginx/html/;
    index index.html;
  }

  location /stub_status {
    stub_status;
  }

  {{range services}} {{$name := .Name}}
  location /{{$name}} {
    proxy_pass http://{{$name}};
  }
  {{end}}
}

Now, using the consul-template binary, run the following command −

$ consul-template -template="/tmp/nginx.conf.ctmpl:/etc/nginx/conf.d/default.conf"

With the previous command, the rendering process has started. You can later open another terminal and view the fully rendered nginx.conf file using the following command.

$ cat /etc/nginx/conf.d/default.conf
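The upstream blocks in the rendered file stay on the 502 fallback until at least one service is registered. The following is a hedged sketch of registering a service with the local agent through the official Go client, github.com/hashicorp/consul/api, so that consul-template picks it up and re-renders default.conf. The service name, address and port here are assumptions for illustration only.

// Sketch: register a hypothetical "web" service with the local agent so the
// template above renders an upstream for it. Name, address and port are
// made up for this demo; adjust them to your own service.
package main

import (
    "log"

    "github.com/hashicorp/consul/api"
)

func main() {
    client, err := api.NewClient(api.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }

    reg := &api.AgentServiceRegistration{
        Name:    "web",      // hypothetical service name
        Address: "10.0.0.5", // hypothetical instance address
        Port:    8080,       // hypothetical instance port
    }
    if err := client.Agent().ServiceRegister(reg); err != nil {
        log.Fatal(err)
    }
    log.Println("service registered; consul-template should now re-render the config")
}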
Consul – Failover Events

In this chapter, we will learn about failover events in Consul. This will be done with the help of the following topics −

Single Cluster Failure
Jepsen Testing
Multiple Cluster Failure
Taking snapshots

Let us understand each of these in detail.

Single Cluster Failure

In a single cluster failure, the cluster placed in one of the datacenters starts failing. In every scenario, it is important to make sure that in case of a failover the system can not only detect it, but also has a backup that it can rely on. For alerting on Consul failover events, we are going to use a tool called Consul-alerts. The main project can be found at https://github.com/AcalephStorage/consul-alerts.

Consul-alerts is a highly available daemon for sending notifications and reminders based on Consul health checks. This project runs a daemon and API at localhost:9000 and connects to the local Consul agent (localhost:8500) with the default datacenter (dc1).

There are two methods to get started with the project. The first method is to install it via Go. Users who have Go installed and configured can follow the steps given below −

$ go get github.com/AcalephStorage/consul-alerts
$ go install
$ consul-alerts start

The last command can easily be extended to override the default port for consul-alerts, the datacenter option, the Consul ACL token, etc. The command can also be written as given below −

$ consul-alerts start --alert-addr=localhost:9000 --consul-addr=localhost:8500 --consul-dc=dc1 --consul-acl-token=""

The second method involves using Docker. Both methods are equally useful in different scenarios. For using Consul-alerts over Docker, let us pull the image from the Docker Hub by using the following command.

$ docker pull acaleph/consul-alerts

For the Docker method, we can consider the following three options −

Using a Consul agent that is built into the container itself.
Using a Consul agent running in another Docker container.
Using Consul-alerts to link to a remote Consul instance.

Let us now discuss each of these in detail.

Using a Consul agent that is built into the container itself

Let us start the Consul agent using the following command −

$ docker run -ti --rm -p 9000:9000 --hostname consul-alerts --name consul-alerts --entrypoint=/bin/consul acaleph/consul-alerts agent -data-dir /data -server -bootstrap -client=0.0.0.0

Here, we are overriding the entrypoint for Consul as mentioned by the flag --entrypoint. Along with it, we are bootstrapping a server agent, publishing the port with the -p flag, setting the data directory with -data-dir /data, and binding the client interface to 0.0.0.0.

In a new terminal window, let us start consul-alerts.

$ docker exec -ti consul-alerts /bin/consul-alerts start --alert-addr=0.0.0.0:9000 --log-level=info --watch-events --watch-checks

Here, in the above step, we are executing consul-alerts in interactive mode. The alert address port is set to 9000, and the watch flags make the daemon watch Consul events as well as Consul health checks. We can see that consul-alerts has started and that it has registered a new health check with the addition of the Consul agent. The datacenter is taken as dc1, which can be changed according to the user.
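Since consul-alerts reacts to the state of Consul health checks, it helps to have at least one check registered for it to watch. The following is a minimal sketch using the official Go client, github.com/hashicorp/consul/api; the check name and target URL are assumptions made purely for illustration.

// Sketch: register a hypothetical HTTP health check with the local agent so
// consul-alerts (started with --watch-checks) has something to alert on.
package main

import (
    "log"

    "github.com/hashicorp/consul/api"
)

func main() {
    client, err := api.NewClient(api.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }

    check := &api.AgentCheckRegistration{
        Name: "web-http-check", // hypothetical check name
        AgentServiceCheck: api.AgentServiceCheck{
            HTTP:     "http://localhost:8080/health", // hypothetical endpoint
            Interval: "10s",
            Timeout:  "2s",
        },
    }
    if err := client.Agent().CheckRegister(check); err != nil {
        log.Fatal(err)
    }
    log.Println("health check registered; consul-alerts will report its state transitions")
}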
Using a Consul agent running in another Docker container

Here, you can use any Consul image to run the agent in its own Docker container. Using the consul-alerts image, we can easily link the Consul container with the consul-alerts container. This is done using the --link flag.

Note − Before using the following command, please make sure that the Consul container is already running in another terminal.

$ docker run -ti -p 9000:9000 --hostname consul-alerts --name consul-alerts --link consul:consul acaleph/consul-alerts start --consul-addr=consul:8500 --log-level=info --watch-events --watch-checks

Using Consul-alerts to link to a remote Consul instance

Here, we use the following command to point Consul-alerts at a remote Consul instance.

$ docker run -ti -p 9000:9000 --hostname consul-alerts --name consul-alerts acaleph/consul-alerts start --consul-addr=remote-consul-server.domain.tdl:8500 --log-level=info --watch-events --watch-checks

Jepsen Testing

Jepsen is a tool written to test partition tolerance and networking in a system. It tests the system by performing random operations against it. Jepsen is written in Clojure. Unfortunately, a Jepsen demo requires a sizeable cluster set up with database systems and hence is out of scope here.

Jepsen works by setting up the data store under test on five different hosts. It creates a client for the data store under test, pointing at each of the five nodes to send requests. It also creates a special series of client(s) called "Nemesis", which wreak havoc in the cluster, for example by cutting links between nodes using iptables. Then it proceeds to make requests concurrently against different nodes while alternately partitioning and healing the network. At the end of the test run, it heals the cluster, waits for the cluster to recover, and then verifies whether the intermediate and final state of the system is as expected. For more information on Jepsen testing, refer to the Jepsen project documentation.

Multiple Cluster Failure

During a multiple cluster failover event, the clusters deployed in multiple datacenters fail to support the services offered to the customer. Consul has features that help keep services available when such a condition occurs. For this, we will look at a project that helps us replicate data from one Consul cluster to multiple clusters. The project provides a way to replicate K/V pairs across multiple Consul datacenters using the consul-replicate daemon. You can view this HashiCorp project at https://github.com/hashicorp/consul-replicate. Some of the prerequisites for trying out this project include −

Golang
Docker
Consul
Git

Let us get started with the following commands −

Note − Before running the following command, please make sure you have Git properly installed and configured on your machine.

$ git clone https://github.com/hashicorp/consul-replicate.git
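Once consul-replicate is built and running, a key written under the replicated prefix in the source datacenter should appear in the destination datacenter. The following is a hedged sketch using the official Go client, github.com/hashicorp/consul/api; the datacenter names (dc1, dc2) and the key prefix (global/) are assumptions for illustration, and the effective destination prefix depends on how consul-replicate is configured.

// Sketch: write a K/V pair in the source datacenter and read it back from the
// destination datacenter to confirm replication. Datacenter names and the key
// prefix are assumptions; adjust them to your consul-replicate configuration.
package main

import (
    "fmt"
    "log"

    "github.com/hashicorp/consul/api"
)

func main() {
    client, err := api.NewClient(api.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }
    kv := client.KV()

    // Write into the source datacenter.
    pair := &api.KVPair{Key: "global/app/config", Value: []byte("v1")}
    if _, err := kv.Put(pair, &api.WriteOptions{Datacenter: "dc1"}); err != nil {
        log.Fatal(err)
    }

    // Read from the destination datacenter once consul-replicate has copied it.
    got, _, err := kv.Get("global/app/config", &api.QueryOptions{Datacenter: "dc2"})
    if err != nil {
        log.Fatal(err)
    }
    if got != nil {
        fmt.Printf("replicated value: %s\n", got.Value)
    }
}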
Consul – Quick Guide

Consul – Introduction

Consul is a HashiCorp tool for discovering and configuring a variety of different services in your infrastructure. It is built on and written in Golang. One of the core reasons to build Consul was to maintain the services present in distributed systems. Some of the significant features that Consul provides are as follows.

Service Discovery − Using either DNS or HTTP, applications can easily find the services they depend upon.

Health Check Status − It can provide any number of health checks. They are used by the service discovery components to route traffic away from unhealthy hosts.

Key/Value Store − Applications can make use of Consul's hierarchical key/value store for any number of purposes, including dynamic configuration, feature flagging, coordination, leader election, etc.

Multi Datacenter Deployment − Consul supports multiple datacenters. It is used for building additional layers of abstraction to grow to multiple regions.

Web UI − Consul provides its users a web interface with which it is easy to use and manage all of the features in Consul.

Service Discovery

Service discovery is one of the most important features of Consul. It is defined as the detection of the different services and network protocols by which a service is found. The usage of service discovery comes as a boon for distributed systems. This is one of the main problems faced by today's large-scale industries with the advancement of distributed systems in their environment.

Comparison with Etcd and Zookeeper

When we look at other service discovery tools in this domain, we have two popular options that major players in the software industry have used in the past: Etcd and Zookeeper. Let us consider the following table for comparing different aspects of each tool. We will also understand what each one of them uses internally.

Properties       Consul                 Etcd                  Zookeeper
User Interface   Available              -                     -
RPC              -                      Available             Available
Health Check     HTTP API               HTTP API              TCP
Key Value        3 Consistency modes    Good Consistency      Strong Consistency
Token System     Available              -                     -
Language         Golang                 Golang                Java

Consul – Members and Agents

Consul members can be defined as the list of different agents and server modes using which a Consul cluster is deployed. Consul provides us with a command line feature using which we can easily list all the agents associated with Consul.

A Consul agent is the core process of Consul. The agent maintains membership information, registers services, runs checks, responds to queries, etc. Any agent can be run in one of two modes: Client or Server. These two modes are used according to the role decided when using Consul. The Consul agent provides the information listed below; a short sketch of retrieving cluster membership programmatically follows the list.

Node name − This is the hostname of the machine.

Datacenter − The datacenter in which the agent is configured to run. Each node must be configured to report to its datacenter.

Server − It indicates whether the agent is running in server or client mode. Server nodes participate in the consensus quorum, storing cluster state and handling queries.

Client Addr − It is the address used for client interfaces by the agent. It includes the ports for the HTTP, DNS, and RPC interfaces.

Cluster Addr − It is the address and the set of ports used for communication between Consul agents in a cluster. This address must be reachable by all other nodes.
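As an alternative to the command line, the same membership information can be fetched programmatically. The following is a minimal sketch using the official Go client, github.com/hashicorp/consul/api, assuming a local agent on the default address; it prints roughly what "consul members" shows.

// Sketch: list the members of the cluster known to the local agent.
package main

import (
    "fmt"
    "log"

    "github.com/hashicorp/consul/api"
)

func main() {
    client, err := api.NewClient(api.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }

    // false = LAN members of the local datacenter (true would list WAN members).
    members, err := client.Agent().Members(false)
    if err != nil {
        log.Fatal(err)
    }
    for _, m := range members {
        fmt.Printf("node=%s addr=%s:%d status=%d\n", m.Name, m.Addr, m.Port, m.Status)
    }
}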
In the next chapter, we will understand the architecture of Consul.

Consul – Architecture

The architecture of Consul working in one datacenter can be described as follows. There are three different servers, which are managed by Consul. The working architecture uses the Raft algorithm, which helps elect a leader out of the three servers. These servers are then labelled according to tags such as Follower and Leader. As the names suggest, the followers are responsible for following the decisions of the leader. All three servers are connected with each other for communication.

Each server interacts with its own clients using RPC. The communication between the clients is possible due to the Gossip Protocol. Communication with the outside world can be made available using TCP or the gossip method of communication, and it is in direct contact with any of the three servers.

Raft Algorithm

Raft is a consensus algorithm for managing a replicated log. It relates to the CAP Theorem, which states that in the presence of a network partition, one has to choose between consistency and availability. Not all three fundamentals of the CAP Theorem can be achieved at any given point of time; one has to trade off for the best two of them.

A Raft cluster contains several servers, usually an odd number of them. For example, if we have five servers, the system will tolerate two failures. At any given time, each server is in one of three states: Leader, Follower, or Candidate. In normal operation, there is exactly one leader and all of the other servers are followers. These followers are in a passive state, i.e. they issue no requests on their own, but simply respond to requests from the leader and from candidates.

Key Value Data

Since Consul version 0.7.1, there has been an introduction of separate key value data. The kv command is used to interact with Consul's key-value store via the command line. It exposes top-level commands for inserting, updating, reading and deleting from the store. To get the Key/Value object store, we call the KV method available on the Consul client −

kv := consul.KV()

The KVPair structure is used to represent a single key/value entry. We can view the structure of the Consul KVPair in the following program.
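The following is based on the KVPair type from Consul's official Go client, github.com/hashicorp/consul/api; the exact field set may vary slightly between client versions.

type KVPair struct {
    Key         string // full path of the entry in the KV store
    CreateIndex uint64 // internal index at which the entry was created
    ModifyIndex uint64 // index at which the entry was last modified
    LockIndex   uint64 // number of times this key has been successfully locked
    Flags       uint64 // opaque integer value clients can use for any purpose
    Value       []byte // the value itself, returned as raw bytes
    Session     string // ID of the session that owns the lock, if any
}

A typical interaction with the store through the kv handle obtained above looks roughly like the following sketch; the key name is chosen purely for illustration, and the fmt and log packages are assumed to be imported.

// Hedged usage sketch with the kv handle obtained above.
p := &api.KVPair{Key: "app/config/port", Value: []byte("8500")} // hypothetical key
if _, err := kv.Put(p, nil); err != nil {
    log.Fatal(err)
}
pair, _, err := kv.Get("app/config/port", nil)
if err != nil {
    log.Fatal(err)
}
fmt.Println(string(pair.Value))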