Elasticsearch – Modules

Elasticsearch is composed of a number of modules, which are responsible for its functionality. These modules have two types of settings, as follows −

Static Settings − These settings need to be configured in the config file (elasticsearch.yml) before starting Elasticsearch. You need to update all the concerned nodes in the cluster to reflect the changes made by these settings.

Dynamic Settings − These settings can be set on a live Elasticsearch cluster.

We will discuss the different modules of Elasticsearch in the following sections of this chapter.

Cluster-Level Routing and Shard Allocation

Cluster-level settings decide the allocation of shards to different nodes and the reallocation of shards to rebalance the cluster. The following settings control shard allocation.

Cluster-Level Shard Allocation

cluster.routing.allocation.enable
   all (default) − Allows shard allocation for all kinds of shards.
   primaries − Allows shard allocation only for primary shards.
   new_primaries − Allows shard allocation only for primary shards of new indices.
   none − Does not allow any shard allocation.

cluster.routing.allocation.node_concurrent_recoveries − Numeric value (by default 2). Restricts the number of concurrent shard recoveries on a node.

cluster.routing.allocation.node_initial_primaries_recoveries − Numeric value (by default 4). Restricts the number of parallel initial primary recoveries.

cluster.routing.allocation.same_shard.host − Boolean value (by default false). Restricts the allocation of more than one copy of the same shard on the same physical host.

indices.recovery.concurrent_streams − Numeric value (by default 3). Controls the number of open network streams per node at the time of shard recovery from peer shards.

indices.recovery.concurrent_small_file_streams − Numeric value (by default 2). Controls the number of open streams per node for small files (smaller than 5 MB) at the time of shard recovery.

cluster.routing.rebalance.enable
   all (default) − Allows rebalancing for all kinds of shards.
   primaries − Allows rebalancing only for primary shards.
   replicas − Allows rebalancing only for replica shards.
   none − Does not allow any kind of shard rebalancing.

cluster.routing.allocation.allow_rebalance
   always (default) − Always allows rebalancing.
   indices_primaries_active − Allows rebalancing when all primary shards in the cluster are allocated.
   indices_all_active − Allows rebalancing when all primary and replica shards are allocated.

cluster.routing.allocation.cluster_concurrent_rebalance − Numeric value (by default 2). Restricts the number of concurrent shard rebalances in the cluster.

cluster.routing.allocation.balance.shard − Float value (by default 0.45f). Defines the weight factor for the number of shards allocated on every node.

cluster.routing.allocation.balance.index − Float value (by default 0.55f). Defines the weight factor for the number of shards per index allocated on a specific node.

cluster.routing.allocation.balance.threshold − Non-negative float value (by default 1.0f). The minimum optimization value of operations that should be performed.

Disk-based Shard Allocation

cluster.routing.allocation.disk.threshold_enabled − Boolean value (by default true). Enables or disables the disk allocation decider.
cluster.routing.allocation.disk.watermark.low − String value (by default 85%). Denotes the maximum usage of disk; after this point, no new shard can be allocated to that node.

cluster.routing.allocation.disk.watermark.high − String value (by default 90%). Denotes the maximum usage at the time of allocation; if this point is reached, Elasticsearch starts relocating shards away from that node.

cluster.info.update.interval − String value (by default 30s). The interval between disk usage checks.

cluster.routing.allocation.disk.include_relocations − Boolean value (by default true). Decides whether to consider the shards currently being relocated while calculating disk usage.

Discovery

This module helps a cluster to discover and maintain the state of all the nodes in it. The state of the cluster changes when a node is added to or removed from it. The cluster name setting is used to create a logical difference between different clusters. There are several discovery modules, some of which help you to use the APIs provided by cloud vendors; they are given below −

Azure discovery
EC2 discovery
Google Compute Engine discovery
Zen discovery

Gateway

This module maintains the cluster state and the shard data across full cluster restarts. The following are the static settings of this module −

gateway.expected_nodes − Numeric value (by default 0). The number of nodes that are expected to be in the cluster for the recovery of local shards.

gateway.expected_master_nodes − Numeric value (by default 0). The number of master nodes that are expected to be in the cluster before recovery starts.

gateway.expected_data_nodes − Numeric value (by default 0). The number of data nodes expected in the cluster before recovery starts.

gateway.recover_after_time − String value (by default 5m). The time for which the recovery process will wait before starting, regardless of the number of nodes joined in the cluster.

gateway.recover_after_nodes, gateway.recover_after_master_nodes, gateway.recover_after_data_nodes − The minimum number of nodes (of any kind, master-eligible, or data, respectively) that must have joined the cluster before recovery of local shards starts.

HTTP

This module manages the communication between the HTTP client and the Elasticsearch APIs. This module can be disabled by changing the value of http.enabled to false. The following are the settings (configured in elasticsearch.yml) to control this module −

1. http.port − The port to access Elasticsearch; it ranges from 9200-9300.
2. http.publish_port − The port for HTTP clients; it is also useful in case of a firewall.
3. http.bind_host − The host address for the HTTP service.
4. http.publish_host − The host address for the HTTP client.
5. http.max_content_length − The maximum size of content in an HTTP request. Its default value is 100mb.
6. http.max_initial_line_length − The maximum size of the URL; its default value is 4kb.
7. http.max_header_size − The maximum HTTP header size; its default value is 8kb.
8. http.compression − Enables or disables support for compression; its default value is false.
9. http.pipelining − Enables or disables HTTP pipelining.
10. http.pipelining.max_events − Restricts the number of events to be queued before closing an HTTP request.

Indices

This module maintains the settings that are set globally for every index. The following settings are mainly related to memory usage −

Circuit Breaker

This is used for preventing an operation from causing an OutOfMemoryError.
These breaker settings mainly restrict how much of the JVM heap an operation can use. For example, the indices.breaker.total.limit setting defaults to 70% of the JVM heap.

Fielddata Cache

The fielddata cache is used mainly when sorting on or aggregating the values of text fields; its size can be bounded with the indices.fielddata.cache.size setting.
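Circuit-breaker limits are dynamic cluster settings, so they can also be adjusted on a running cluster rather than only in elasticsearch.yml. Below is a minimal sketch using the cluster settings API; the percentage values are purely illustrative and should be chosen to suit your own heap −

PUT /_cluster/settings
{
   "persistent": {
      "indices.breaker.total.limit": "65%",
      "indices.breaker.fielddata.limit": "35%"
   }
}

Here indices.breaker.fielddata.limit caps the memory used while loading fielddata, which complements the fielddata cache described above.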
Elasticsearch – Mapping

Mapping is the outline of the documents stored in an index. It defines the data type (like geo_point or string) and the format of the fields present in the documents, as well as rules to control the mapping of dynamically added fields.

PUT bankaccountdetails
{
   "mappings":{
      "properties":{
         "name": { "type":"text"},
         "date":{ "type":"date"},
         "balance":{ "type":"double"},
         "liability":{ "type":"double"}
      }
   }
}

When we run the above code, we get the response as shown below −

{ "acknowledged" : true, "shards_acknowledged" : true, "index" : "bankaccountdetails" }

Field Data Types

Elasticsearch supports a number of different datatypes for the fields in a document. The data types used to store fields in Elasticsearch are discussed in detail here.

Core Data Types

These are the basic data types such as text, keyword, date, long, double, boolean or ip, which are supported by almost all the systems.

Complex Data Types

These data types are a combination of core data types. These include array, JSON object and nested data type. An example of the nested data type is shown below −

POST /tabletennis/_doc/1
{
   "group" : "players",
   "user" : [
      { "first" : "dave", "last" : "jones" },
      { "first" : "kevin", "last" : "morris" }
   ]
}

When we run the above code, we get the response as shown below −

{ "_index" : "tabletennis", "_type" : "_doc", "_id" : "1", "_version" : 2, "result" : "updated", "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 }, "_seq_no" : 1, "_primary_term" : 1 }

Another sample code is shown below −

POST /accountdetails/_doc/1
{
   "from_acc":"7056443341",
   "to_acc":"7032460534",
   "date":"11/1/2016",
   "amount":10000
}

When we run the above code, we get the response as shown below −

{ "_index" : "accountdetails", "_type" : "_doc", "_id" : "1", "_version" : 1, "result" : "created", "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 }, "_seq_no" : 1, "_primary_term" : 1 }

We can check the mapping of the above document by using the following command −

GET /accountdetails/_mappings?include_type_name=false

Removal of Mapping Types

Indices created in Elasticsearch 7.0.0 or later no longer accept a _default_ mapping. Indices created in 6.x will continue to function as before in Elasticsearch 6.x. Types are deprecated in APIs in 7.0.
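New fields can also be added to an index that already exists, although the type of an existing field cannot be changed. As a small sketch (the accounttype field below is hypothetical and only for illustration), the put mapping API extends the bankaccountdetails index created above −

PUT /bankaccountdetails/_mapping
{
   "properties": {
      "accounttype": { "type": "keyword" }
   }
}

Running GET /bankaccountdetails/_mapping afterwards should show the new field alongside the original ones.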
Elasticsearch – Analysis

When a query is processed during a search operation, the content in any index is analyzed by the analysis module. This module consists of analyzers, tokenizers, token filters and character filters. If no analyzer is defined, then the built-in analyzers, tokenizers, token filters and character filters are registered with the analysis module by default.

In the following example, we use the standard analyzer, which is used when no other analyzer is specified. It will analyze the sentence based on the grammar and produce the words used in the sentence.

POST _analyze
{
   "analyzer": "standard",
   "text": "Today's weather is beautiful"
}

On running the above code, we get the response as shown below −

{ "tokens" : [ { "token" : "today's", "start_offset" : 0, "end_offset" : 7, "type" : "<ALPHANUM>", "position" : 0 }, { "token" : "weather", "start_offset" : 8, "end_offset" : 15, "type" : "<ALPHANUM>", "position" : 1 }, { "token" : "is", "start_offset" : 16, "end_offset" : 18, "type" : "<ALPHANUM>", "position" : 2 }, { "token" : "beautiful", "start_offset" : 19, "end_offset" : 28, "type" : "<ALPHANUM>", "position" : 3 } ] }

Configuring the Standard Analyzer

We can configure the standard analyzer with various parameters to suit our custom requirements. In the following example, we configure the standard analyzer to have a max_token_length of 5. For this, we first create an index with an analyzer that has the max_token_length parameter.

PUT index_4_analysis
{
   "settings": {
      "analysis": {
         "analyzer": {
            "my_english_analyzer": {
               "type": "standard",
               "max_token_length": 5,
               "stopwords": "_english_"
            }
         }
      }
   }
}

Next we apply the analyzer to a text as shown below. Please note how the token "is" does not appear in the output − since the analyzer is configured with the _english_ stop word list, "is" is removed as an English stop word. Also, because max_token_length is 5, longer words such as "weather" and "beautiful" are split into chunks of at most 5 characters.

POST index_4_analysis/_analyze
{
   "analyzer": "my_english_analyzer",
   "text": "Today's weather is beautiful"
}

On running the above code, we get the response as shown below −

{ "tokens" : [ { "token" : "today", "start_offset" : 0, "end_offset" : 5, "type" : "<ALPHANUM>", "position" : 0 }, { "token" : "s", "start_offset" : 6, "end_offset" : 7, "type" : "<ALPHANUM>", "position" : 1 }, { "token" : "weath", "start_offset" : 8, "end_offset" : 13, "type" : "<ALPHANUM>", "position" : 2 }, { "token" : "er", "start_offset" : 13, "end_offset" : 15, "type" : "<ALPHANUM>", "position" : 3 }, { "token" : "beaut", "start_offset" : 19, "end_offset" : 24, "type" : "<ALPHANUM>", "position" : 5 }, { "token" : "iful", "start_offset" : 24, "end_offset" : 28, "type" : "<ALPHANUM>", "position" : 6 } ] }

The list of various analyzers and their descriptions is given below −

1. Standard analyzer (standard) − The stopwords and max_token_length settings can be set for this analyzer. By default, the stopwords list is empty and max_token_length is 255.
2. Simple analyzer (simple) − This analyzer is composed of the lowercase tokenizer.
3. Whitespace analyzer (whitespace) − This analyzer is composed of the whitespace tokenizer.
4. Stop analyzer (stop) − stopwords and stopwords_path can be configured. By default, stopwords is initialized to the English stop words and stopwords_path contains the path to a text file with stop words.

Tokenizers

Tokenizers are used for generating tokens from a text in Elasticsearch. Text can be broken down into tokens by taking whitespace or other punctuation into account.
Elasticsearch has plenty of built-in tokenizers, which can be used in a custom analyzer. An example of the lowercase tokenizer, which breaks text into terms whenever it encounters a character which is not a letter and also lowercases all terms, is shown below −

POST _analyze
{
   "tokenizer": "lowercase",
   "text": "It Was a Beautiful Weather 5 Days ago."
}

On running the above code, we get the response as shown below −

{ "tokens" : [ { "token" : "it", "start_offset" : 0, "end_offset" : 2, "type" : "word", "position" : 0 }, { "token" : "was", "start_offset" : 3, "end_offset" : 6, "type" : "word", "position" : 1 }, { "token" : "a", "start_offset" : 7, "end_offset" : 8, "type" : "word", "position" : 2 }, { "token" : "beautiful", "start_offset" : 9, "end_offset" : 18, "type" : "word", "position" : 3 }, { "token" : "weather", "start_offset" : 19, "end_offset" : 26, "type" : "word", "position" : 4 }, { "token" : "days", "start_offset" : 29, "end_offset" : 33, "type" : "word", "position" : 5 }, { "token" : "ago", "start_offset" : 34, "end_offset" : 37, "type" : "word", "position" : 6 } ] }

A list of tokenizers and their descriptions is shown below −

1. Standard tokenizer (standard) − This is a grammar-based tokenizer, and max_token_length can be configured for it.
2. Edge NGram tokenizer (edgeNGram) − Settings like min_gram, max_gram and token_chars can be set for this tokenizer.
3. Keyword tokenizer (keyword) − This emits the entire input as a single token, and buffer_size can be set for it.
4. Letter tokenizer (letter) − This captures the whole word until a non-letter character is encountered.
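Built-in tokenizers and token filters can also be combined into a custom analyzer of your own. The sketch below is only illustrative (the index name index_5_analysis and the analyzer name my_custom_analyzer are invented for this example); it pairs the standard tokenizer with the lowercase token filter −

PUT index_5_analysis
{
   "settings": {
      "analysis": {
         "analyzer": {
            "my_custom_analyzer": {
               "type": "custom",
               "tokenizer": "standard",
               "filter": [ "lowercase" ]
            }
         }
      }
   }
}

It can then be tested with POST index_5_analysis/_analyze by passing "analyzer": "my_custom_analyzer" together with a sample "text", in the same way as the earlier examples.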
Elasticsearch – Index Modules

These are the modules which are created for every index and control the settings and behaviour of the indices − for example, how many shards an index can use, or the number of replicas a primary shard can have for that index. There are two types of index settings −

Static − These can be set only at index creation time or on a closed index.
Dynamic − These can be changed on a live index.

Static Index Settings

The following is the list of static index settings −

index.number_of_shards − Defaults to 1 from Elasticsearch 7.0 (earlier versions defaulted to 5); maximum 1024. The number of primary shards that an index should have.

index.shard.check_on_startup − Defaults to false; can be true. Whether or not shards should be checked for corruption before opening.

index.codec − LZ4 compression by default. The type of compression used to store data.

index.routing_partition_size − Defaults to 1. The number of shards a custom routing value can go to.

index.load_fixed_bitset_filters_eagerly − Defaults to true. Indicates whether cached filters are pre-loaded for nested queries.

Dynamic Index Settings

The following is the list of dynamic index settings −

index.number_of_replicas − Defaults to 1. The number of replicas each primary shard has.

index.auto_expand_replicas − A dash-delimited lower and upper bound (for example 0-5). Auto-expands the number of replicas based on the number of data nodes in the cluster.

index.search.idle.after − Defaults to 30 seconds. How long a shard cannot receive a search or get request until it is considered search idle.

index.refresh_interval − Defaults to 1 second. How often to perform a refresh operation, which makes recent changes to the index visible to search.
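As a brief sketch of how these two kinds of settings are used (the index name my_index is only an example), static settings are supplied when the index is created, while dynamic settings can be changed later through the update index settings API −

PUT /my_index
{
   "settings": {
      "index.number_of_shards": 3,
      "index.number_of_replicas": 1
   }
}

PUT /my_index/_settings
{
   "index.refresh_interval": "30s"
}

Trying to change index.number_of_shards with the second request would be rejected, because it is a static setting.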
Elasticsearch – Monitoring

To monitor the health of the cluster, the monitoring feature collects metrics from each node and stores them in Elasticsearch indices. All settings associated with monitoring in Elasticsearch must be set in either the elasticsearch.yml file for each node or, where possible, in the dynamic cluster settings.

In order to start monitoring, we need to check the cluster settings, which can be done in the following way −

GET _cluster/settings

{ "persistent" : { }, "transient" : { } }

Each component in the stack is responsible for monitoring itself and then forwarding those documents to the Elasticsearch production cluster for both routing and indexing (storage). The routing and indexing processes in Elasticsearch are handled by what are called collectors and exporters.

Collectors

A collector runs once per collection interval to obtain data from the public APIs in Elasticsearch that it chooses to monitor. When the data collection is finished, the data is handed in bulk to the exporters to be sent to the monitoring cluster. There is only one collector per data type gathered. Each collector can create zero or more monitoring documents.

Exporters

Exporters take data collected from any Elastic Stack source and route it to the monitoring cluster. It is possible to configure more than one exporter, but the general and default setup is to use a single exporter. Exporters are configurable at both the node and cluster level. There are two types of exporters in Elasticsearch −

local − This exporter routes data back into the same cluster.
http − The preferred exporter, which you can use to route data into any supported Elasticsearch cluster accessible via HTTP.

Before exporters can route monitoring data, they must set up certain Elasticsearch resources. These resources include templates and ingest pipelines.
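Collection of monitoring data is itself controlled by a dynamic cluster setting. A minimal sketch for switching it on, assuming the default self-monitoring (local exporter) setup, is shown below −

PUT /_cluster/settings
{
   "persistent": {
      "xpack.monitoring.collection.enabled": true
   }
}

Once this is enabled, indices whose names begin with .monitoring- start to appear in the cluster and can be explored from the Stack Monitoring UI in Kibana.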
Elasticsearch – Tag Clouds

A tag cloud represents text, mostly keywords and metadata, in a visually appealing form. The terms are aligned at different angles and represented in different colours and font sizes. It helps in finding out the most prominent terms in the data. The prominence can be decided by one or more factors such as the frequency of the term, the uniqueness of the tag, or some weightage attached to specific terms. Below we see the steps to create a Tag Cloud.

Visualize

In the Kibana Home screen, we find the option named Visualize, which allows us to create visualizations and aggregations from the indices stored in Elasticsearch. We choose to add a new visualization and select Tag Cloud as the option shown below −

Choose the Metrics

The next screen prompts us for choosing the metrics which will be used in creating the Tag Cloud. Here we choose count as the type of aggregation metric. Then we choose the productname field as the keyword to be used as tags. The result shown here is the tag cloud after we apply the selection. Please note the shades of the colour and their values mentioned in the label.

Tag Cloud Options

On moving to the options tab under Tag Cloud, we can see various configuration options to change the look as well as the arrangement of data display in the Tag Cloud. In the below example, the Tag Cloud appears with tags spread across both horizontal and vertical directions.
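Under the hood, a tag cloud of this kind is driven by a terms aggregation combined with a count metric. A rough, hypothetical equivalent of the request Kibana issues is sketched below; the index name ecommercedata and the field name productname.keyword are assumptions and need to be replaced with the actual index and a keyword-mapped field from your own data −

POST /ecommercedata/_search
{
   "size": 0,
   "aggs": {
      "tags": {
         "terms": { "field": "productname.keyword", "size": 25 }
      }
   }
}

The buckets returned in the tags aggregation correspond to the terms drawn in the cloud, and each bucket's doc_count decides how prominently the term is rendered.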
Elasticsearch – Canvas

The Canvas application is a part of Kibana which allows us to create dynamic, multi-page and pixel-perfect data displays. Its ability to create infographics, and not just charts and metrics, is what makes it unique and appealing. In this chapter we will see various features of Canvas and how to use the Canvas workpads.

Opening a Canvas

Go to the Kibana homepage and select the option as shown in the below diagram. It opens up the list of Canvas workpads you have. We choose the [eCommerce] Revenue Tracking workpad for our study.

Cloning a Workpad

We clone the [eCommerce] Revenue Tracking workpad to be used in our study. To clone it, we highlight the row with the name of this workpad and then use the clone button as shown in the diagram below −

As a result of the above clone, we will get a new workpad named [eCommerce] Revenue Tracking – Copy which, on opening, will show the below infographics. It describes the total sales and revenue by category along with nice pictures and charts.

Modifying the Workpad

We can change the style and figures in the workpad by using the options available in the right-hand side tab. Here we aim to change the background colour of the workpad by choosing a different colour as shown in the diagram below. The colour selection comes into effect immediately and we get the result as shown below −
Elasticsearch – Area and Bar Charts

An area chart is an extension of a line chart where the area between the line and the axes is highlighted with some colour. A bar chart represents data organized into a range of values and then plotted against the axes; it can consist of either horizontal bars or vertical bars. In this chapter we will see all three of these types of charts created using Kibana. As discussed in earlier chapters, we will continue to use the data in the ecommerce index.

Area Chart

In the Kibana Home screen, we find the option named Visualize, which allows us to create visualizations and aggregations from the indices stored in Elasticsearch. We choose to add a new visualization and select Area Chart as the option shown in the image given below.

Choose the Metrics

The next screen prompts us for choosing the metrics which will be used in creating the Area Chart. Here we choose sum as the type of aggregation metric. Then we choose the total_quantity field as the field to be used as the metric. On the X-axis, we choose the order_date field and split the series with the given metric in a size of 5. On running the above configuration, we get the following area chart as the output −

Horizontal Bar Chart

Similarly, for the horizontal bar chart we choose a new visualization from the Kibana Home screen and choose the option for Horizontal Bar. Then we choose the metrics as shown in the image below. Here we choose Sum as the aggregation for the field named total_quantity. Then we choose buckets with a date histogram for the field order_date. On running the above configuration, we can see a horizontal bar chart as shown below −

Vertical Bar Chart

For the vertical bar chart, we choose a new visualization from the Kibana Home screen and choose the option for Vertical Bar. Then we choose the metrics as shown in the image below. Here we choose Sum as the aggregation for the field named total_quantity. Then we choose buckets with a date histogram for the field order_date with a weekly interval. On running the above configuration, a chart will be generated as shown below −
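The configurations above correspond to a date histogram aggregation with a sum sub-aggregation. A hypothetical sketch of the equivalent query is given below; the index name kibana_sample_data_ecommerce is an assumption based on the Kibana sample data, and in Elasticsearch 7.0 the interval parameter is named interval (later 7.x releases prefer calendar_interval) −

POST /kibana_sample_data_ecommerce/_search
{
   "size": 0,
   "aggs": {
      "orders_over_time": {
         "date_histogram": { "field": "order_date", "interval": "week" },
         "aggs": {
            "total_items": { "sum": { "field": "total_quantity" } }
         }
      }
   }
}

Each bucket of orders_over_time carries a total_items value, which is what the bar or area height represents for that week.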
Elasticsearch – Cluster APIs

The cluster API is used for getting information about the cluster and its nodes and for making changes in them. To call this API, we need to specify the node name, an address, or _local.

GET /_nodes/_local

On running the above code, we get the response as shown below −

……………………………………………… "cluster_name" : "elasticsearch", "nodes" : { "FKH-5blYTJmff2rJ_lQOCg" : { "name" : "ubuntu", "transport_address" : "127.0.0.1:9300", "host" : "127.0.0.1", "ip" : "127.0.0.1", "version" : "7.0.0", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "b7e28a7", "total_indexing_buffer" : 106502553, "roles" : [ "master", "data", "ingest" ], "attributes" : { ………………………………………………

Cluster Health

This API is used to get the status of the health of the cluster by appending the ‘health’ keyword.

GET /_cluster/health

On running the above code, we get the response as shown below −

{ "cluster_name" : "elasticsearch", "status" : "yellow", "timed_out" : false, "number_of_nodes" : 1, "number_of_data_nodes" : 1, "active_primary_shards" : 7, "active_shards" : 7, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 4, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 63.63636363636363 }

Cluster State

This API is used to get state information about a cluster by appending the ‘state’ keyword to the URL. The state information contains version, master node, other nodes, routing table, metadata and blocks.

GET /_cluster/state

On running the above code, we get the response as shown below −

……………………………………………… { "cluster_name" : "elasticsearch", "cluster_uuid" : "IzKu0OoVTQ6LxqONJnN2eQ", "version" : 89, "state_uuid" : "y3BlwvspR1eUQBTo0aBjig", "master_node" : "FKH-5blYTJmff2rJ_lQOCg", "blocks" : { }, "nodes" : { "FKH-5blYTJmff2rJ_lQOCg" : { "name" : "ubuntu", "ephemeral_id" : "426kTGpITGixhEzaM-5Qyg", "transport } ………………………………………………

Cluster Stats

This API helps to retrieve statistics about the cluster by using the ‘stats’ keyword. This API returns the shard number, store size, memory usage, number of nodes, roles, OS and file system.

GET /_cluster/stats

On running the above code, we get the response as shown below −

…………………………………………. "cluster_name" : "elasticsearch", "cluster_uuid" : "IzKu0OoVTQ6LxqONJnN2eQ", "timestamp" : 1556435464704, "status" : "yellow", "indices" : { "count" : 7, "shards" : { "total" : 7, "primaries" : 7, "replication" : 0.0, "index" : { "shards" : { "min" : 1, "max" : 1, "avg" : 1.0 }, "primaries" : { "min" : 1, "max" : 1, "avg" : 1.0 }, "replication" : { "min" : 0.0, "max" : 0.0, "avg" : 0.0 } ………………………………………….

Cluster Update Settings

This API allows you to update the settings of a cluster by using the ‘settings’ keyword. There are two types of settings − persistent (applied across restarts) and transient (do not survive a full cluster restart). A brief example is sketched at the end of this chapter.

Node Stats

This API is used to retrieve the statistics of one or more nodes of the cluster. Node stats are almost the same as cluster stats.
GET /_nodes/stats

On running the above code, we get the response as shown below −

{ "_nodes" : { "total" : 1, "successful" : 1, "failed" : 0 }, "cluster_name" : "elasticsearch", "nodes" : { "FKH-5blYTJmff2rJ_lQOCg" : { "timestamp" : 1556437348653, "name" : "ubuntu", "transport_address" : "127.0.0.1:9300", "host" : "127.0.0.1", "ip" : "127.0.0.1:9300", "roles" : [ "master", "data", "ingest" ], "attributes" : { "ml.machine_memory" : "4112797696", "xpack.installed" : "true", "ml.max_open_jobs" : "20" }, ………………………………………………………….

Nodes hot_threads

This API helps you to retrieve information about the current hot threads on each node in the cluster.

GET /_nodes/hot_threads

On running the above code, we get the response as shown below −

:::{ubuntu}{FKH-5blYTJmff2rJ_lQOCg}{426kTGpITGixhEzaM5Qyg}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4112797696, xpack.installed=true, ml.max_open_jobs=20} Hot threads at 2019-04-28T07:43:58.265Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
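Returning to the Cluster Update Settings API mentioned above, the following is a minimal sketch of updating one setting persistently and another transiently; the particular settings chosen here are only examples of dynamic cluster settings −

PUT /_cluster/settings
{
   "persistent": {
      "cluster.routing.allocation.enable": "all"
   },
   "transient": {
      "cluster.routing.allocation.node_concurrent_recoveries": 4
   }
}

Persistent values survive a full cluster restart, transient values are discarded on restart, and setting a value to null removes it again.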
Elasticsearch – Installation

In this chapter, we will understand the installation procedure of Elasticsearch in detail. To install Elasticsearch on your local computer, you will have to follow the steps given below −

Step 1 − Check the version of Java installed on your computer. It should be Java 7 or higher. You can check it by doing the following −

In Windows Operating System (OS) (using command prompt) −
> java -version

In UNIX OS (using terminal) −
$ echo $JAVA_HOME

Step 2 − Depending on your operating system, download Elasticsearch from www.elastic.co as mentioned below −

For Windows OS, download the ZIP file.
For UNIX OS, download the TAR file.
For Debian OS, download the DEB file.
For Red Hat and other Linux distributions, download the RPM file.
APT and Yum utilities can also be used to install Elasticsearch in many Linux distributions.

Step 3 − The installation process for Elasticsearch is simple and is described below for different OS −

Windows OS − Unzip the zip package and Elasticsearch is installed.

UNIX OS − Extract the tar file in any location and Elasticsearch is installed.

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.0-linux-x86_64.tar.gz
$ tar -xzf elasticsearch-7.0.0-linux-x86_64.tar.gz

Using the APT utility for Linux OS −

Download and install the public signing key −
$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Save the repository definition as shown below −
$ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Run an update using the following command −
$ sudo apt-get update

Now you can install by using the following command −
$ sudo apt-get install elasticsearch

Alternatively, download and install the Debian package manually using the commands given here −
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.0-amd64.deb
$ sudo dpkg -i elasticsearch-7.0.0-amd64.deb

Using the YUM utility for RPM-based Linux OS (Red Hat, CentOS and similar) −

Download and install the public signing key −
$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following text to a file with the .repo suffix in your "/etc/yum.repos.d/" directory, for example elasticsearch.repo −

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

You can now install Elasticsearch by using the following command −
$ sudo yum install elasticsearch

Step 4 − Go to the Elasticsearch home directory and inside the bin folder. Run the elasticsearch.bat file in case of Windows, or run the elasticsearch file from the terminal in case of UNIX.

In Windows −
> cd elasticsearch-7.0.0/bin
> elasticsearch

In Linux −
$ cd elasticsearch-7.0.0/bin
$ ./elasticsearch

Note − In case of Windows, you might get an error stating JAVA_HOME is not set; please set it in the environment variables to "C:\Program Files\Java\jre1.8.0_31" or the location where you installed Java.

Step 5 − The default port for the Elasticsearch web interface is 9200, or you can change it by changing http.port inside the elasticsearch.yml file present in the config directory. You can check if the server is up and running by browsing http://localhost:9200.
It will return a JSON object, which contains information about the installed Elasticsearch in the following manner −

{ "name" : "Brain-Child", "cluster_name" : "elasticsearch", "version" : { "number" : "2.1.0", "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87", "build_timestamp" : "2015-11-18T22:40:03Z", "build_snapshot" : false, "lucene_version" : "5.3.1" }, "tagline" : "You Know, for Search" }

Step 6 − In this step, let us install Kibana. Follow the respective code given below for installing on Linux and Windows −

For installation on Linux −

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.0.0-linux-x86_64.tar.gz
tar -xzf kibana-7.0.0-linux-x86_64.tar.gz
cd kibana-7.0.0-linux-x86_64/
./bin/kibana

For installation on Windows −

Download Kibana for Windows from www.elastic.co. Once you click the download link, you will find the home page as shown below −

Unzip and go to the Kibana home directory and then run it.

CD c:\kibana-7.0.0-windows-x86_64
.\bin\kibana.bat