
Elasticsearch – Quick Guide

Elasticsearch – Basic Concepts

Elasticsearch is an Apache Lucene-based search server. It was developed by Shay Banon and first released in 2010, and is now maintained by Elastic (originally Elasticsearch BV). Its latest version is 7.0.0. Elasticsearch is a real-time, distributed, open-source full-text search and analytics engine. It is accessible through a RESTful web service interface and uses schema-less JSON (JavaScript Object Notation) documents to store data. It is written in Java, which lets Elasticsearch run on different platforms. It enables users to explore very large amounts of data at very high speed.

General Features
The general features of Elasticsearch are as follows −

Elasticsearch is scalable up to petabytes of structured and unstructured data.
Elasticsearch can be used as a replacement for document stores like MongoDB and RavenDB.
Elasticsearch uses denormalization to improve search performance.
Elasticsearch is one of the most popular enterprise search engines and is currently used by many big organizations like Wikipedia, The Guardian, StackOverflow and GitHub.
Elasticsearch is open source and available under the Apache License, Version 2.0.

Key Concepts
The key concepts of Elasticsearch are as follows −

Node
A single running instance of Elasticsearch. A single physical or virtual server can accommodate multiple nodes, depending on the capacity of its physical resources like RAM, storage and processing power.

Cluster
A collection of one or more nodes. A cluster provides collective indexing and search capabilities across all of its nodes for the entire data set.

Index
A collection of different types of documents and their properties. An index also uses the concept of shards to improve performance. For example, one index might hold the documents of a social networking application.

Document
A collection of fields defined in JSON format. Every document belongs to a type and resides inside an index, and every document is associated with a unique identifier called the UID.

Shard
Indexes are horizontally subdivided into shards. This means each shard contains all the properties of the documents but holds fewer JSON objects than the whole index. The horizontal separation makes a shard an independent unit that can be stored on any node. The primary shards are the original horizontal parts of an index; these primary shards are then replicated into replica shards.

Replicas
Elasticsearch allows a user to create replicas of their indexes and shards. Replication not only increases the availability of data in case of failure, but also improves search performance by allowing parallel search operations across the replicas.

Advantages
Elasticsearch is developed in Java, which makes it compatible with almost every platform.
Elasticsearch is near real-time; in other words, an added document becomes searchable in the engine within about one second.
Elasticsearch is distributed, which makes it easy to scale and integrate in any big organization.
Creating full backups is easy using the concept of a gateway, which is present in Elasticsearch.
Handling multi-tenancy is very easy in Elasticsearch compared to Apache Solr.
Elasticsearch uses JSON objects as responses, which makes it possible to invoke the Elasticsearch server from a large number of different programming languages.
Elasticsearch supports almost every document type except those that do not support text rendering.
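Because Elasticsearch exposes everything through a RESTful interface and stores schema-less JSON, indexing a document is a single HTTP call. Here is a minimal sketch, assuming an index named schools (the same example index used in the aggregation and query chapters later in this guide) −

PUT /schools/_doc/5
{
   "name": "Central School",
   "description": "CBSE Affiliation",
   "city": "paprola",
   "state": "HP",
   "fees": 2200
}

No mapping has to be declared beforehand; Elasticsearch derives the field types dynamically from the JSON values.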
Disadvantages
Elasticsearch does not have multi-language support for handling request and response data (only JSON is possible), unlike Apache Solr, where it is possible in CSV, XML and JSON formats.
Occasionally, Elasticsearch suffers from split-brain situations.

Comparison between Elasticsearch and RDBMS
In Elasticsearch, an index is similar to a table in an RDBMS (Relational Database Management System). Every table is a collection of rows, just as every index is a collection of documents in Elasticsearch. The following table gives a direct comparison between these terms −

Elasticsearch | RDBMS
--------------+---------
Cluster       | Database
Shard         | Shard
Index         | Table
Field         | Column
Document      | Row

Elasticsearch – Installation

In this chapter, we will understand the installation procedure of Elasticsearch in detail. To install Elasticsearch on your local computer, follow the steps given below −

Step 1 − Check the version of Java installed on your computer. It should be Java 7 or higher. You can check it as follows −

In Windows Operating System (OS) (using command prompt) −
> java -version

In UNIX OS (using terminal) −
$ echo $JAVA_HOME

Step 2 − Depending on your operating system, download Elasticsearch from www.elastic.co as mentioned below −

For Windows OS, download the ZIP file.
For UNIX OS, download the TAR file.
For Debian OS, download the DEB file.
For Red Hat and other Linux distributions, download the RPM file.
The APT and Yum utilities can also be used to install Elasticsearch on many Linux distributions.

Step 3 − The installation process for Elasticsearch is simple and is described below for the different OS −

Windows OS − Unzip the ZIP package and Elasticsearch is installed.

UNIX OS − Extract the tar file in any location and Elasticsearch is installed.

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.0-linux-x86_64.tar.gz
$ tar -xzf elasticsearch-7.0.0-linux-x86_64.tar.gz

Using the APT utility for Linux OS −

Download and install the public signing key −
$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Save the repository definition as shown below −
$ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Run an update using the following command −
$ sudo apt-get update

Now you can install Elasticsearch by using the following command −
$ sudo apt-get install elasticsearch

Alternatively, download and install the Debian package manually using the commands given here −
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.0-amd64.deb
$ sudo dpkg -i elasticsearch-7.0.0-amd64.deb

Using the YUM utility for RPM-based Linux OS −

Download and install the public signing key −
$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following text to a file with the .repo suffix in your /etc/yum.repos.d/ directory, for example elasticsearch.repo −

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

You can now install Elasticsearch by using the following command −
$ sudo yum install elasticsearch

Step 4 − Go to the Elasticsearch home directory and into the bin folder. Run the elasticsearch.bat file in the case of Windows, or run the elasticsearch file from the terminal in the case of UNIX.
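Once the node is running, a quick way to verify the installation is to query the root endpoint from another terminal; Elasticsearch listens on port 9200 by default and answers with a small JSON document describing the node −

$ curl -X GET "http://localhost:9200/"

If the response contains "number" : "7.0.0" inside the version object, the installation is working.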


Elasticsearch – Data Tables

A data table is a type of visualization that is used to display the raw data of a composed aggregation. Various types of aggregations can be presented using data tables. In order to create a data table, we should go through the steps discussed here in detail.

Visualize
On the Kibana Home screen we find the option named Visualize, which allows us to create visualizations and aggregations from the indices stored in Elasticsearch. The following image shows the option.

Select Data Table
Next, we select the Data Table option from among the various visualization options available. The option is shown in the following image −

Select Metrics
We then select the metrics needed for creating the data table visualization. This choice decides the type of aggregation we are going to use. We select the specific fields shown below from the ecommerce data set for this purpose.

On running the above configuration for the data table, we get the result as shown in the image here −
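Under the hood, such a data table is simply a bucket aggregation with one or more metric sub-aggregations. As a hedged sketch, assuming the Kibana ecommerce sample index kibana_sample_data_ecommerce and its category.keyword and taxful_total_price fields, the equivalent raw request would look like this −

POST /kibana_sample_data_ecommerce/_search?size=0
{
   "aggs": {
      "per_category": {
         "terms": { "field": "category.keyword" },
         "aggs": {
            "avg_price": { "avg": { "field": "taxful_total_price" } }
         }
      }
   }
}

Each bucket returned by the terms aggregation becomes a row of the data table, and each metric becomes a column.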


Elasticsearch – Rollup Data

A rollup job is a periodic task that summarizes data from indices specified by an index pattern and rolls it into a new index. In the following example, we create a time-stamped index named sensor-2018-01-01 holding sensor readings. Then we create a rollup job to roll up the data from such indices periodically using a cron schedule.

PUT /sensor-2018-01-01/_doc/1
{
   "timestamp": 1516729294000,
   "temperature": 200,
   "voltage": 5.2,
   "node": "a"
}

On running the above code, we get the following result −

{
   "_index" : "sensor-2018-01-01",
   "_type" : "_doc",
   "_id" : "1",
   "_version" : 1,
   "result" : "created",
   "_shards" : {
      "total" : 2,
      "successful" : 1,
      "failed" : 0
   },
   "_seq_no" : 0,
   "_primary_term" : 1
}

Now, add a second document, and so on for the other documents as well.

PUT /sensor-2018-01-01/_doc/2
{
   "timestamp": 1413729294000,
   "temperature": 201,
   "voltage": 5.9,
   "node": "a"
}

Create a Rollup Job
The index_pattern below matches all the time-stamped sensor indices, and the summarized data is written to the sensor_rollup index.

PUT _rollup/job/sensor
{
   "index_pattern": "sensor-*",
   "rollup_index": "sensor_rollup",
   "cron": "*/30 * * * * ?",
   "page_size": 1000,
   "groups": {
      "date_histogram": {
         "field": "timestamp",
         "interval": "60m"
      },
      "terms": {
         "fields": ["node"]
      }
   },
   "metrics": [
      {
         "field": "temperature",
         "metrics": ["min", "max", "sum"]
      },
      {
         "field": "voltage",
         "metrics": ["avg"]
      }
   ]
}

The cron parameter controls when and how often the job activates. When a rollup job's cron schedule triggers, it will begin rolling up from where it left off after the last activation.

After the job has run and processed some data, we can use the _rollup_search endpoint to do some searching −

GET /sensor_rollup/_rollup_search
{
   "size": 0,
   "aggregations": {
      "max_temperature": {
         "max": {
            "field": "temperature"
         }
      }
   }
}
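One detail worth knowing: a newly created rollup job is in the stopped state, so it must be started explicitly before the cron schedule takes effect; its configuration and progress can then be inspected −

POST _rollup/job/sensor/_start

GET _rollup/job/sensor

The GET response reports the job configuration along with its current status and indexing statistics, which is the quickest way to confirm that documents are actually being rolled up.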


Elasticsearch – Managing Index Lifecycle

Managing the index lifecycle involves performing management actions based on factors like shard size and performance requirements. The index lifecycle management (ILM) APIs enable you to automate how you want to manage your indices over time. This chapter gives a list of the ILM APIs and their usage.

Policy Management APIs

Create lifecycle policy − Creates a lifecycle policy. If the specified policy exists, the policy is replaced and the policy version is incremented.
Example − PUT _ilm/policy/<policy_id>

Get lifecycle policy − Returns the specified policy definition, including the policy version and last modified date. If no policy is specified, returns all defined policies.
Example − GET _ilm/policy/<policy_id>

Delete lifecycle policy − Deletes the specified lifecycle policy definition. You cannot delete policies that are currently in use; if the policy is being used to manage any indices, the request fails and returns an error.
Example − DELETE _ilm/policy/<policy_id>

Index Management APIs

Move to lifecycle step API − Manually moves an index into the specified step and executes that step.
Example − POST _ilm/move/<index>

Retry policy API − Sets the policy back to the step where the error occurred and executes the step.
Example − POST <index>/_ilm/retry

Remove policy from index API − Removes the assigned lifecycle policy and stops managing the specified index. If an index pattern is specified, removes the assigned policies from all matching indices.
Example − POST <index>/_ilm/remove

Operation Management APIs

Get index lifecycle management status API − Returns the status of the ILM plugin. The operation_mode field in the response shows one of three states: STARTED, STOPPING or STOPPED.
Example − GET /_ilm/status

Start index lifecycle management API − Starts the ILM plugin if it is currently stopped. ILM is started automatically when the cluster is formed.
Example − POST /_ilm/start

Stop index lifecycle management API − Halts all lifecycle management operations and stops the ILM plugin. This is useful when you are performing maintenance on the cluster and need to prevent ILM from performing any actions on your indices.
Example − POST /_ilm/stop

Explain lifecycle API − Retrieves information about the index's current lifecycle state, such as the currently executing phase, action and step. Shows when the index entered each one, the definition of the running phase, and information about any failures.
Example − GET <index>/_ilm/explain
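To make the policy APIs concrete, here is a hedged sketch of the create call; the policy name my_policy and the thresholds are illustrative, not taken from this guide. The policy keeps an index in the hot phase until it is rolled over, then deletes it 90 days later −

PUT _ilm/policy/my_policy
{
   "policy": {
      "phases": {
         "hot": {
            "actions": {
               "rollover": {
                  "max_size": "50gb",
                  "max_age": "30d"
               }
            }
         },
         "delete": {
            "min_age": "90d",
            "actions": {
               "delete": {}
            }
         }
      }
   }
}

Once created, the policy can be inspected with GET _ilm/policy/my_policy and its effect on a managed index followed with the explain lifecycle API listed above.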


Elasticsearch – Region Maps

Region maps show metrics on a geographic map. They are useful for looking at data anchored to different geographic regions with varying intensity. The darker shades usually indicate higher values and the lighter shades indicate lower values. The steps to create this visualization are explained in detail as follows −

Visualize
In this step, we go to the Visualize button available in the left bar of the Kibana Home screen and then choose the option to add a new visualization. The following screen shows how we choose the Region Map option.

Choose the Metrics
The next screen prompts us to choose the metrics which will be used in creating the region map. Here we choose the average price as the metric and country_iso_code as the field in the bucket which will be used in creating the visualization.

The final result below shows the region map once we apply the selection. Please note the shades of the colour and their values mentioned in the label.


Elasticsearch – Monitoring

To monitor the health of the cluster, the monitoring feature collects metrics from each node and stores them in Elasticsearch indices. All settings associated with monitoring in Elasticsearch must be set either in the elasticsearch.yml file for each node or, where possible, in the dynamic cluster settings.

In order to start monitoring, we need to check the cluster settings, which can be done in the following way −

GET _cluster/settings

{
   "persistent" : { },
   "transient" : { }
}

Each component in the stack is responsible for monitoring itself and then forwarding those documents to the Elasticsearch production cluster for both routing and indexing (storage). The routing and indexing processes in Elasticsearch are handled by what are called collectors and exporters.

Collectors
Each collector runs once per collection interval to obtain data from the public APIs in Elasticsearch that it chooses to monitor. When the data collection is finished, the data is handed in bulk to the exporters to be sent to the monitoring cluster. There is only one collector per data type gathered. Each collector can create zero or more monitoring documents.

Exporters
Exporters take data collected from any Elastic Stack source and route it to the monitoring cluster. It is possible to configure more than one exporter, but the general and default setup is to use a single exporter. Exporters are configurable at both the node and the cluster level. There are two types of exporters in Elasticsearch −

local − This exporter routes data back into the same cluster.
http − The preferred exporter, which you can use to route data into any supported Elasticsearch cluster accessible via HTTP.

Before exporters can route monitoring data, they must set up certain Elasticsearch resources. These resources include templates and ingest pipelines.
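If both settings objects come back empty, as above, metric collection itself may still be switched off. A minimal sketch using the standard 7.x dynamic setting xpack.monitoring.collection.enabled turns collection on cluster-wide −

PUT _cluster/settings
{
   "persistent" : {
      "xpack.monitoring.collection.enabled" : true
   }
}

Because this is a persistent cluster setting, it survives restarts and shows up in subsequent GET _cluster/settings calls.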


Elasticsearch – Aggregations

The aggregations framework collects all the data selected by the search query and consists of many building blocks, which help in building complex summaries of the data. The basic structure of an aggregation is shown here −

"aggregations" : {
   "<aggregation_name>" : {
      "<aggregation_type>" : {
         <aggregation_body>
      }
      [,"meta" : { [<meta_data_body>] } ]?
      [,"aggregations" : { [<sub_aggregation>]+ } ]?
   }
   [,"<aggregation_name_2>" : { ... } ]*
}

There are different types of aggregations, each with its own purpose. They are discussed in detail in this chapter.

Metrics Aggregations
These aggregations help in computing metrics from the field values of the aggregated documents; sometimes the values can be generated from scripts. Numeric metrics are either single-valued, like the average aggregation, or multi-valued, like stats.

Avg Aggregation
This aggregation is used to get the average of any numeric field present in the aggregated documents. For example,

POST /schools/_search
{
   "aggs":{
      "avg_fees":{ "avg":{ "field":"fees" } }
   }
}

On running the above code, we get the following result −

{
   "took" : 41,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 2, "relation" : "eq" },
      "max_score" : 1.0,
      "hits" : [
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "5",
            "_score" : 1.0,
            "_source" : {
               "name" : "Central School", "description" : "CBSE Affiliation",
               "street" : "Nagan", "city" : "paprola", "state" : "HP",
               "zip" : "176115", "location" : [ 31.8955385, 76.8380405 ],
               "fees" : 2200, "tags" : [ "Senior Secondary", "beautiful campus" ],
               "rating" : "3.3"
            }
         },
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "4",
            "_score" : 1.0,
            "_source" : {
               "name" : "City Best School", "description" : "ICSE",
               "street" : "West End", "city" : "Meerut", "state" : "UP",
               "zip" : "250002", "location" : [ 28.9926174, 77.692485 ],
               "fees" : 3500, "tags" : [ "fully computerized" ], "rating" : "4.5"
            }
         }
      ]
   },
   "aggregations" : { "avg_fees" : { "value" : 2850.0 } }
}

Cardinality Aggregation
This aggregation gives the count of distinct values of a particular field.

POST /schools/_search?size=0
{
   "aggs":{
      "distinct_name_count":{ "cardinality":{ "field":"fees" } }
   }
}

On running the above code, we get the following result −

{
   "took" : 2,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 2, "relation" : "eq" },
      "max_score" : null,
      "hits" : [ ]
   },
   "aggregations" : { "distinct_name_count" : { "value" : 2 } }
}

Note − The value of cardinality is 2 because there are two distinct values in fees.

Extended Stats Aggregation
This aggregation generates all the statistics about a specific numerical field in the aggregated documents.

POST /schools/_search?size=0
{
   "aggs" : {
      "fees_stats" : { "extended_stats" : { "field" : "fees" } }
   }
}

On running the above code, we get the following result −

{
   "took" : 8,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 2, "relation" : "eq" },
      "max_score" : null,
      "hits" : [ ]
   },
   "aggregations" : {
      "fees_stats" : {
         "count" : 2,
         "min" : 2200.0,
         "max" : 3500.0,
         "avg" : 2850.0,
         "sum" : 5700.0,
         "sum_of_squares" : 1.709E7,
         "variance" : 422500.0,
         "std_deviation" : 650.0,
         "std_deviation_bounds" : {
            "upper" : 4150.0,
            "lower" : 1550.0
         }
      }
   }
}

Max Aggregation
This aggregation finds the max value of a specific numeric field in the aggregated documents.
POST /schools/_search?size=0
{
   "aggs" : {
      "max_fees" : { "max" : { "field" : "fees" } }
   }
}

On running the above code, we get the following result −

{
   "took" : 16,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 2, "relation" : "eq" },
      "max_score" : null,
      "hits" : [ ]
   },
   "aggregations" : { "max_fees" : { "value" : 3500.0 } }
}

Min Aggregation
This aggregation finds the min value of a specific numeric field in the aggregated documents.

POST /schools/_search?size=0
{
   "aggs" : {
      "min_fees" : { "min" : { "field" : "fees" } }
   }
}

On running the above code, we get the following result −

{
   "took" : 2,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 2, "relation" : "eq" },
      "max_score" : null,
      "hits" : [ ]
   },
   "aggregations" : { "min_fees" : { "value" : 2200.0 } }
}

Sum Aggregation
This aggregation calculates the sum of a specific numeric field in the aggregated documents.

POST /schools/_search?size=0
{
   "aggs" : {
      "total_fees" : { "sum" : { "field" : "fees" } }
   }
}

On running the above code, we get the following result −

{
   "took" : 8,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 2, "relation" : "eq" },
      "max_score" : null,
      "hits" : [ ]
   },
   "aggregations" : { "total_fees" : { "value" : 5700.0 } }
}

There are some other metrics aggregations which are used in special cases, like the geo bounds aggregation and the geo centroid aggregation for geo locations.

Stats Aggregation
A multi-value metrics aggregation that computes stats over numeric values extracted from the aggregated documents.

POST /schools/_search?size=0
{
   "aggs" : {
      "grades_stats" : { "stats" : { "field" : "fees" } }
   }
}

On running the above code, we get the following result (the aggregation values follow from the extended_stats output shown earlier) −

{
   "took" : 2,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 2, "relation" : "eq" },
      "max_score" : null,
      "hits" : [ ]
   },
   "aggregations" : {
      "grades_stats" : {
         "count" : 2,
         "min" : 2200.0,
         "max" : 3500.0,
         "avg" : 2850.0,
         "sum" : 5700.0
      }
   }
}
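Metrics aggregations are often combined with bucket aggregations, which group documents before the metrics are computed. As a hedged sketch against the same schools data (assuming the default dynamic mapping, which creates a state.keyword sub-field for the text field state), the following terms aggregation counts schools per state −

POST /schools/_search?size=0
{
   "aggs" : {
      "schools_per_state" : {
         "terms" : { "field" : "state.keyword" }
      }
   }
}

Each bucket in the response carries a key (the state) and a doc_count, and could itself hold sub-aggregations such as the avg aggregation shown earlier.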


Elasticsearch – Ingest Node

Sometimes we need to transform a document before we index it. For instance, we want to remove a field from the document or rename a field and then index it. This is handled by the ingest node. Every node in the cluster has the ability to ingest, but it can also be customized so that documents are processed only by specific nodes.

Steps Involved
There are two steps involved in the working of the ingest node −

Creating a pipeline
Creating a doc

Create a Pipeline
First create a pipeline which contains the processors, and then execute the pipeline, as shown below −

PUT _ingest/pipeline/int-converter
{
   "description": "converts the content of the seq field to an integer",
   "processors" : [
      {
         "convert" : {
            "field" : "seq",
            "type": "integer"
         }
      }
   ]
}

On running the above code, we get the following result −

{
   "acknowledged" : true
}

Create a Doc
Next we create a document using the pipeline converter.

PUT /logs/_doc/1?pipeline=int-converter
{
   "seq": "21",
   "name": "Tutorialspoint",
   "Addrs": "Hyderabad"
}

On running the above code, we get the response as shown below −

{
   "_index" : "logs",
   "_type" : "_doc",
   "_id" : "1",
   "_version" : 1,
   "result" : "created",
   "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
   "_seq_no" : 0,
   "_primary_term" : 1
}

Next we search for the doc created above by using the GET command, as shown below −

GET /logs/_doc/1

On running the above code, we get the following result −

{
   "_index" : "logs",
   "_type" : "_doc",
   "_id" : "1",
   "_version" : 1,
   "_seq_no" : 0,
   "_primary_term" : 1,
   "found" : true,
   "_source" : {
      "Addrs" : "Hyderabad",
      "name" : "Tutorialspoint",
      "seq" : 21
   }
}

You can see above that 21 has become an integer.

Without Pipeline
Now we create a document without using the pipeline.

PUT /logs/_doc/2
{
   "seq": "11",
   "name": "Tutorix",
   "Addrs": "Secunderabad"
}

GET /logs/_doc/2

On running the above code, we get the following result −

{
   "_index" : "logs",
   "_type" : "_doc",
   "_id" : "2",
   "_version" : 1,
   "_seq_no" : 1,
   "_primary_term" : 1,
   "found" : true,
   "_source" : {
      "seq" : "11",
      "name" : "Tutorix",
      "Addrs" : "Secunderabad"
   }
}

You can see above that 11 remains a string because the pipeline was not used.
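A pipeline can also be tested without indexing anything, using the simulate API. Here is a minimal sketch that runs the int-converter pipeline above against an ad-hoc document (the sample values are hypothetical) −

POST _ingest/pipeline/int-converter/_simulate
{
   "docs" : [
      {
         "_source" : {
            "seq" : "100",
            "name" : "SampleDoc"
         }
      }
   ]
}

The response echoes each transformed document, so you can confirm that seq comes back as the integer 100 before wiring the pipeline into real indexing requests.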


Elasticsearch – SQL Access

Elasticsearch SQL is a component that allows SQL-like queries to be executed in real-time against Elasticsearch. You can think of Elasticsearch SQL as a translator, one that understands both SQL and Elasticsearch and makes it easy to read and process data in real-time, at scale, by leveraging Elasticsearch capabilities.

Advantages of Elasticsearch SQL

Native integration − Each and every query is efficiently executed against the relevant nodes according to the underlying storage.
No external parts − No additional hardware, processes, runtimes or libraries are needed to query Elasticsearch.
Lightweight and efficient − It embraces and exposes SQL to allow proper full-text search, in real-time.

Example

PUT /schoollist/_bulk?refresh
{"index":{"_id": "CBSE"}}
{"name": "GleanDale", "Address": "JR. Court Lane", "start_date": "2011-06-02", "student_count": 561}
{"index":{"_id": "ICSE"}}
{"name": "Top-Notch", "Address": "Gachibowli Main Road", "start_date": "1989-05-26", "student_count": 482}
{"index":{"_id": "State Board"}}
{"name": "Sunshine", "Address": "Main Street", "start_date": "1965-06-01", "student_count": 604}

On running the above code, we get the response as shown below −

{
   "took" : 277,
   "errors" : false,
   "items" : [
      {
         "index" : {
            "_index" : "schoollist", "_type" : "_doc", "_id" : "CBSE",
            "_version" : 1, "result" : "created", "forced_refresh" : true,
            "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
            "_seq_no" : 0, "_primary_term" : 1, "status" : 201
         }
      },
      {
         "index" : {
            "_index" : "schoollist", "_type" : "_doc", "_id" : "ICSE",
            "_version" : 1, "result" : "created", "forced_refresh" : true,
            "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
            "_seq_no" : 1, "_primary_term" : 1, "status" : 201
         }
      },
      {
         "index" : {
            "_index" : "schoollist", "_type" : "_doc", "_id" : "State Board",
            "_version" : 1, "result" : "created", "forced_refresh" : true,
            "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
            "_seq_no" : 2, "_primary_term" : 1, "status" : 201
         }
      }
   ]
}

SQL Query
The following example shows how we frame the SQL query; here it selects the schools that opened before the year 2000 −

POST /_sql?format=txt
{
   "query": "SELECT * FROM schoollist WHERE start_date < '2000-01-01'"
}

On running the above code, we get the response as shown below −

       Address        |     name      |       start_date       | student_count
----------------------+---------------+------------------------+---------------
Gachibowli Main Road  |Top-Notch      |1989-05-26T00:00:00.000Z|482
Main Street           |Sunshine       |1965-06-01T00:00:00.000Z|604

Note − By changing the SQL query above, you can get different result sets.
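Since Elasticsearch SQL is essentially a translator, it also exposes a translate endpoint that returns the native Query DSL a given SQL statement compiles to, without executing it. A minimal sketch (the exact shape of the generated query varies by version) −

POST /_sql/translate
{
   "query": "SELECT name, student_count FROM schoollist WHERE student_count > 500"
}

This is handy both for learning the Query DSL and for debugging how a SQL query is actually executed against the index.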


Elasticsearch – Query DSL

In Elasticsearch, searching is carried out using a query based on JSON. A query is made up of two kinds of clauses −

Leaf Query Clauses − These clauses are match, term or range, which look for a specific value in a specific field.

Compound Query Clauses − These queries are a combination of leaf query clauses and other compound queries to extract the desired information.

Elasticsearch supports a large number of queries. A query starts with a query keyword and then has conditions and filters inside, in the form of a JSON object. The different types of queries are described below.

Match All Query
This is the most basic query; it returns all the content, with a score of 1.0 for every object.

POST /schools/_search
{
   "query":{
      "match_all":{}
   }
}

On running the above code, we get the following result −

{
   "took" : 7,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 2, "relation" : "eq" },
      "max_score" : 1.0,
      "hits" : [
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "5",
            "_score" : 1.0,
            "_source" : {
               "name" : "Central School", "description" : "CBSE Affiliation",
               "street" : "Nagan", "city" : "paprola", "state" : "HP",
               "zip" : "176115", "location" : [ 31.8955385, 76.8380405 ],
               "fees" : 2200, "tags" : [ "Senior Secondary", "beautiful campus" ],
               "rating" : "3.3"
            }
         },
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "4",
            "_score" : 1.0,
            "_source" : {
               "name" : "City Best School", "description" : "ICSE",
               "street" : "West End", "city" : "Meerut", "state" : "UP",
               "zip" : "250002", "location" : [ 28.9926174, 77.692485 ],
               "fees" : 3500, "tags" : [ "fully computerized" ], "rating" : "4.5"
            }
         }
      ]
   }
}

Full Text Queries
These queries are used to search a full body of text, like a chapter or a news article. This query works according to the analyser associated with that particular index or document. In this section, we will discuss the different types of full text queries.

Match Query
This query matches a text or phrase with the values of one or more fields.

POST /schools*/_search
{
   "query":{
      "match" : {
         "rating":"4.5"
      }
   }
}

On running the above code, we get the response as shown below −

{
   "took" : 44,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 1, "relation" : "eq" },
      "max_score" : 0.47000363,
      "hits" : [
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "4",
            "_score" : 0.47000363,
            "_source" : {
               "name" : "City Best School", "description" : "ICSE",
               "street" : "West End", "city" : "Meerut", "state" : "UP",
               "zip" : "250002", "location" : [ 28.9926174, 77.692485 ],
               "fees" : 3500, "tags" : [ "fully computerized" ], "rating" : "4.5"
            }
         }
      ]
   }
}

Multi Match Query
This query matches a text or phrase with more than one field.
POST /schools*/_search
{
   "query":{
      "multi_match" : {
         "query": "paprola",
         "fields": [ "city", "state" ]
      }
   }
}

On running the above code, we get the response as shown below −

{
   "took" : 12,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 1, "relation" : "eq" },
      "max_score" : 0.9808292,
      "hits" : [
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "5",
            "_score" : 0.9808292,
            "_source" : {
               "name" : "Central School", "description" : "CBSE Affiliation",
               "street" : "Nagan", "city" : "paprola", "state" : "HP",
               "zip" : "176115", "location" : [ 31.8955385, 76.8380405 ],
               "fees" : 2200, "tags" : [ "Senior Secondary", "beautiful campus" ],
               "rating" : "3.3"
            }
         }
      ]
   }
}

Query String Query
This query uses the query parser and the query_string keyword.

POST /schools*/_search
{
   "query":{
      "query_string":{
         "query":"beautiful"
      }
   }
}

On running the above code, we get the response as shown below −

{
   "took" : 60,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 1, "relation" : "eq" },
      ...

Term Level Queries
These queries mainly deal with structured data like numbers, dates and enums.

POST /schools*/_search
{
   "query":{
      "term":{ "zip":"176115" }
   }
}

On running the above code, we get the response as shown below −

...
"hits" : [
   {
      "_index" : "schools",
      "_type" : "school",
      "_id" : "5",
      "_score" : 0.9808292,
      "_source" : {
         "name" : "Central School", "description" : "CBSE Affiliation",
         "street" : "Nagan", "city" : "paprola", "state" : "HP",
         "zip" : "176115", "location" : [ 31.8955385, 76.8380405 ],
         ...
      }
   }
]
...

Range Query
This query is used to find the objects having values between the given ranges of values. For this, we need to use operators such as −

gte − greater than or equal to
gt − greater than
lte − less than or equal to
lt − less than

For example, observe the code given below −

POST /schools*/_search
{
   "query":{
      "range":{
         "rating":{
            "gte":3.5
         }
      }
   }
}

On running the above code, we get the response as shown below −

{
   "took" : 24,
   "timed_out" : false,
   "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
   "hits" : {
      "total" : { "value" : 1, "relation" : "eq" },
      "max_score" : 1.0,
      "hits" : [
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "4",
            "_score" : 1.0,
            "_source" : {
               "name" : "City Best School", "description" : "ICSE",
               "street" : "West End", "city" : "Meerut", "state" : "UP",
               "zip" : "250002", "location" : [ 28.9926174, 77.692485 ],
               "fees" : 3500, "tags" : [ "fully computerized" ], "rating" : "4.5"
            }
         }
      ]
   }
}

There exist other types of term level queries too, such as −

Exists query − Matches documents in which a certain field has a non-null value.
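The compound query clauses mentioned at the beginning of this chapter are most commonly written with the bool query. As a hedged sketch against the same schools data, the following combines a full-text match clause with a non-scoring range filter −

POST /schools*/_search
{
   "query": {
      "bool": {
         "must": [
            { "match": { "city": "Meerut" } }
         ],
         "filter": [
            { "range": { "fees": { "lte": 4000 } } }
         ]
      }
   }
}

Clauses under must contribute to the relevance score, while clauses under filter only include or exclude documents.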