Elasticsearch – Populate

In this chapter, let us learn how to add an index, mapping and data to Elasticsearch. Note that some of this data will be used in the examples explained in this tutorial.

Create Index

You can use the following command to create an index −

PUT school

Response

If the index is created, you can see the following output −

{"acknowledged": true}

Add Data

Elasticsearch will store the documents we add to the index as shown in the following code. The documents are given IDs which are used to identify them.

Request Body

POST school/_doc/10
{
   "name":"Saint Paul School",
   "description":"ICSE Affiliation",
   "street":"Dwarka",
   "city":"Delhi",
   "state":"Delhi",
   "zip":"110075",
   "location":[28.5733056, 77.0122136],
   "fees":5000,
   "tags":["Good Faculty", "Great Sports"],
   "rating":"4.5"
}

Response

{
   "_index" : "school",
   "_type" : "_doc",
   "_id" : "10",
   "_version" : 1,
   "result" : "created",
   "_shards" : {
      "total" : 2,
      "successful" : 1,
      "failed" : 0
   },
   "_seq_no" : 2,
   "_primary_term" : 1
}

Here, we are adding another similar document.

POST school/_doc/16
{
   "name":"Crescent School",
   "description":"State Board Affiliation",
   "street":"Tonk Road",
   "city":"Jaipur",
   "state":"RJ",
   "zip":"176114",
   "location":[26.8535922, 75.7923988],
   "fees":2500,
   "tags":["Well equipped labs"],
   "rating":"4.5"
}

Response

{
   "_index" : "school",
   "_type" : "_doc",
   "_id" : "16",
   "_version" : 1,
   "result" : "created",
   "_shards" : {
      "total" : 2,
      "successful" : 1,
      "failed" : 0
   },
   "_seq_no" : 9,
   "_primary_term" : 7
}

In this way, we will keep adding whatever example data we need for our work in the upcoming chapters.

Adding Sample Data in Kibana

Kibana is a GUI-driven tool for accessing the data and creating visualizations. In this section, let us understand how we can add sample data through it. On the Kibana home page, choose the option to add sample eCommerce data. The next screen shows a preview visualization and an Add data button. Clicking Add data loads the sample data set and confirms that it has been added to an index named eCommerce.
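Having populated the school index (and optionally the Kibana sample data), you can verify what was added with a quick search. The following is a minimal sketch that simply matches every document in the school index created above; the exact hits you see depend on what you have indexed so far −

GET school/_search
{
   "query":{
      "match_all":{}
   }
}

The response should list both school documents (IDs 10 and 16) under hits.hits, in the same format as the search responses shown later in this tutorial.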

Elasticsearch – Home

Elasticsearch is a real-time, distributed, open-source full-text search and analytics engine. It is used in Single Page Application (SPA) projects. Elasticsearch is developed in Java and used by many big organizations around the world. It is licensed under the Apache license version 2.0. In this tutorial, you will learn the basics of Elasticsearch and its important features in detail.

Audience

This tutorial is designed for software professionals who want to learn the basics of Elasticsearch and its programming concepts in simple and easy steps. It describes the components of Elasticsearch with suitable examples.

Prerequisites

Before you begin with this tutorial, you should have a basic understanding of Java, JSON, search engines, and web technologies. Interaction with Elasticsearch happens through its RESTful API, so knowledge of RESTful APIs is recommended. If you are new to any of these concepts, we suggest you take the help of tutorials on these topics before you start with Elasticsearch.

Elasticsearch – Migration between Versions

In any system or software, when we upgrade to a newer version, we need to follow a few steps to maintain the application settings, configurations, data and other things. These steps are required to keep the application stable on the new system and to maintain the integrity of the data (prevent it from getting corrupted).

Follow these steps to upgrade Elasticsearch −

Read the upgrade documentation for your target version on the official Elasticsearch website.

Test the upgraded version in your non-production environments, such as a UAT, E2E, SIT or DEV environment.

Note that rollback to a previous Elasticsearch version is not possible without a data backup. Hence, a data backup is recommended before upgrading to a higher version.

We can upgrade using a full cluster restart or a rolling upgrade. A rolling upgrade upgrades the nodes one at a time and is available for newer versions; there is no service outage when you use the rolling upgrade method for migration.

Steps for Upgrade

Test the upgrade in a dev environment before upgrading your production cluster.

Back up your data. You cannot roll back to an earlier version unless you have a snapshot of your data.

Consider closing machine learning jobs before you start the upgrade process. While machine learning jobs can continue to run during a rolling upgrade, doing so increases the overhead on the cluster during the upgrade.

Upgrade the components of your Elastic Stack in the following order −

Elasticsearch
Kibana
Logstash
Beats
APM Server

Upgrading from 6.6 or Earlier

To upgrade directly to Elasticsearch 7.1.0 from versions 6.0-6.6, you must manually reindex any 5.x indices you need to carry forward, and perform a full cluster restart.

Full Cluster Restart

The process of a full cluster restart involves shutting down each node in the cluster, upgrading each node to 7.x and then restarting the cluster.

Following are the high-level steps that need to be carried out for a full cluster restart −

Disable shard allocation
Stop indexing and perform a synced flush
Shut down all nodes
Upgrade all nodes
Upgrade any plugins
Start each upgraded node
Wait for all nodes to join the cluster and report a status of yellow
Re-enable allocation

Once allocation is re-enabled, the cluster starts allocating the replica shards to the data nodes. At this point, it is safe to resume indexing and searching, but your cluster will recover more quickly if you can wait until all primary and replica shards have been successfully allocated and the status of all nodes is green.
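As a minimal sketch of the allocation-related steps listed above (assuming an Elasticsearch 7.x cluster with default settings), disabling replica allocation and performing a synced flush before shutting the nodes down typically looks like this; note that the synced flush API applies to the 7.x versions discussed here and has since been deprecated in favour of an ordinary flush −

PUT _cluster/settings
{
   "persistent": {
      "cluster.routing.allocation.enable": "primaries"
   }
}

POST _flush/synced

After all upgraded nodes have rejoined the cluster, allocation can be re-enabled by resetting the same setting to its default −

PUT _cluster/settings
{
   "persistent": {
      "cluster.routing.allocation.enable": null
   }
}

Once this last call succeeds, the cluster returns to its default allocation behaviour and begins recovering replica shards, as described above.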

Elasticsearch – Index APIs

These APIs are responsible for managing all the aspects of an index, such as settings, aliases, mappings and index templates.

Create Index

This API helps you to create an index. An index can be created automatically when a user passes JSON objects to an index that does not exist yet, or it can be created explicitly beforehand. To create an index, you just need to send a PUT request with settings, mappings and aliases, or a simple request without a body.

PUT colleges

On running the above code, we get the output as shown below −

{
   "acknowledged" : true,
   "shards_acknowledged" : true,
   "index" : "colleges"
}

We can also add some settings to the above command −

PUT colleges
{
   "settings" : {
      "index" : {
         "number_of_shards" : 3,
         "number_of_replicas" : 2
      }
   }
}

On running the above code, we get the output as shown below −

{
   "acknowledged" : true,
   "shards_acknowledged" : true,
   "index" : "colleges"
}

Delete Index

This API helps you to delete an index. You just need to send a DELETE request with the name of that particular index.

DELETE /colleges

You can delete all indices by just using _all or *.

Get Index

This API can be called by just sending a GET request to one or more indices. It returns information about those indices.

GET colleges

On running the above code, we get the output as shown below −

{
   "colleges" : {
      "aliases" : {
         "alias_1" : { },
         "alias_2" : {
            "filter" : {
               "term" : {
                  "user" : "pkay"
               }
            },
            "index_routing" : "pkay",
            "search_routing" : "pkay"
         }
      },
      "mappings" : { },
      "settings" : {
         "index" : {
            "creation_date" : "1556245406616",
            "number_of_shards" : "1",
            "number_of_replicas" : "1",
            "uuid" : "3ExJbdl2R1qDLssIkwDAug",
            "version" : {
               "created" : "7000099"
            },
            "provided_name" : "colleges"
         }
      }
   }
}

You can get the information of all the indices by using _all or *.

Index Exists

The existence of an index can be determined by sending a HEAD request to that index. If the HTTP response is 200, it exists; if it is 404, it does not.

HEAD colleges

On running the above code, we get the output as shown below −

200-OK

Index Settings

You can get the index settings by just appending the _settings keyword at the end of the URL.

GET /colleges/_settings

On running the above code, we get the output as shown below −

{
   "colleges" : {
      "settings" : {
         "index" : {
            "creation_date" : "1556245406616",
            "number_of_shards" : "1",
            "number_of_replicas" : "1",
            "uuid" : "3ExJbdl2R1qDLssIkwDAug",
            "version" : {
               "created" : "7000099"
            },
            "provided_name" : "colleges"
         }
      }
   }
}

Index Stats

This API helps you to extract statistics about a particular index. You just need to send a GET request with the index URL and the _stats keyword at the end; sent without an index name, as below, it returns the statistics of all indices.

GET /_stats

On running the above code, we get the output as shown below (truncated) −

...
   },
   "request_cache" : {
      "memory_size_in_bytes" : 849,
      "evictions" : 0,
      "hit_count" : 1171,
      "miss_count" : 4
   },
   "recovery" : {
      "current_as_source" : 0,
      "current_as_target" : 0,
      "throttle_time_in_millis" : 0
   }
}
...

Flush

The flush process of an index makes sure that any data currently persisted only in the transaction log is also permanently persisted in Lucene. This reduces recovery times, as that data does not need to be reindexed from the transaction log after the Lucene index is opened.

POST colleges/_flush

On running the above code, we get the output as shown below −

{
   "_shards" : {
      "total" : 2,
      "successful" : 1,
      "failed" : 0
   }
}
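The opening sentence of this chapter also mentions aliases, which are alternative names that point at one or more indices. As a small, hedged sketch (the alias name colleges_alias is purely illustrative and is not used elsewhere in this tutorial), an alias can be added to the colleges index with the _aliases API −

POST /_aliases
{
   "actions" : [
      { "add" : { "index" : "colleges", "alias" : "colleges_alias" } }
   ]
}

Requests sent to colleges_alias are then routed to the colleges index, and GET colleges would list the alias in its aliases section, similar to the alias_1 and alias_2 entries shown above.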

Elasticsearch – Document APIs

Elasticsearch provides single-document APIs and multi-document APIs, where the API call targets a single document and multiple documents respectively.

Index API

It helps to add or update a JSON document in an index when a request is made to that respective index with a specific mapping. For example, the following request will add the JSON object to the index schools under the _doc type −

PUT schools/_doc/5
{
   "name":"City School",
   "description":"ICSE",
   "street":"West End",
   "city":"Meerut",
   "state":"UP",
   "zip":"250002",
   "location":[28.9926174, 77.692485],
   "fees":3500,
   "tags":["fully computerized"],
   "rating":"4.5"
}

On running the above code, we get the following result −

{
   "_index" : "schools",
   "_type" : "_doc",
   "_id" : "5",
   "_version" : 1,
   "result" : "created",
   "_shards" : {
      "total" : 2,
      "successful" : 1,
      "failed" : 0
   },
   "_seq_no" : 2,
   "_primary_term" : 1
}

Automatic Index Creation

When a request is made to add a JSON object to a particular index and that index does not exist, this API automatically creates the index and also the underlying mapping for that particular JSON object. This functionality can be disabled by changing the values of the following parameters to false in the elasticsearch.yml file.

action.auto_create_index:false
index.mapper.dynamic:false

You can also restrict the auto-creation of indices, so that only index names matching specific patterns are allowed, by changing the value of the following parameter −

action.auto_create_index:+acc*,-bank*

Note − Here + indicates allowed and - indicates not allowed.

Versioning

Elasticsearch also provides a version control facility. We can use the version query parameter to specify the version of a particular document.

PUT schools/_doc/5?version=7&version_type=external
{
   "name":"Central School",
   "description":"CBSE Affiliation",
   "street":"Nagan",
   "city":"paprola",
   "state":"HP",
   "zip":"176115",
   "location":[31.8955385, 76.8380405],
   "fees":2200,
   "tags":["Senior Secondary", "beautiful campus"],
   "rating":"3.3"
}

On running the above code, we get the following result −

{
   "_index" : "schools",
   "_type" : "_doc",
   "_id" : "5",
   "_version" : 7,
   "result" : "updated",
   "_shards" : {
      "total" : 2,
      "successful" : 1,
      "failed" : 0
   },
   "_seq_no" : 3,
   "_primary_term" : 1
}

Versioning is a real-time process and is not affected by real-time search operations. The two most important types of versioning are −

Internal Versioning

Internal versioning is the default; the version starts at 1 and increments with each update, deletes included.

External Versioning

It is used when the versioning of the documents is stored in an external system, such as a third-party versioning system. To enable this functionality, we need to set version_type to external. Here Elasticsearch will store the version number as designated by the external system and will not increment it automatically.

Operation Type

The operation type is used to force a create operation. This helps to avoid overwriting an existing document.

PUT chapter/_doc/1?op_type=create
{
   "Text":"this is chapter one"
}

On running the above code, we get the following result −

{
   "_index" : "chapter",
   "_type" : "_doc",
   "_id" : "1",
   "_version" : 1,
   "result" : "created",
   "_shards" : {
      "total" : 2,
      "successful" : 1,
      "failed" : 0
   },
   "_seq_no" : 0,
   "_primary_term" : 1
}

Automatic ID Generation

When an ID is not specified in the index operation, Elasticsearch automatically generates an ID for that document.
POST chapter/_doc/
{
   "user" : "tpoint",
   "post_date" : "2018-12-25T14:12:12",
   "message" : "Elasticsearch Tutorial"
}

On running the above code, we get the following result −

{
   "_index" : "chapter",
   "_type" : "_doc",
   "_id" : "PVghWGoB7LiDTeV6LSGu",
   "_version" : 1,
   "result" : "created",
   "_shards" : {
      "total" : 2,
      "successful" : 1,
      "failed" : 0
   },
   "_seq_no" : 1,
   "_primary_term" : 1
}

Get API

This API helps to extract a JSON document by performing a GET request for a particular document.

GET schools/_doc/5

On running the above code, we get the following result −

{
   "_index" : "schools",
   "_type" : "_doc",
   "_id" : "5",
   "_version" : 7,
   "_seq_no" : 3,
   "_primary_term" : 1,
   "found" : true,
   "_source" : {
      "name" : "Central School",
      "description" : "CBSE Affiliation",
      "street" : "Nagan",
      "city" : "paprola",
      "state" : "HP",
      "zip" : "176115",
      "location" : [ 31.8955385, 76.8380405 ],
      "fees" : 2200,
      "tags" : [ "Senior Secondary", "beautiful campus" ],
      "rating" : "3.3"
   }
}

This operation is real time and is not affected by the refresh rate of the index. You can also specify a version, in which case Elasticsearch will fetch only that version of the document. You can also specify _all in the request, so that Elasticsearch searches for that document ID in every type and returns the first matched document. You can also specify only the fields you want in your result from that particular document.

GET schools/_doc/5?_source_includes=name,fees

On running the above code, we get the following result −

{
   "_index" : "schools",
   "_type" : "_doc",
   "_id" : "5",
   "_version" : 7,
   "_seq_no" : 3,
   "_primary_term" : 1,
   "found" : true,
   "_source" : {
      "fees" : 2200,
      "name" : "Central School"
   }
}

You can also fetch the source part in your result by just adding the _source part to your GET request.

GET schools/_doc/5?_source

On running the above code, we get the following result −

{
   "_index" : "schools",
   "_type" : "_doc",
   "_id" : "5",
   "_version" : 7,
   "_seq_no" : 3,
   "_primary_term" : 1,
   "found" : true,
   "_source" : {
      "name" : "Central School",
      "description" : "CBSE Affiliation",
      "street" : "Nagan",
      "city" : "paprola",
      "state" : "HP",
      "zip" : "176115",
      "location" : [ 31.8955385, 76.8380405 ],
      "fees" : 2200,
      "tags" : [ "Senior Secondary", "beautiful campus" ],
      "rating" : "3.3"
   }
}

You can also refresh the shard before doing the GET operation by setting the refresh parameter to true.

Delete API

You can delete a particular index, mapping or document by sending an HTTP DELETE request to Elasticsearch.

DELETE schools/_doc/4

On running the above code, we get the following result −

{
   "found":true,
   "_index":"schools",
   "_type":"school",
   "_id":"4",
   "_version":2,
   "_shards":{"total":2, "successful":1, "failed":0}
}

The version of the document can be specified to delete that particular version. A routing parameter can be specified to delete the document belonging to a particular user, and the operation fails if the document does not belong to that particular user.
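This chapter opened by mentioning multi-document APIs. As a small, hedged illustration of that side of the API, several documents can be fetched in one round trip with the _mget endpoint; the document IDs reused here (5 in schools and 10 in school) come from the data indexed earlier in this tutorial −

GET /_mget
{
   "docs" : [
      { "_index" : "schools", "_id" : "5" },
      { "_index" : "school", "_id" : "10" }
   ]
}

The response contains a docs array with one entry per requested document, each in the same shape as a single-document GET response.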

Elasticsearch – Search APIs

This API is used to search content in Elasticsearch. A user can search by sending a GET request with a query string as a parameter, or post a query in the message body of a POST request. Mainly, all the search APIs are multi-index, multi-type.

Multi-Index

Elasticsearch allows us to search for documents present in all the indices or in some specific indices. For example, if we need to search all the documents whose city field contains paprola, we can do as shown here −

GET /_all/_search?q=city:paprola

On running the above code, we get the following response −

{
   "took" : 33,
   "timed_out" : false,
   "_shards" : {
      "total" : 7,
      "successful" : 7,
      "skipped" : 0,
      "failed" : 0
   },
   "hits" : {
      "total" : {
         "value" : 1,
         "relation" : "eq"
      },
      "max_score" : 0.9808292,
      "hits" : [
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "5",
            "_score" : 0.9808292,
            "_source" : {
               "name" : "Central School",
               "description" : "CBSE Affiliation",
               "street" : "Nagan",
               "city" : "paprola",
               "state" : "HP",
               "zip" : "176115",
               "location" : [ 31.8955385, 76.8380405 ],
               "fees" : 2200,
               "tags" : [ "Senior Secondary", "beautiful campus" ],
               "rating" : "3.3"
            }
         }
      ]
   }
}

URI Search

Many parameters can be passed in a search operation as Uniform Resource Identifier query parameters −

q − This parameter is used to specify the query string.

lenient − Format-based errors can be ignored by just setting this parameter to true. It is false by default.

fields − This parameter is used to restrict the response to selected fields.

sort − We can get sorted results by using this parameter; the possible values are fieldName, fieldName:asc and fieldName:desc.

timeout − We can restrict the search time by using this parameter, so that the response only contains the hits found within the specified time. By default, there is no timeout.

terminate_after − We can restrict the response to a specified number of documents for each shard, upon reaching which the query will terminate early. By default, there is no terminate_after.

from − The starting index of the hits to return. Defaults to 0.

size − It denotes the number of hits to return. Defaults to 10.

Request Body Search

We can also specify a query using the query DSL in the request body, and there are many examples already given in previous chapters. One such example is given here −

POST /schools/_search
{
   "query":{
      "query_string":{
         "query":"up"
      }
   }
}

On running the above code, we get the following response −

{
   "took" : 11,
   "timed_out" : false,
   "_shards" : {
      "total" : 1,
      "successful" : 1,
      "skipped" : 0,
      "failed" : 0
   },
   "hits" : {
      "total" : {
         "value" : 1,
         "relation" : "eq"
      },
      "max_score" : 0.47000363,
      "hits" : [
         {
            "_index" : "schools",
            "_type" : "school",
            "_id" : "4",
            "_score" : 0.47000363,
            "_source" : {
               "name" : "City Best School",
               "description" : "ICSE",
               "street" : "West End",
               "city" : "Meerut",
               "state" : "UP",
               "zip" : "250002",
               "location" : [ 28.9926174, 77.692485 ],
               "fees" : 3500,
               "tags" : [ "fully computerized" ],
               "rating" : "4.5"
            }
         }
      ]
   }
}
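As a small, hedged sketch combining several of the URI parameters listed above (the field names state and fees come from the schools documents indexed earlier; the exact hits depend on what you have indexed), a single request might look like this −

GET /schools/_search?q=state:UP&sort=fees:desc&from=0&size=5&timeout=10s

This searches the schools index for documents whose state field matches UP, sorts them by fees in descending order, and returns at most five hits, giving up after ten seconds.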

Elasticsearch – API Conventions

An Application Programming Interface (API) on the web is a group of function calls or other programming instructions used to access the software component of that particular web application. For example, the Facebook API helps a developer to create applications by accessing data or other functionalities from Facebook, such as a date of birth or a status update. Elasticsearch provides a REST API, which is accessed as JSON over HTTP. Elasticsearch uses some conventions which we shall discuss now.

Multiple Indices

Most operations in these APIs, mainly searching and other operations, apply to one or more indices. This helps the user to search in multiple places, or across all the available data, by executing a query just once. Many different notations are used to perform operations on multiple indices. We will discuss a few of them here in this chapter.

Comma Separated Notation

POST /index1,index2,index3/_search

Request Body

{
   "query":{
      "query_string":{
         "query":"any_string"
      }
   }
}

Response

JSON objects from index1, index2 and index3 containing any_string.

_all Keyword for All Indices

POST /_all/_search

Request Body

{
   "query":{
      "query_string":{
         "query":"any_string"
      }
   }
}

Response

JSON objects from all indices containing any_string.

Wildcards ( * , + , - )

POST /school*/_search

Request Body

{
   "query":{
      "query_string":{
         "query":"CBSE"
      }
   }
}

Response

JSON objects from all indices which start with school, containing CBSE.

Alternatively, you can use the following code as well −

POST /school*,-schools_gov/_search

Request Body

{
   "query":{
      "query_string":{
         "query":"CBSE"
      }
   }
}

Response

JSON objects from all indices which start with school, but not from schools_gov, containing CBSE.

There are also some URL query string parameters −

ignore_unavailable − No error will occur and the operation will not be stopped if one or more indices present in the URL do not exist. For example, the schools index exists, but book_shops does not.

POST /school*,book_shops/_search

Request Body

{
   "query":{
      "query_string":{
         "query":"CBSE"
      }
   }
}

Response

{
   "error":{
      "root_cause":[{
         "type":"index_not_found_exception",
         "reason":"no such index",
         "resource.type":"index_or_alias",
         "resource.id":"book_shops",
         "index":"book_shops"
      }],
      "type":"index_not_found_exception",
      "reason":"no such index",
      "resource.type":"index_or_alias",
      "resource.id":"book_shops",
      "index":"book_shops"
   },
   "status":404
}

Consider the following code −

POST /school*,book_shops/_search?ignore_unavailable=true

Request Body

{
   "query":{
      "query_string":{
         "query":"CBSE"
      }
   }
}

Response (no error)

JSON objects from all indices which start with school, containing CBSE.

allow_no_indices

A true value of this parameter will prevent an error if a URL with a wildcard results in no indices. For example, there is no index that starts with schools_pri −

POST /schools_pri*/_search?allow_no_indices=true

Request Body

{
   "query":{
      "match_all":{}
   }
}

Response (no errors)

{
   "took":1,
   "timed_out": false,
   "_shards":{"total":0, "successful":0, "failed":0},
   "hits":{"total":0, "max_score":0.0, "hits":[]}
}

expand_wildcards

This parameter decides whether the wildcards need to be expanded to open indices, closed indices, or both. The value of this parameter can be open, closed, none or all.
For example, close the index schools −

POST /schools/_close

Response

{"acknowledged":true}

Consider the following code −

POST /school*/_search?expand_wildcards=closed

Request Body

{
   "query":{
      "match_all":{}
   }
}

Response

{
   "error":{
      "root_cause":[{
         "type":"index_closed_exception",
         "reason":"closed",
         "index":"schools"
      }],
      "type":"index_closed_exception",
      "reason":"closed",
      "index":"schools"
   },
   "status":403
}

Date Math Support in Index Names

Elasticsearch offers the ability to search indices according to date and time. We need to specify the date and time in a specific format. For example, the index accountdetail-2015.12.30 will store the bank account details of 30th December 2015. Mathematical operations can be performed to get the details for a particular date or a range of dates and times.

Format of a date math index name −

<static_name{date_math_expr{date_format|time_zone}}>

/<accountdetail-{now-2d{YYYY.MM.dd|utc}}>/_search

static_name is the part of the expression which remains the same in every date math index, such as accountdetail. date_math_expr contains the mathematical expression that determines the date and time dynamically, such as now-2d. date_format contains the format in which the date is written in the index, such as YYYY.MM.dd. If today's date is 30th December 2015, then <accountdetail-{now-2d{YYYY.MM.dd}}> will resolve to accountdetail-2015.12.28.

Expression                              Resolves to
<accountdetail-{now-d}>                 accountdetail-2015.12.29
<accountdetail-{now-M}>                 accountdetail-2015.11.30
<accountdetail-{now{YYYY.MM}}>          accountdetail-2015.12

We will now see some of the common options available in Elasticsearch that can be used to get the response in a specified format.

Pretty Results

We can get the response as a well-formatted JSON object by just appending the URL query parameter pretty=true.

POST /schools/_search?pretty=true

Request Body

{
   "query":{
      "match_all":{}
   }
}

Response (truncated)

...
{
   "_index" : "schools",
   "_type" : "school",
   "_id" : "1",
   "_score" : 1.0,
   "_source":{
      "name":"Central School",
      "description":"CBSE Affiliation",
      "street":"Nagan",
      "city":"paprola",
      "state":"HP",
      "zip":"176115",
      "location": [31.8955385, 76.8380405],
      "fees":2000,
      "tags":["Senior Secondary", "beautiful campus"],
      "rating":"3.5"
   }
}
...

Human Readable Output

This option can change the statistical responses either into human-readable form (if human=true) or computer-readable form (if human=false). For example, with human=true a distance is returned as distance_kilometer = 20KM, whereas with human=false it is returned as distance_meter = 20000, which is convenient when the response needs to be processed by another computer program.

Response Filtering

We can filter the response down to fewer fields by adding them in the filter_path parameter. For example −

POST /schools/_search?filter_path=hits.total

Request Body

{
   "query":{
      "match_all":{}
   }
}

Response

{"hits":{"total":3}}
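As a further hedged sketch of response filtering (the path used here is purely illustrative; filter_path accepts comma-separated, dot-notation paths), you could keep only the school names from the hits −

POST /schools/_search?filter_path=hits.hits._source.name

{
   "query":{
      "match_all":{}
   }
}

The response would then contain only the hits.hits array, with each entry reduced to its _source.name value.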

Elasticsearch – Basic Concepts

Elasticsearch is an Apache Lucene-based search server. It was developed by Shay Banon and published in 2010. It is now maintained by Elasticsearch BV. Its latest version is 7.0.0. Elasticsearch is a real-time, distributed, open-source full-text search and analytics engine. It is accessible through a RESTful web service interface and uses schema-less JSON (JavaScript Object Notation) documents to store data. It is built on the Java programming language, which enables Elasticsearch to run on different platforms. It enables users to explore very large amounts of data at very high speed.

General Features

The general features of Elasticsearch are as follows −

Elasticsearch is scalable up to petabytes of structured and unstructured data.

Elasticsearch can be used as a replacement for document stores like MongoDB and RavenDB.

Elasticsearch uses denormalization to improve search performance.

Elasticsearch is one of the popular enterprise search engines and is currently used by many big organizations like Wikipedia, The Guardian, StackOverflow, GitHub etc.

Elasticsearch is open source and available under the Apache license version 2.0.

Key Concepts

The key concepts of Elasticsearch are as follows −

Node

It refers to a single running instance of Elasticsearch. A single physical or virtual server can accommodate multiple nodes, depending on the capabilities of its physical resources like RAM, storage and processing power.

Cluster

It is a collection of one or more nodes. A cluster provides collective indexing and search capabilities across all the nodes for the entire data.

Index

It is a collection of different types of documents and their properties. An index also uses the concept of shards to improve performance. For example, a set of documents may contain the data of a social networking application.

Document

It is a collection of fields defined in a specific manner in JSON format. Every document belongs to a type and resides inside an index. Every document is associated with a unique identifier called the UID.

Shard

Indexes are horizontally subdivided into shards. This means each shard contains all the properties of a document but holds fewer JSON objects than the index. The horizontal separation makes a shard an independent unit that can be stored on any node. A primary shard is the original horizontal part of an index; these primary shards are then replicated into replica shards.

Replicas

Elasticsearch allows a user to create replicas of their indexes and shards. Replication not only helps in increasing the availability of data in case of failure, but also improves search performance by carrying out parallel search operations on these replicas.

Advantages

Elasticsearch is developed in Java, which makes it compatible with almost every platform.

Elasticsearch is near real time; in other words, an added document becomes searchable in this engine within about one second.

Elasticsearch is distributed, which makes it easy to scale and integrate in any big organization.

Creating full backups is easy by using the concept of a gateway, which is present in Elasticsearch.

Handling multi-tenancy is very easy in Elasticsearch when compared to Apache Solr.

Elasticsearch uses JSON objects as responses, which makes it possible to invoke the Elasticsearch server from a large number of different programming languages.

Elasticsearch supports almost every document type except those that do not support text rendering.
Disadvantages

Elasticsearch does not support multiple formats for handling request and response data (it is only possible in JSON), unlike Apache Solr, where CSV, XML and JSON formats are possible.

Occasionally, Elasticsearch can run into split-brain situations.

Comparison between Elasticsearch and RDBMS

In Elasticsearch, an index is similar to a table in an RDBMS (Relational Database Management System). Every table is a collection of rows, just as every index is a collection of documents in Elasticsearch. The following table gives a direct comparison between these terms −

Elasticsearch      RDBMS
Cluster            Database
Shard              Shard
Index              Table
Field              Column
Document           Row
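To make the comparison concrete, here is a minimal, hedged sketch; the customers index and its fields are purely illustrative and are not used elsewhere in this tutorial. Indexing the JSON document below is roughly the Elasticsearch equivalent of inserting a row into a customers table, where each JSON key plays the role of a column −

PUT customers/_doc/1
{
   "name" : "Ravi Kumar",
   "city" : "Delhi",
   "balance" : 12000
}

Here the customers index corresponds to the table, document ID 1 to the row's key, and name, city and balance to the columns.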