Kibana – Working With Graphs

In this chapter, we will discuss the two types of graphs used in visualization −

Line Graph
Area Graph

Line Graph

To start with, let us create a visualization, choosing a line graph to display the data, and use countriesdata as the index. We need to configure the Y-axis and X-axis, and the details for the same are shown below −

For Y-axis

Observe that we have taken Max as the aggregation. So here we are going to present the data as a line graph. We will plot a graph that shows the maximum population country-wise. The field we have taken is Population, since we need the maximum population per country.

For X-axis

On the X-axis we have taken Terms as the aggregation, Country.keyword as the field, metric: Max Population for Order By, and an order size of 5. So it will plot the top 5 countries by maximum population. After applying the changes, you can see the line graph as shown below −

So we have the maximum population in China, followed by India, the United States, Indonesia and Brazil as the top 5 countries by population.

Now, let us save this line graph so that we can use it in a dashboard later. Click Confirm Save and the visualization is saved.
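The chart above corresponds to a terms aggregation ordered by a max sub-aggregation in Elasticsearch. As a reference, here is a minimal sketch of the equivalent query, runnable in Dev Tools (the aggregation names top_countries and max_population are illustrative, and the index name assumes the countriesdata-28.12.2018 index created in the Introduction chapter):

GET countriesdata-28.12.2018/_search
{
   "size": 0,
   "aggs": {
      "top_countries": {
         "terms": {
            "field": "Country.keyword",
            "size": 5,
            "order": { "max_population": "desc" }
         },
         "aggs": {
            "max_population": { "max": { "field": "Population" } }
         }
      }
   }
}

The response buckets should list China, India, United States, Indonesia and Brazil with their maximum Population values, matching the line graph.

Area Graph

Go to Visualization and choose Area with countriesdata as the index. We need to select the Y-axis and X-axis. We will plot an area graph of the maximum area country-wise. The X-axis and Y-axis will be as shown below −

After you click the Apply changes button, the output that we can see is as shown below −

From the graph, we can observe that Russia has the largest area, followed by Canada, the United States, China and Brazil. Save the visualization to use it later.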
Kibana – Introduction To ELK Stack

Kibana is an open source visualization tool mainly used to analyze a large volume of logs in the form of line graphs, bar graphs, pie charts, heatmaps, etc. Kibana works in sync with Elasticsearch and Logstash, which together form the so-called ELK stack. ELK stands for Elasticsearch, Logstash, and Kibana. ELK is one of the popular log management platforms used worldwide for log analysis.

In the ELK stack −

Logstash extracts the logging data or other events from different input sources. It processes the events and later stores them in Elasticsearch.
Kibana is a visualization tool, which accesses the logs from Elasticsearch and is able to display them to the user in the form of line graphs, bar graphs, pie charts, etc.

In this tutorial, we will work closely with Kibana and Elasticsearch and visualize the data in different forms. In this chapter, let us understand how to work with the ELK stack together. Besides, you will also see how to −

Load CSV data from Logstash to Elasticsearch.
Use indices from Elasticsearch in Kibana.

Load CSV data from Logstash to Elasticsearch

We are going to use CSV data to upload data using Logstash to Elasticsearch. To work on data analysis, we can get data from the kaggle.com website. Kaggle.com has all types of data uploaded and users can use it to work on data analysis. We have taken the countries.csv data from here: https://www.kaggle.com/fernandol/countries-of-the-world. You can download the csv file and use it.

The csv file which we are going to use has the following details.

File name − countriesdata.csv
Columns − "Country", "Region", "Population", "Area"

You can also create a dummy csv file and use it. We will be using Logstash to dump this data from countriesdata.csv to Elasticsearch.

Start Elasticsearch and Kibana in your terminal and keep them running. We have to create the config file for Logstash, which will have details about the columns of the CSV file and also other details, as shown in the logstash config file given below −

input {
   file {
      path => "C:/kibanaproject/countriesdata.csv"
      start_position => "beginning"
      sincedb_path => "NUL"
   }
}
filter {
   csv {
      separator => ","
      columns => ["Country","Region","Population","Area"]
   }
   mutate { convert => ["Population", "integer"] }
   mutate { convert => ["Area", "integer"] }
}
output {
   elasticsearch {
      hosts => ["localhost:9200"]
      index => "countriesdata-%{+dd.MM.YYYY}"
   }
   stdout { codec => json_lines }
}

In the config file, we have created 3 components −

Input
We need to specify the path of the input file, which in our case is a csv file. The path where the csv file is stored is given to the path field.

Filter
Will have the csv component with the separator used, which in our case is a comma, and also the columns available in our csv file. As Logstash considers all incoming data as strings, in case we want any column to be used as an integer or float, the same has to be specified using mutate as shown above.

Output
For output, we need to specify where to put the data. Here, in our case, we are using Elasticsearch. The data required to be given to Elasticsearch is the hosts where it is running; we have mentioned it as localhost. The next field is index, to which we have given the name countriesdata-currentdate. We have to use the same index in Kibana once the data is updated in Elasticsearch.

Save the above config file as logstash_countries.conf. Note that we need to give the path of this config to the logstash command in the next step.
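Before running Logstash, it helps to know what to expect in Elasticsearch. Assuming the filter above works as intended, a single CSV row should be stored as a document shaped roughly like this (a sketch; the values and the Logstash-added metadata fields are illustrative):

{
   "Country": "China",
   "Region": "ASIA (EX. NEAR EAST)",
   "Population": 1313973713,
   "Area": 9596960,
   "@timestamp": "2018-12-28T10:15:30.000Z",
   "@version": "1"
}

Population and Area arrive as integers because of the mutate/convert settings; every other column stays a string.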
To load the data from the csv file to Elasticsearch, we need to start the Elasticsearch server −

Now, open http://localhost:9200 in the browser to confirm that Elasticsearch is running successfully.

We have Elasticsearch running. Now go to the path where Logstash is installed and run the following command to upload the data to Elasticsearch.

> logstash -f logstash_countries.conf

The above screen shows data loading from the CSV file to Elasticsearch. To know if we have the index created in Elasticsearch, we can check the same as follows −

We can see the countriesdata-28.12.2018 index created as shown above. The details of the index − countriesdata-28.12.2018 − are as follows −

Note that the mapping details with properties are created when data is uploaded from Logstash to Elasticsearch.
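If you prefer commands over the screenshots, the same checks can be done from Dev Tools (a sketch; the date suffix in the index name will match the day Logstash ran):

GET _cat/indices/countriesdata-*?v
GET countriesdata-28.12.2018/_mapping
GET countriesdata-28.12.2018/_count

The first lists the index with its document count and size, the second shows the mapping (properties) created during upload, and the third returns the number of documents indexed.

Use Data from Elasticsearch in Kibana

Currently, we have Kibana running on localhost, port 5601 − http://localhost:5601. The UI of Kibana is shown here −

Note that we already have Kibana connected to Elasticsearch and we should be able to see the index countriesdata-28.12.2018 inside Kibana.

In the Kibana UI, click on the Management menu option on the left side −

Now, click Index Management −

The indices present in Elasticsearch are displayed in Index Management. The index we are going to use in Kibana is countriesdata-28.12.2018.

Thus, as we already have the Elasticsearch index in Kibana, we will next understand how to use the index in Kibana to visualize data in the form of pie charts, bar graphs, line charts, etc.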
Kibana – Create Visualization

We can visualize the data we have in the form of bar charts, line graphs, pie charts, etc. In this chapter, we will understand how to create a visualization.

Create Visualization

Go to Kibana Visualization as shown below −

We do not have any visualization created, so it shows blank and there is a button to create one.

Click the button Create a visualization as shown in the screen above and it will take you to the screen shown below −

Here you can select the option which you need to visualize your data. We will understand each one of them in detail in the upcoming chapters. Right now, we will select the pie chart to start with.

Once you select the visualization type, you need to select the index you want to work on, and it will take you to the screen shown below −

Now we have a default pie chart. We will use countriesdata-28.12.2018 to get the count of regions available in the countries data in pie chart format.

Bucket and Metric Aggregation

The left side has Metrics, which we will select as Count. In Buckets, there are 2 options: Split Slices and Split Chart. We will use the option Split Slices.

Now, select Split Slices and it will display the following options −

Now, select the Aggregation as Terms and it will display more options to be entered as follows −

The Field dropdown will have all the fields from the chosen index countriesdata-28.12.2018. We have chosen the Region field. Note that we have chosen the metric Count for Order By. We will order it Descending and have taken the size as 10. This means that we will get the counts of the top 10 regions from the countries index.

Now, click the Apply changes button as highlighted below and you should see the pie chart updated on the right side.

Pie Chart Display

All the regions are listed in the top right corner with colours, and the same colours are shown in the pie chart. If you hover over the pie chart, it will give the count for the region and also the name of the region as shown below −

So it tells us that Sub-Saharan Africa accounts for 22.77% of the regions in the countries data we have uploaded. The Asia region covers 12.5% and the count is 28.

Now we can save the visualization by clicking on the save button in the top right corner as shown below −

Now, save the visualization so that it can be used later.

We can also get the data we want by using the search option as shown below −

Here we have filtered the data for countries starting with Aus*. We will understand more about pie charts and other visualizations in the upcoming chapters.
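Under the hood, this pie chart is simply a terms aggregation on the Region field. A minimal sketch of the equivalent Dev Tools query (assuming Logstash created the usual keyword sub-field for the Region column, and the aggregation name regions is illustrative):

GET countriesdata-28.12.2018/_search
{
   "size": 0,
   "aggs": {
      "regions": {
         "terms": {
            "field": "Region.keyword",
            "size": 10,
            "order": { "_count": "desc" }
         }
      }
   }
}

Each returned bucket (key and doc_count) maps to one slice of the pie.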
Discuss Kibana

Kibana is an open source, browser based visualization tool mainly used to analyze large volumes of logs in the form of line graphs, bar graphs, pie charts, heat maps, region maps, coordinate maps, gauges, goals, Timelion, etc. The visualization makes it easy to predict or to see the changes in trends of errors or other significant events of the input source. Kibana works in sync with Elasticsearch and Logstash, which together form the so-called ELK stack.
Kibana – Dev Tools

We can use Dev Tools to upload data in Elasticsearch, without using Logstash. We can post, put, delete and search the data we want in Kibana using Dev Tools.

To create a new index in Kibana, we can use the following command in Dev Tools −

Create Index Using PUT

The command to create an index is as shown here −

PUT /usersdata?pretty

Once you execute this, an empty index usersdata is created.

We are done with the index creation. Now we will add data to the index −

Add Data to Index Using PUT

You can add data to an index as follows −

We will add one more record to the usersdata index −

So we have 2 records in the usersdata index.

Fetch Data from Index Using GET

We can get the details of record 1 as follows −

You can get all records as follows −

Thus, we can get all the records from usersdata as shown above.

Update Data in Index Using PUT

To update a record, you can do as follows −

We have changed the name from "Ervin Howell" to "Clementine Bauch". Now we can get all records from the index and see the updated record as follows −

Delete Data from Index Using DELETE

You can delete a record as shown here −

Now if you check the total records, we will have only one record −

We can delete the index we created as follows −

Now if you check the indices available, the usersdata index will no longer be in the list, as we deleted the index.
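Since the screenshots with the exact requests are not reproduced here, the following is a sketch of the full Dev Tools sequence described above (the document bodies are illustrative; the names come from the JSONPlaceholder users sample this chapter appears to use):

PUT /usersdata?pretty

PUT /usersdata/_doc/1
{
   "name": "Leanne Graham",
   "username": "Bret"
}

PUT /usersdata/_doc/2
{
   "name": "Ervin Howell",
   "username": "Antonette"
}

GET /usersdata/_doc/1
GET /usersdata/_search

PUT /usersdata/_doc/2
{
   "name": "Clementine Bauch",
   "username": "Antonette"
}

DELETE /usersdata/_doc/2
GET /usersdata/_count

DELETE /usersdata
GET _cat/indices?v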
Kibana – Loading Sample Data

We have seen how to upload data from Logstash to Elasticsearch. We will upload data using Logstash and Elasticsearch here as well, but this time the data has date, longitude and latitude fields, which we will need in the upcoming chapters. We will also see how to upload data directly in Kibana, if we do not have a CSV file.

In this chapter, we will cover the following topics −

Using Logstash to upload data having date, longitude and latitude fields in Elasticsearch
Using Dev Tools to upload bulk data

Using Logstash to upload data having date, longitude and latitude fields in Elasticsearch

We are going to use data in CSV format, again taken from Kaggle.com, which deals with data that you can use for analysis. The home medical visits data used here is picked up from Kaggle.com.

The following are the fields available in the CSV file −

["Visit_Status","Time_Delay","City","City_id","Patient_Age","Zipcode","Latitude","Longitude","Pathology","Visiting_Date","Id_type","Id_personal","Number_Home_Visits","Is_Patient_Minor","Geo_point"]

The Home_visits.csv is as follows −

The following is the conf file to be used with Logstash −

input {
   file {
      path => "C:/kibanaproject/home_visits.csv"
      start_position => "beginning"
      sincedb_path => "NUL"
   }
}
filter {
   csv {
      separator => ","
      columns => ["Visit_Status","Time_Delay","City","City_id","Patient_Age",
         "Zipcode","Latitude","Longitude","Pathology","Visiting_Date",
         "Id_type","Id_personal","Number_Home_Visits","Is_Patient_Minor","Geo_point"]
   }
   date {
      match => ["Visiting_Date","dd-MM-YYYY HH:mm"]
      target => "Visiting_Date"
   }
   mutate { convert => ["Number_Home_Visits", "integer"] }
   mutate { convert => ["City_id", "integer"] }
   mutate { convert => ["Id_personal", "integer"] }
   mutate { convert => ["Id_type", "integer"] }
   mutate { convert => ["Zipcode", "integer"] }
   mutate { convert => ["Patient_Age", "integer"] }
   mutate {
      convert => { "Longitude" => "float" }
      convert => { "Latitude" => "float" }
   }
   mutate {
      rename => {
         "Longitude" => "[location][lon]"
         "Latitude" => "[location][lat]"
      }
   }
}
output {
   elasticsearch {
      hosts => ["localhost:9200"]
      index => "medicalvisits-%{+dd.MM.YYYY}"
   }
   stdout { codec => json_lines }
}

By default, Logstash considers everything to be uploaded to Elasticsearch as a string. In case your CSV file has a date field, you need to do the following to get the date format.

For the date field −

date {
   match => ["Visiting_Date","dd-MM-YYYY HH:mm"]
   target => "Visiting_Date"
}

In the case of a geo location, Elasticsearch understands the same as −

"location": {
   "lat": 41.565505000000044,
   "lon": 2.2349995750000695
}

So we need to make sure we have Longitude and Latitude in the format Elasticsearch needs. First we need to convert longitude and latitude to float, and later rename them so that they are available as part of the location json object, with lat and lon. The expected end result for a single document is sketched below.
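Here is that expected shape for one CSV row, assuming the filter above works as intended (a sketch; the values are illustrative and only a few of the fifteen columns are shown):

{
   "City": "Barcelona",
   "Patient_Age": 64,
   "Number_Home_Visits": 1,
   "Visiting_Date": "2017-03-12T10:30:00.000Z",
   "location": {
      "lat": 41.3851,
      "lon": 2.1734
   }
}

The Visiting_Date string has been parsed into a real date, and Latitude/Longitude have been folded into the location object.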
The code for these conversions is shown here −

mutate {
   convert => { "Longitude" => "float" }
   convert => { "Latitude" => "float" }
}
mutate {
   rename => {
      "Longitude" => "[location][lon]"
      "Latitude" => "[location][lat]"
   }
}

For converting fields to integers, use the following code −

mutate { convert => ["Number_Home_Visits", "integer"] }
mutate { convert => ["City_id", "integer"] }
mutate { convert => ["Id_personal", "integer"] }
mutate { convert => ["Id_type", "integer"] }
mutate { convert => ["Zipcode", "integer"] }
mutate { convert => ["Patient_Age", "integer"] }

Once the fields are taken care of, run the following command to upload the data to Elasticsearch. Go inside the Logstash bin directory and run the following command.

logstash -f logstash_homevisits.conf

Once done, you should see the index mentioned in the Logstash conf file in Elasticsearch, as shown below −

We can now create an index pattern on the index uploaded above and use it further for creating visualizations.

Using Dev Tools to Upload Bulk Data

We are going to use Dev Tools from the Kibana UI. Dev Tools is helpful for uploading data to Elasticsearch without using Logstash. We can post, put, delete and search the data we want in Kibana using Dev Tools.

In this section, we will try to load sample data in Kibana itself. We can use it to practice with the sample data and play around with Kibana features to get a good understanding of Kibana.

Let us take the json data from the following url and upload the same in Kibana. Similarly, you can try any sample json data to be loaded inside Kibana.

Before we start to upload the sample data, we need to have the json data with indices to be used in Elasticsearch. When we upload it using Logstash, Logstash takes care of adding the indices, and the user does not have to bother about the indices which are required by Elasticsearch.

Normal Json Data

[
   {"type":"act","line_id":1,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"},
   {"type":"scene","line_id":2,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"SCENE I. London. The palace."},
   {"type":"line","line_id":3,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"Enter KING HENRY, LORD JOHN OF LANCASTER, the EARL of WESTMORELAND, SIR WALTER BLUNT, and others"}
]

The json code to be used with Kibana has to be indexed as follows −

{"index":{"_index":"shakespeare","_id":0}}
{"type":"act","line_id":1,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"}
{"index":{"_index":"shakespeare","_id":1}}
{"type":"scene","line_id":2,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"SCENE I. London. The palace."}
{"index":{"_index":"shakespeare","_id":2}}
{"type":"line","line_id":3,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"Enter KING HENRY, LORD JOHN OF LANCASTER, the EARL of WESTMORELAND, SIR WALTER BLUNT, and others"}

Note the additional action line that goes into the json file before each document − {"index":{"_index":"nameofindex","_id":key}}.
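A quick way to verify this format before converting a whole file is to paste a two-line body into Dev Tools (a sketch, using the first Shakespeare record above):

POST _bulk
{"index":{"_index":"shakespeare","_id":0}}
{"type":"act","line_id":1,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"}

GET shakespeare/_count

If the response reports one document with no errors, the format is right.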
To convert any sample json file into a format compatible with Elasticsearch, here we have a small piece of PHP code which will output the given json file in the format which Elasticsearch wants −

PHP Code

<?php
   // your json file here
   $myfile = fopen("todo.json", "r") or die("Unable to open file!");
   $alldata = fread($myfile, filesize("todo.json"));
   fclose($myfile);
   $farray = json_decode($alldata);
   $index_name = "todo";
   $i = 0;
   // writes a new file to be used in the Kibana dev tool
   $myfile1 = fopen("todonewfile.json", "w") or die("Unable to open file!");
   foreach ($farray as $a => $value) {
      // the extra action line required by the _bulk API
      $_index = json_decode('{"index": {"_index": "'.$index_name.'", "_id": "'.$i.'"}}');
      fwrite($myfile1, json_encode($_index));
      fwrite($myfile1, "\n");
      fwrite($myfile1, json_encode($value));
      fwrite($myfile1, "\n");
      $i++;
   }
?>

We have taken the todo json file from https://jsonplaceholder.typicode.com/todos and used the PHP code to convert it to the format we need to upload in Kibana.

To load the sample data, open the Dev Tools tab as shown below −

We are now going to use the console as shown above. We will take the json data which we got after running it through the PHP code.

The command to be used in Dev Tools to upload the json data is −

POST _bulk

Note that the body of a _bulk request must be newline-delimited JSON, and it must end with a final newline.
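As a sketch, the first converted record from the todos file would be uploaded like this (the document body follows the JSONPlaceholder todos format):

POST _bulk
{"index":{"_index":"todo","_id":"0"}}
{"userId":1,"id":1,"title":"delectus aut autem","completed":false}

GET todo/_count can then be used to confirm how many documents were indexed.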
Kibana – Discover

This chapter discusses the Discover tab in the Kibana UI. We will learn in detail about the following concepts −

Index without date field
Index with date field

Index without date field

Select Discover on the left side menu as shown below −

On the right side, it displays the details of the data available in the countriesdata-28.12.2018 index we created in the previous chapter. On the top left corner, it shows the total number of records available −

We can get the details of the data inside the index (countriesdata-28.12.2018) in this tab. On the top left corner of the screen shown above, we can see buttons like New, Save, Open, Share, Inspect and Auto-refresh. If you click Auto-refresh, it will display the screen as shown below −

You can set the auto-refresh interval by clicking on the seconds, minutes or hours above. Kibana will auto-refresh the screen and fetch fresh data after every interval you set.

The data from the index countriesdata-28.12.2018 is displayed as shown below −

All the fields along with the data are shown row-wise. Click the arrow to expand a row and it will give you the details in Table format or JSON format.

JSON Format

There is a button on the left side called View single document. If you click it, it will display the data present in the row inside the page as shown below −

Though we are getting all the data details here, it is difficult to go through each of them. Now let us try to get the data in tabular format. One way is to expand one of the rows and click the toggle column option available for each field, as shown below −

Click on the Toggle column in table option available for each field and you will notice the data being shown in table format −

Here, we have selected the fields Country, Area, Region and Population. Collapse the expanded row and you should see all the data in tabular format now.

The fields we selected are displayed on the left side of the screen as shown below −

Observe that there are 2 options − Selected fields and Available fields. The fields we have selected to show in tabular format are a part of Selected fields. In case you want to remove any field, you can do so by clicking the remove button shown across the field name under the Selected fields option. Once removed, the field becomes available inside Available fields, where you can add it back by clicking the add button shown across the field you want. You can also use this method to get your data in tabular format by choosing the required fields from Available fields.

We have a search option available in Discover, which we can use to search for data inside the index. Let us try examples related to the search option here −

Suppose you want to search for the country India, you can do as follows −

You can type your search details and click the Update button. If you want to search for countries starting with Aus, you can do so as follows −

Click Update to see the results.

Here, we have two countries starting with Aus*.

The search field has an Options button as shown above. When a user clicks it, it displays a toggle button which, when ON, helps in writing the search query. Turn on query features and type the field name in search; it will display the options available for that field.

For example, the Country field is a string and it displays the following options for a string field −

Similarly, Area is a number field and it displays the following options for a number field −

You can try out different combinations and filter the data as per your choice in the Discover field.
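As a sketch, with the query features turned on, searches such as the following should work in the Discover search bar (KQL syntax; exact behaviour depends on your Kibana version, and the field names are the ones from countriesdata):

Country : "India"
Country : Aus*
Area > 1000000
Country : Aus* and Population > 20000000

The first matches an exact value, the second uses a wildcard, and the last two show range conditions and how conditions combine with and.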
The data inside the Discover tab can be saved using the Save button, so that you can use it in the future. To save the data inside Discover, click on the save button in the top right corner as shown below −

Give a title to your search and click Confirm Save to save it. Once saved, the next time you visit the Discover tab, you can click the Open button in the top right corner to get the saved titles as shown below −

You can also share the data with others using the Share button available in the top right corner. If you click it, you can find sharing options as shown below −

You can share it using CSV Reports or in the form of Permalinks.

The options available on clicking CSV Reports are −

Click Generate CSV to get the report to be shared with others.

The options available on clicking Permalinks are as follows −

The Snapshot option will give a Kibana link which will display the data available in the search currently.

The Saved object option will give a Kibana link which will display the recent data available in your search.

Snapshot − http://localhost:5601/goto/309a983483fccd423950cfb708fabfa5
Saved Object − http://localhost:5601/app/kibana#/discover/40bd89d0-10b1-11e9-9876-4f3d759b471e?_g=()

You can work with the Discover tab and the search options available, and the results obtained can be saved and shared with others.

Index with Date Field

Go to the Discover tab and select the index medicalvisits-26.01.2019 −

It displays the message "No results match your search criteria" for the last 15 minutes on the index we have selected. The index has data for the years 2015, 2016, 2017 and 2018.

Change the time range as shown below −

Click the Absolute tab.

Select the dates From − 1st Jan 2017 and To − 31st Dec 2017, as we will analyze the data for the year 2017.

Click the Go button to apply the time range. It will display the data and a bar chart as follows −

This is the monthly data for the year 2017 −

Since we also have the time stored along with the date, we can filter the data on hours and minutes too. The figure shown above displays the hourly data for the year 2017.
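The same 2017 filter can be reproduced in Dev Tools with a range query, and the monthly bar chart corresponds to a date_histogram aggregation. A minimal sketch (per_month is an illustrative name; on newer Elasticsearch versions, interval is spelled calendar_interval):

GET medicalvisits-*/_search
{
   "size": 0,
   "query": {
      "range": {
         "Visiting_Date": {
            "gte": "2017-01-01",
            "lte": "2017-12-31"
         }
      }
   },
   "aggs": {
      "per_month": {
         "date_histogram": {
            "field": "Visiting_Date",
            "interval": "month"
         }
      }
   }
}

Each bucket returned in per_month matches one bar in the monthly chart above.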
Kibana – Working With Coordinate Map

Coordinate maps in Kibana will show you the geographic area and mark the area with circles based on the aggregation you specify.

Create Index for Coordinate Map

The bucket aggregation used for a coordinate map is the geohash aggregation. For this type of aggregation, the index which you are going to use should have a field of type geo point. A geo point is a combination of latitude and longitude.

We will create an index using Kibana Dev Tools and add bulk data to it. We will add a mapping with the geo_point type that we need. The data that we are going to use is shown inside the bulk command below.

Now, run the following commands in Kibana Dev Tools −

PUT /cities
{
   "mappings": {
      "_doc": {
         "properties": {
            "location": {
               "type": "geo_point"
            }
         }
      }
   }
}

POST /cities/_doc/_bulk?refresh
{"index":{"_id":1}}
{"location": "2.089330000000046,41.47367000000008", "city": "SantCugat"}
{"index":{"_id":2}}
{"location": "2.2947825000000677,41.601800991000076", "city": "Granollers"}
{"index":{"_id":3}}
{"location": "2.1105957495300474,41.5496295760424", "city": "Sabadell"}
{"index":{"_id":4}}
{"location": "2.132605678083895,41.5370461908878", "city": "Barbera"}
{"index":{"_id":5}}
{"location": "2.151270020052683,41.497779918345415", "city": "Cerdanyola"}
{"index":{"_id":6}}
{"location": "2.1364609496220606,41.371303520399344", "city": "Barcelona"}
{"index":{"_id":7}}
{"location": "2.0819450306711165,41.385491966414705", "city": "Sant Just Desvern"}
{"index":{"_id":8}}
{"location": "2.00532082278266,41.542294286427385", "city": "Rubi"}
{"index":{"_id":9}}
{"location": "1.9560805366930398,41.56142635214226", "city": "Viladecavalls"}
{"index":{"_id":10}}
{"location": "2.09205348251486,41.39327140161001", "city": "Esplugas de Llobregat"}

Note that Elasticsearch parses a geo_point string as "latitude,longitude". The values above list the longitude first (these are cities around Barcelona, whose latitude is about 41.5), so if the circles end up plotted in the wrong place, swap the two numbers in each location string.

The above will create an index named cities, of type _doc, where the field location is of type geo_point, and the bulk request then adds the data to the cities index.

We are done creating the cities index with data. Now let us create an index pattern for cities using the Management tab. The details of the fields inside the cities index are shown here −

We can see that location is of type geo_point. We can now use it to create a visualization.

Getting Started with Coordinate Maps

Go to Visualization and select Coordinate Map. Select the index pattern cities and configure the aggregation metric and bucket as shown below −

If you click the Apply changes button, you can see the following screen −

Based on the longitude and latitude, the circles are plotted on the map as shown above.
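The circles on the coordinate map come from geohash bucketing. A minimal sketch of the equivalent geohash_grid aggregation in Dev Tools (grid is an illustrative name; higher precision values produce smaller cells):

GET cities/_search
{
   "size": 0,
   "aggs": {
      "grid": {
         "geohash_grid": {
            "field": "location",
            "precision": 5
         }
      }
   }
}

Each bucket's doc_count determines the size of the circle drawn for that geohash cell.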