Plotly – Package Structure

The Plotly Python package has three main modules, which are given below −

plotly.plotly
plotly.graph_objs
plotly.tools

The plotly.plotly module contains functions that require a response from Plotly's servers. Functions in this module act as the interface between your local machine and Plotly.

The plotly.graph_objs module is the most important module. It contains all of the class definitions for the objects that make up the plots you see. The following graph objects are defined −

Figure
Data
Layout
Different graph traces such as Scatter, Box, Histogram, etc.

All graph objects are dictionary- and list-like objects used to generate and/or modify every feature of a Plotly plot.

The plotly.tools module contains many helpful functions that facilitate and enhance the Plotly experience. Functions for subplot generation, embedding Plotly plots in IPython notebooks, and saving and retrieving your credentials are defined in this module.

A plot is represented by a Figure object, an instance of the Figure class defined in the plotly.graph_objs module. Its constructor takes the following parameters −

import plotly.graph_objs as go
fig = go.Figure(data, layout, frames)

The data parameter is a Python list of all the traces that you wish to plot. A trace is simply the name given to a collection of data that is to be plotted. A trace object is named according to how you want the data displayed on the plotting surface. Plotly provides a number of trace objects such as Scatter, Bar, Pie, Heatmap, etc., each returned by the corresponding constructor in graph_objs. For example, go.Scatter() returns a scatter trace.

import numpy as np
import math   # needed for definition of pi

xpoints = np.arange(0, math.pi*2, 0.05)
ypoints = np.sin(xpoints)
trace0 = go.Scatter(x = xpoints, y = ypoints)
data = [trace0]

The layout parameter defines the appearance of the plot, and plot features which are unrelated to the data.
With it we can change things like the title, axis titles, annotations, legends, spacing, font, and even draw shapes on top of the plot.

layout = go.Layout(title = "Sine wave", xaxis = {"title": "angle"}, yaxis = {"title": "sine"})

A plot can have a plot title as well as axis titles. It may also have annotations to provide other descriptions.

Finally, there is the Figure object created by the go.Figure() constructor. It is a dictionary-like object that contains both the data object and the layout object, and it is what eventually gets plotted.

py.iplot(fig)
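Since graph objects are dictionary- and list-like, the figure that go.Figure() assembles can be sketched as plain Python dictionaries and serialized to JSON. The sketch below uses only the standard library (no plotly or numpy installed); the key names data, layout, title, x, and y mirror the graph objects described above, while the list comprehensions stand in for np.arange() and np.sin().

```python
import json
import math

# Build x and y points for one period of a sine wave, mirroring
# np.arange(0, math.pi*2, 0.05) and np.sin(xpoints).
xpoints = [i * 0.05 for i in range(int(math.pi * 2 / 0.05) + 1)]
ypoints = [math.sin(x) for x in xpoints]

# A trace is a dictionary describing how one collection of data
# is displayed on the plotting surface.
trace0 = {"type": "scatter", "x": xpoints, "y": ypoints}

# A figure combines a list of traces (data) with a layout.
fig = {
    "data": [trace0],
    "layout": {
        "title": "Sine wave",
        "xaxis": {"title": "angle"},
        "yaxis": {"title": "sine"},
    },
}

# The whole figure is JSON-serializable, which is essentially what
# Plotly hands to its rendering engine.
fig_json = json.dumps(fig)
```

This dictionary view is also why layout options can be updated by ordinary key assignment, as later chapters do with fig["layout"].update(...).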

Plotting Inline with Jupyter Notebook

In this chapter, we will study how to do inline plotting with the Jupyter Notebook. In order to display the plot inside the notebook, you need to initiate Plotly's notebook mode as follows −

from plotly.offline import init_notebook_mode
init_notebook_mode(connected = True)

Keep the rest of the script as it is and run the notebook cell by pressing Shift+Enter. The graph will be displayed offline, inside the notebook itself.

import plotly
plotly.tools.set_credentials_file(username = "lathkar", api_key = "************")
from plotly.offline import iplot, init_notebook_mode
init_notebook_mode(connected = True)
import plotly.graph_objs as go
import numpy as np
import math   # needed for definition of pi

xpoints = np.arange(0, math.pi*2, 0.05)
ypoints = np.sin(xpoints)
trace0 = go.Scatter(x = xpoints, y = ypoints)
data = [trace0]
plotly.offline.iplot({"data": data, "layout": go.Layout(title = "Sine wave")})

The Jupyter notebook output will be as shown below −

The plot output shows a toolbar at the top right. It contains buttons to download as png, zoom in and out, box and lasso select, and hover.

Plotly – Online & Offline Plotting

The following chapter deals with the settings for online and offline plotting. Let us first study the settings for online plotting.

Settings for Online Plotting

The data and graph of an online plot are saved in your plot.ly account. Online plots are generated by two methods, both of which create a unique url for the plot and save it in your Plotly account −

py.plot() − returns the unique url and optionally opens it.
py.iplot() − displays the plot in the notebook when working in a Jupyter Notebook.

We shall now display a simple plot of angle in radians vs. its sine value. First, obtain an ndarray object of angles between 0 and 2π using the arange() function from the numpy library. This ndarray object serves as the values on the x axis of the graph. The corresponding sine values of the angles in x, to be displayed on the y axis, are obtained by the following statements −

import numpy as np
import math   # needed for definition of pi

xpoints = np.arange(0, math.pi*2, 0.05)
ypoints = np.sin(xpoints)

Next, create a scatter trace using the Scatter() function in the graph_objs module.

trace0 = go.Scatter(x = xpoints, y = ypoints)
data = [trace0]

Use the above list object as an argument to the plot() function.

py.plot(data, filename = "Sine wave", auto_open = True)

Save the following script as plotly1.py −

import plotly
plotly.tools.set_credentials_file(username = "lathkar", api_key = "********************")
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
import math   # needed for definition of pi

xpoints = np.arange(0, math.pi*2, 0.05)
ypoints = np.sin(xpoints)
trace0 = go.Scatter(x = xpoints, y = ypoints)
data = [trace0]
py.plot(data, filename = "Sine wave", auto_open = True)

Execute the above script from the command line. The resultant plot will be displayed in the browser at the specified URL, as stated below −

$ python plotly1.py
High five! You successfully sent some data to your account on plotly.
View your plot in your browser at https://plot.ly/~lathkar/0

Just above the displayed graph, you will find the tabs Plot, Data, Python & R, and Forking History. Currently, the Plot tab is selected. The Data tab shows a grid containing the x and y data points. From the Python & R tab, you can view the code corresponding to the current plot in Python, R, JSON, Matlab, etc. The following snapshot shows the Python code for the plot as generated above −

Settings for Offline Plotting

Plotly allows you to generate graphs offline and save them on the local machine. The plotly.offline.plot() function creates a standalone HTML file that is saved locally and opened inside your web browser. Use plotly.offline.iplot() when working offline in a Jupyter Notebook to display the plot in the notebook.

Note − Plotly version 1.9.4+ is needed for offline plotting.

Change the plot() function statement in the script and run it. An HTML file named temp-plot.html will be created locally and opened in the web browser.

plotly.offline.plot({"data": data, "layout": go.Layout(title = "hello world")}, auto_open = True)
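The essential behaviour of offline plotting is "serialize the figure and write a self-contained HTML file locally". The toy sketch below imitates only that part with the standard library, so the idea can be seen without Plotly installed; the offline_plot name and the page layout are illustrative stand-ins, not Plotly's actual output (the real function also inlines the plotly.js runtime so the page renders with no network access).

```python
import json
import os
import tempfile

def offline_plot(figure, filename="temp-plot.html"):
    """Toy stand-in for plotly.offline.plot(): write a minimal
    standalone HTML page embedding the figure as JSON and return
    the path of the file that was written."""
    html = (
        "<html><head><title>{}</title></head>"
        "<body><script>var figure = {};</script></body></html>"
    ).format(figure["layout"]["title"], json.dumps(figure))
    path = os.path.join(tempfile.gettempdir(), filename)
    with open(path, "w") as f:
        f.write(html)
    return path

fig = {"data": [{"x": [0, 1], "y": [0, 1]}],
       "layout": {"title": "hello world"}}
path = offline_plot(fig)
```

Opening the returned path in a browser would show the embedded figure data; the real temp-plot.html additionally carries the JavaScript needed to draw it.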

Plotly – Home

This tutorial is about the Canada based technical computing company Plotly, which is also known by its URL, plot.ly. Here, you will learn how to develop data analytics and visualization tools. Moreover, this tutorial describes the features of Plotly's Python graphing library for making interactive and publication-ready graphs, for both online and offline viewing.

Audience

The tutorial is designed for all those who are passionate about learning online graphing, analytics, and statistics tools. Furthermore, it is for those individuals who have a keen interest in understanding how Plotly provides tools for scientific graphing libraries of computer programming languages such as Python, R, MATLAB, Perl, Julia, Arduino, and REST.

Prerequisites

To work with Plotly, you need to create an account on the official website. The details about how to create an account and log in are discussed in the tutorial. If you are new to data analytics, visualization tools, or any of the programming languages like Python, R, MATLAB, Arduino, REST, Julia, and Perl, we suggest you go through tutorials related to these before proceeding with this tutorial.

Plotly – Subplots & Inset Plots

Here, we will understand the concept of subplots and inset plots in Plotly.

Making Subplots

Sometimes it is helpful to compare different views of data side by side, which is what subplots are for. Plotly offers the make_subplots() function in the plotly.tools module. The function returns a Figure object. The following statement creates two subplots in one row.

fig = tools.make_subplots(rows = 1, cols = 2)

We can now add two different traces (the exp and log traces in the example below) to the figure.

fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 2)

The Layout of the figure is further configured by specifying the title, width, height, etc. using the update() method.

fig["layout"].update(height = 600, width = 800, title = "subplots")

Here's the complete script −

from plotly import tools
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import iplot, init_notebook_mode
init_notebook_mode(connected = True)
import numpy as np

x = np.arange(1, 11)
y1 = np.exp(x)
y2 = np.log(x)
trace1 = go.Scatter(x = x, y = y1, name = "exp")
trace2 = go.Scatter(x = x, y = y2, name = "log")
fig = tools.make_subplots(rows = 1, cols = 2)
fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 2)
fig["layout"].update(height = 600, width = 800, title = "subplot")
iplot(fig)

This is the format of your plot grid:
[ (1,1) x1,y1 ] [ (1,2) x2,y2 ]

Inset Plots

To display a subplot as an inset, we need to configure its trace object. First, set the xaxis and yaxis properties of the inset trace to 'x2' and 'y2' respectively. The following statement puts the 'log' trace in the inset.

trace2 = go.Scatter(x = x, y = y2, xaxis = "x2", yaxis = "y2", name = "log")

Secondly, configure the Layout object, where the location of the x and y axes of the inset is defined by the domain property, which specifies its position with respect to the major axis.
xaxis2 = dict(domain = [0.1, 0.5], anchor = "y2"),
yaxis2 = dict(domain = [0.5, 0.9], anchor = "x2")

The complete script to display the log trace in the inset and the exp trace on the main axis is given below −

trace1 = go.Scatter(x = x, y = y1, name = "exp")
trace2 = go.Scatter(x = x, y = y2, xaxis = "x2", yaxis = "y2", name = "log")
data = [trace1, trace2]
layout = go.Layout(
   yaxis = dict(showline = True),
   xaxis2 = dict(domain = [0.1, 0.5], anchor = "y2"),
   yaxis2 = dict(showline = True, domain = [0.5, 0.9], anchor = "x2")
)
fig = go.Figure(data = data, layout = layout)
iplot(fig)

The output is mentioned below −
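The domain values used above are plain fractions of the plotting area, so they can be checked with ordinary Python before handing them to Plotly. The sketch below expresses the same layout as nested dictionaries and verifies that each inset axis stays inside the unit interval; the domain_ok helper is a hypothetical sanity check written for this example, not part of the Plotly API.

```python
# The inset layout expressed as plain dictionaries: domain values
# are fractions of the figure's total width/height.
layout = {
    "xaxis2": {"domain": [0.1, 0.5], "anchor": "y2"},
    "yaxis2": {"domain": [0.5, 0.9], "anchor": "x2"},
}

def domain_ok(axis):
    """An axis domain must be an increasing pair inside [0, 1]."""
    lo, hi = axis["domain"]
    return 0.0 <= lo < hi <= 1.0

# Verify both inset axes occupy a valid slice of the figure.
inset_valid = all(domain_ok(layout[k]) for k in ("xaxis2", "yaxis2"))
```

Here the inset occupies the left 10-50% of the width and the upper-middle 50-90% of the height, leaving the rest of the figure to the main axes.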

Highcharts – Overview

Highcharts is a pure JavaScript based charting library meant to enhance web applications by adding interactive charting capability. It supports a wide range of charts. Charts are drawn using SVG in standard browsers like Chrome, Firefox, Safari, and Internet Explorer (IE). In legacy IE 6, VML is used to draw the graphics.

Features of the Highcharts Library

Let us now discuss a few important features of the Highcharts library.

Compatibility − Works seamlessly on all major browsers and mobile platforms like Android and iOS.
Multitouch Support − Supports multitouch on touch-screen platforms like Android and iOS. Ideal for iPhone/iPad and Android based smartphones/tablets.
Free to Use − Open source and free to use for non-commercial purposes.
Lightweight − The highcharts.js core library, at nearly 35KB, is extremely lightweight.
Simple Configurations − Uses JSON to define the various configurations of the charts; very easy to learn and use.
Dynamic − Allows you to modify a chart even after chart generation.
Multiple Axes − Not restricted to a single x and y axis; supports multiple axes on the charts.
Configurable Tooltips − A tooltip appears when a user hovers over any point on a chart. Highcharts provides an inbuilt tooltip formatter or a callback formatter to control the tooltip programmatically.
DateTime Support − Handles date-time specially, providing numerous inbuilt controls over date-wise categories.
Export − Export a chart to PDF/PNG/JPG/SVG format by enabling the export feature.
Print − Print a chart from the web page.
Zoomability − Supports zooming into a chart to view data more precisely.
External Data − Supports loading data dynamically from a server, with control over the data using callback functions.
Text Rotation − Supports rotation of labels in any direction.

Supported Chart Types

The Highcharts library provides the following types of charts −

1. Line Charts − Used to draw line/spline based charts.
2. Area Charts − Used to draw area-wise charts.
3. Pie Charts − Used to draw pie charts.
4. Scatter Charts − Used to draw scatter charts.
5. Bubble Charts − Used to draw bubble based charts.
6. Dynamic Charts − Used to draw dynamic charts that the user can modify.
7. Combinations − Used to draw combinations of a variety of charts.
8. 3D Charts − Used to draw 3D charts.
9. Angular Gauges − Used to draw speedometer type charts.
10. Heat Maps − Used to draw heat maps.
11. Tree Maps − Used to draw tree maps.

In our subsequent chapters, we will discuss each of the above chart types in detail with examples.

License

Highcharts is open source and free to use for non-commercial purposes. In order to use Highcharts in commercial projects, follow the link − License and Pricing
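As noted in the features list, Highcharts configurations are plain JSON. The snippet below sketches such a configuration as a Python dictionary and round-trips it through JSON to show it is pure data; the option names (chart.type, title.text, xAxis.categories, series) follow Highcharts' documented structure, while the chart contents themselves are made-up sample values.

```python
import json

# A minimal Highcharts line-chart configuration, expressed as the
# JSON-like structure the library consumes (sample data only).
config = {
    "chart": {"type": "line"},
    "title": {"text": "Monthly Temperature"},
    "xAxis": {"categories": ["Jan", "Feb", "Mar"]},
    "series": [
        {"name": "Tokyo", "data": [7.0, 6.9, 9.5]},
    ],
}

# Highcharts receives this structure as JSON; the round trip shows
# the configuration carries no code, only data.
restored = json.loads(json.dumps(config))
```

On the page, this dictionary would be passed (as JSON/a JavaScript object literal) to Highcharts' chart constructor; that call itself is omitted here since it lives in JavaScript.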

Hive – Quick Guide

Hive – Introduction

The term 'Big Data' is used for collections of large datasets that include huge volume, high velocity, and a variety of data that is increasing day by day. Using traditional data management systems, it is difficult to process Big Data. Therefore, the Apache Software Foundation introduced a framework called Hadoop to solve Big Data management and processing challenges.

Hadoop

Hadoop is an open-source framework to store and process Big Data in a distributed environment. It contains two modules, one is MapReduce and another is Hadoop Distributed File System (HDFS).

MapReduce − It is a parallel programming model for processing large amounts of structured, semi-structured, and unstructured data on large clusters of commodity hardware.
HDFS − Hadoop Distributed File System is a part of the Hadoop framework, used to store and process the datasets. It provides a fault-tolerant file system to run on commodity hardware.

The Hadoop ecosystem contains different sub-projects (tools) such as Sqoop, Pig, and Hive that are used to help the Hadoop modules.

Sqoop − It is used to import and export data between HDFS and RDBMS.
Pig − It is a procedural language platform used to develop scripts for MapReduce operations.
Hive − It is a platform used to develop SQL type scripts to do MapReduce operations.

Note − There are various ways to execute MapReduce operations:

The traditional approach, using a Java MapReduce program for structured, semi-structured, and unstructured data.
The scripting approach for MapReduce to process structured and semi-structured data using Pig.
The Hive Query Language (HiveQL or HQL) for MapReduce to process structured data using Hive.

What is Hive

Hive is a data warehouse infrastructure tool to process structured data in Hadoop. It resides on top of Hadoop to summarize Big Data, and makes querying and analyzing easy.
Initially Hive was developed by Facebook; later the Apache Software Foundation took it up and developed it further as open source under the name Apache Hive. It is used by different companies. For example, Amazon uses it in Amazon Elastic MapReduce.

Hive is not

A relational database
A design for OnLine Transaction Processing (OLTP)
A language for real-time queries and row-level updates

Features of Hive

It stores schema in a database and processed data into HDFS.
It is designed for OLAP.
It provides an SQL type language for querying, called HiveQL or HQL.
It is familiar, fast, scalable, and extensible.

Architecture of Hive

The following component diagram depicts the architecture of Hive. It contains different units, described below −

User Interface − Hive is a data warehouse infrastructure software that can create interaction between the user and HDFS. The user interfaces that Hive supports are Hive Web UI, Hive command line, and Hive HD Insight (on Windows server).
Meta Store − Hive chooses respective database servers to store the schema or metadata of tables, databases, columns in a table, their data types, and HDFS mapping.
HiveQL Process Engine − HiveQL is similar to SQL for querying schema information in the Metastore. It is one of the replacements for the traditional approach of writing a MapReduce program. Instead of writing a MapReduce program in Java, we can write a query for the MapReduce job and process it.
Execution Engine − The conjunction part of the HiveQL Process Engine and MapReduce is the Hive Execution Engine. The execution engine processes the query and generates results the same as MapReduce results. It uses the flavor of MapReduce.
HDFS or HBASE − Hadoop Distributed File System or HBASE are the data storage techniques used to store data in the file system.

Working of Hive

The following diagram depicts the workflow between Hive and Hadoop. The steps below define how Hive interacts with the Hadoop framework −
Step 1 − Execute Query − The Hive interface, such as the Command Line or Web UI, sends the query to the Driver (any database driver such as JDBC, ODBC, etc.) to execute.
Step 2 − Get Plan − The driver takes the help of the query compiler, which parses the query to check the syntax and build the query plan.
Step 3 − Get Metadata − The compiler sends a metadata request to the Metastore (any database).
Step 4 − Send Metadata − The Metastore sends the metadata as a response to the compiler.
Step 5 − Send Plan − The compiler checks the requirement and resends the plan to the driver. Up to here, the parsing and compiling of the query is complete.
Step 6 − Execute Plan − The driver sends the execute plan to the execution engine.
Step 7 − Execute Job − Internally, the process of executing the job is a MapReduce job. The execution engine sends the job to the JobTracker, which is in the Name node, and it assigns this job to the TaskTracker, which is in the Data node. Here, the query executes the MapReduce job.
Step 7.1 − Metadata Ops − Meanwhile, during execution, the execution engine can perform metadata operations with the Metastore.
Step 8 − Fetch Result − The execution engine receives the results from the Data nodes.
Step 9 − Send Results − The execution engine sends those resultant values to the driver.
Step 10 − Send Results − The driver sends the results to the Hive interfaces.

Hive – Installation

All Hadoop sub-projects such as Hive, Pig, and HBase support the Linux operating system. Therefore, you need to install any Linux flavored OS. The following simple steps are executed for Hive installation −

Step 1: Verifying JAVA Installation

Java must be installed on your system before installing Hive. Let us verify the java installation using the following command:

$ java -version

If Java is already installed on your system, you get to see the following response:

java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b13)
Java HotSpot(TM) Client VM (build 25.0-b02, mixed mode)

If java is not installed on your system, then follow the steps given below for installing java.
Installing Java

Step I: Download java (JDK <latest version> – X64.tar.gz) by visiting the following link: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html. Then jdk-7u71-linux-x64.tar.gz will be downloaded onto your system.

Step II: Generally you will find the downloaded java file in the Downloads folder. Verify it and extract the jdk-7u71-linux-x64.gz file using the following commands.

$ cd Downloads/
$ ls
jdk-7u71-linux-x64.gz
$ tar

Hive – Questions and Answers

Hive Questions and Answers has been designed with the special intention of helping students and professionals preparing for various certification exams and job interviews. This section provides a useful collection of sample interview questions and Multiple Choice Questions (MCQs) with their answers and appropriate explanations.

1. Hive Interview Questions − This section provides a huge collection of Hive interview questions with their answers hidden in a box, to challenge you to have a go at them before discovering the correct answer.
2. Hive Online Quiz − This section provides a great collection of Hive Multiple Choice Questions (MCQs) on a single page, along with their correct answers and explanations. If you select the right option, it turns green; else red.
3. Hive Online Test − If you are preparing to appear for a Java and Hive Framework related certification exam, then this section is a must for you. It simulates a real online test with a timer, challenging you to complete the test within a given time-frame. Finally, you can check your overall test score and how you fared among the other candidates who attempted this online test.
4. Hive Mock Test − This section provides various mock tests that you can download to your local machine and solve offline. Every mock test is supplied with a mock test key to let you verify the final score and grade yourself.

Hive – Useful Resources

The following resources contain additional information on Hive. Please use them to get more in-depth knowledge on this topic.

Useful Video Courses

Big Data Analytics Using Hive In Hadoop − 21 Lectures, 2 hours − Mukund Kumar Mishra
Advance Big Data Analytics using Hive & Sqoop − 51 Lectures, 4 hours − Navdeep Kaur
Apache Hive for Data Engineers (Hands On) − 92 Lectures, 6 hours − Bigdata Engineer
Apache Hive Interview Question and Answer (100+ FAQ) − 109 Lectures, 2 hours − Bigdata Engineer
Flutter Course – Master Flutter From Scratch and Create Platform Independent Apps − 55 Lectures, 9 hours − Code Studio
Learn Hive – Course for Beginners − 22 Lectures, 2.5 hours − Corporate Bridge Consultancy Private Limited

HiveQL – Select Joins

JOIN is a clause that is used for combining specific fields from two tables by using values common to each. It is used to combine records from two or more tables in the database.

Syntax

join_table:
   table_reference JOIN table_factor [join_condition]
 | table_reference {LEFT|RIGHT|FULL} [OUTER] JOIN table_reference join_condition
 | table_reference LEFT SEMI JOIN table_reference join_condition
 | table_reference CROSS JOIN table_reference [join_condition]

Example

We will use the following two tables in this chapter. Consider the following table named CUSTOMERS.

+----+----------+-----+-----------+----------+
| ID | NAME     | AGE | ADDRESS   | SALARY   |
+----+----------+-----+-----------+----------+
| 1  | Ramesh   | 32  | Ahmedabad | 2000.00  |
| 2  | Khilan   | 25  | Delhi     | 1500.00  |
| 3  | kaushik  | 23  | Kota      | 2000.00  |
| 4  | Chaitali | 25  | Mumbai    | 6500.00  |
| 5  | Hardik   | 27  | Bhopal    | 8500.00  |
| 6  | Komal    | 22  | MP        | 4500.00  |
| 7  | Muffy    | 24  | Indore    | 10000.00 |
+----+----------+-----+-----------+----------+

Consider another table ORDERS as follows:

+-----+---------------------+-------------+--------+
| OID | DATE                | CUSTOMER_ID | AMOUNT |
+-----+---------------------+-------------+--------+
| 102 | 2009-10-08 00:00:00 | 3           | 3000   |
| 100 | 2009-10-08 00:00:00 | 3           | 1500   |
| 101 | 2009-11-20 00:00:00 | 2           | 1560   |
| 103 | 2008-05-20 00:00:00 | 4           | 2060   |
+-----+---------------------+-------------+--------+

There are different types of joins, given as follows:

JOIN
LEFT OUTER JOIN
RIGHT OUTER JOIN
FULL OUTER JOIN

JOIN

The JOIN clause is used to combine and retrieve records from multiple tables. JOIN is the same as INNER JOIN in SQL. A JOIN condition is to be raised using the primary keys and foreign keys of the tables.
The following query executes JOIN on the CUSTOMERS and ORDERS tables, and retrieves the records:

hive> SELECT c.ID, c.NAME, c.AGE, o.AMOUNT FROM CUSTOMERS c JOIN ORDERS o ON (c.ID = o.CUSTOMER_ID);

On successful execution of the query, you get to see the following response:

+----+----------+-----+--------+
| ID | NAME     | AGE | AMOUNT |
+----+----------+-----+--------+
| 3  | kaushik  | 23  | 3000   |
| 3  | kaushik  | 23  | 1500   |
| 2  | Khilan   | 25  | 1560   |
| 4  | Chaitali | 25  | 2060   |
+----+----------+-----+--------+

LEFT OUTER JOIN

The HiveQL LEFT OUTER JOIN returns all the rows from the left table, even if there are no matches in the right table. This means that if the ON clause matches 0 (zero) records in the right table, the JOIN still returns a row in the result, but with NULL in each column from the right table. A LEFT JOIN returns all the values from the left table, plus the matched values from the right table, or NULL in case of no matching JOIN predicate.

The following query demonstrates LEFT OUTER JOIN between the CUSTOMERS and ORDERS tables:

hive> SELECT c.ID, c.NAME, o.AMOUNT, o.DATE FROM CUSTOMERS c LEFT OUTER JOIN ORDERS o ON (c.ID = o.CUSTOMER_ID);

On successful execution of the query, you get to see the following response:

+----+----------+--------+---------------------+
| ID | NAME     | AMOUNT | DATE                |
+----+----------+--------+---------------------+
| 1  | Ramesh   | NULL   | NULL                |
| 2  | Khilan   | 1560   | 2009-11-20 00:00:00 |
| 3  | kaushik  | 3000   | 2009-10-08 00:00:00 |
| 3  | kaushik  | 1500   | 2009-10-08 00:00:00 |
| 4  | Chaitali | 2060   | 2008-05-20 00:00:00 |
| 5  | Hardik   | NULL   | NULL                |
| 6  | Komal    | NULL   | NULL                |
| 7  | Muffy    | NULL   | NULL                |
+----+----------+--------+---------------------+

RIGHT OUTER JOIN

The HiveQL RIGHT OUTER JOIN returns all the rows from the right table, even if there are no matches in the left table. If the ON clause matches 0 (zero) records in the left table, the JOIN still returns a row in the result, but with NULL in each column from the left table.
A RIGHT JOIN returns all the values from the right table, plus the matched values from the left table, or NULL in case of no matching join predicate.

The following query demonstrates RIGHT OUTER JOIN between the CUSTOMERS and ORDERS tables:

hive> SELECT c.ID, c.NAME, o.AMOUNT, o.DATE FROM CUSTOMERS c RIGHT OUTER JOIN ORDERS o ON (c.ID = o.CUSTOMER_ID);

On successful execution of the query, you get to see the following response:

+----+----------+--------+---------------------+
| ID | NAME     | AMOUNT | DATE                |
+----+----------+--------+---------------------+
| 3  | kaushik  | 3000   | 2009-10-08 00:00:00 |
| 3  | kaushik  | 1500   | 2009-10-08 00:00:00 |
| 2  | Khilan   | 1560   | 2009-11-20 00:00:00 |
| 4  | Chaitali | 2060   | 2008-05-20 00:00:00 |
+----+----------+--------+---------------------+

FULL OUTER JOIN

The HiveQL FULL OUTER JOIN combines the records of both the left and the right outer tables that fulfil the JOIN condition. The joined table contains either all the records from both tables, or fills in NULL values for missing matches on either side.

The following query demonstrates FULL OUTER JOIN between the CUSTOMERS and ORDERS tables:

hive> SELECT c.ID, c.NAME, o.AMOUNT, o.DATE FROM CUSTOMERS c FULL OUTER JOIN ORDERS o ON (c.ID = o.CUSTOMER_ID);

On successful execution of the query, you get to see the following response:

+----+----------+--------+---------------------+
| ID | NAME     | AMOUNT | DATE                |
+----+----------+--------+---------------------+
| 1  | Ramesh   | NULL   | NULL                |
| 2  | Khilan   | 1560   | 2009-11-20 00:00:00 |
| 3  | kaushik  | 3000   | 2009-10-08 00:00:00 |
| 3  | kaushik  | 1500   | 2009-10-08 00:00:00 |
| 4  | Chaitali | 2060   | 2008-05-20 00:00:00 |
| 5  | Hardik   | NULL   | NULL                |
| 6  | Komal    | NULL   | NULL                |
| 7  | Muffy    | NULL   | NULL                |
| 3  | kaushik  | 3000   | 2009-10-08 00:00:00 |
| 3  | kaushik  | 1500   | 2009-10-08 00:00:00 |
| 2  | Khilan   | 1560   | 2009-11-20 00:00:00 |
| 4  | Chaitali | 2060
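The join semantics demonstrated in this chapter can be reproduced in plain Python over the sample rows, which makes the NULL-filling behaviour of the outer joins easy to see. This is a sketch of the semantics only, not of Hive's execution; the function names and the reduced (ID, NAME) / (OID, CUSTOMER_ID, AMOUNT) tuples are illustrative.

```python
# Sample rows from the CUSTOMERS and ORDERS tables above.
customers = [
    (1, "Ramesh"), (2, "Khilan"), (3, "kaushik"), (4, "Chaitali"),
    (5, "Hardik"), (6, "Komal"), (7, "Muffy"),
]
orders = [  # (OID, CUSTOMER_ID, AMOUNT)
    (102, 3, 3000), (100, 3, 1500), (101, 2, 1560), (103, 4, 2060),
]

def inner_join(custs, ords):
    """JOIN: keep only row pairs whose keys match on both sides."""
    return [(cid, name, amt)
            for (cid, name) in custs
            for (_, ocid, amt) in ords if cid == ocid]

def left_outer_join(custs, ords):
    """LEFT OUTER JOIN: every left row survives; rows with no
    matching order get None in place of the order columns (NULL)."""
    rows = []
    for cid, name in custs:
        matches = [(cid, name, amt)
                   for (_, ocid, amt) in ords if ocid == cid]
        rows.extend(matches if matches else [(cid, name, None)])
    return rows

joined = inner_join(customers, orders)       # 4 matched rows
left = left_outer_join(customers, orders)    # all 7 customers, 8 rows
```

A RIGHT OUTER JOIN is the same operation with the table roles swapped, and a FULL OUTER JOIN keeps the unmatched rows from both sides.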