Preparing Data

Logistic Regression in Python – Preparing Data

To create the classifier, we must prepare the data in the format that the classifier-building module expects. We prepare the data by doing one hot encoding.

Encoding Data

We will discuss shortly what we mean by encoding data. First, let us run the code. Run the following command in the code window.

In [10]: # creating one hot encoding of the categorical columns.
data = pd.get_dummies(df, columns=["job", "marital", "default", "housing", "loan", "poutcome"])

As the comment says, the above statement creates the one hot encoding of the data. Let us see what it has created. Examine the created data, called "data", by printing the head records.

In [11]: data.head()

To understand the above data, list out the column names by running the data.columns command as shown below.

In [12]: data.columns
Out[12]: Index(['y', 'job_admin.', 'job_blue-collar', 'job_entrepreneur', 'job_housemaid', 'job_management', 'job_retired', 'job_self-employed', 'job_services', 'job_student', 'job_technician', 'job_unemployed', 'job_unknown', 'marital_divorced', 'marital_married', 'marital_single', 'marital_unknown', 'default_no', 'default_unknown', 'default_yes', 'housing_no', 'housing_unknown', 'housing_yes', 'loan_no', 'loan_unknown', 'loan_yes', 'poutcome_failure', 'poutcome_nonexistent', 'poutcome_success'], dtype='object')

Now, let us see how the one hot encoding is done by the get_dummies command. The first column in the newly generated frame is the "y" field, which indicates whether this client has subscribed to a TD or not. Next, look at the columns which are encoded. The first encoded column is "job". In the data, the "job" column has many possible values, such as "admin.", "blue-collar", "entrepreneur", and so on. For each possible value, a new column is created, with the original column name added as a prefix.
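The effect of get_dummies can be seen on a tiny, made-up frame (the column values below are illustrative, not the real survey data):

```python
import pandas as pd

# A tiny illustrative frame; the real tutorial data has many more columns.
toy = pd.DataFrame({
    "y": [0, 1, 0],
    "job": ["admin.", "blue-collar", "admin."],
    "marital": ["married", "single", "divorced"],
})

# Each distinct categorical value becomes its own indicator column,
# named "<original column>_<value>"; "y" is left untouched.
encoded = pd.get_dummies(toy, columns=["job", "marital"])
print(sorted(encoded.columns))
```

Row 1 of this toy frame, a "blue-collar" single customer, gets a 1 in "job_blue-collar" and "marital_single" and a 0 everywhere else.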
Thus, we have columns called "job_admin.", "job_blue-collar", and so on. For each encoded field in the original data, you will find a set of columns added to the created frame, one for each possible value that the column takes in the original data. Carefully examine the list of columns to understand how the data is mapped to the new frame.

Understanding Data Mapping

To understand the generated data, let us print out the entire data using the data command.

In [13]: data

The output shows the first twelve rows; if you scroll down further, you will see that the mapping is done for all the rows. To understand the mapped data, examine the first row. It says that this customer has not subscribed to a TD, as indicated by the value in the "y" field. It also indicates that this customer is a "blue-collar" customer. Scrolling horizontally, you can see that he has "housing" and has taken no "loan". After this one hot encoding, we need some more data processing before we can start building our model.

Dropping the "unknown"

If we examine the columns in the mapped frame, we find a few columns ending with "unknown". For example, examine the column at index 12 with the following command:

In [14]: data.columns[12]
Out[14]: 'job_unknown'

This indicates that the job for the specified customer is unknown. There is no point in including such columns in our analysis and model building, so all columns with the "unknown" value should be dropped. This is done with the following command:

In [15]: data.drop(data.columns[[12, 16, 18, 21, 24]], axis=1, inplace=True)

Ensure that you specify the correct column numbers. In case of doubt, you can examine a column name at any time by specifying its index in the columns command, as described earlier.
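Hard-coded column positions break if the data changes. As a sketch of a more robust alternative (not what the tutorial itself does), the same columns can be selected by name suffix:

```python
import pandas as pd

# Illustrative frame with a couple of "unknown" indicator columns.
data = pd.DataFrame({
    "y": [0, 1],
    "job_admin.": [1, 0],
    "job_unknown": [0, 1],
    "housing_unknown": [1, 0],
})

# Collect every column whose name ends with "_unknown" and drop them all,
# so the numeric indices never need to be looked up by hand.
unknown_cols = [c for c in data.columns if c.endswith("_unknown")]
data = data.drop(columns=unknown_cols)
print(list(data.columns))
```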
After dropping the undesired columns, examine the final list of columns as shown below.

In [16]: data.columns
Out[16]: Index(['y', 'job_admin.', 'job_blue-collar', 'job_entrepreneur', 'job_housemaid', 'job_management', 'job_retired', 'job_self-employed', 'job_services', 'job_student', 'job_technician', 'job_unemployed', 'marital_divorced', 'marital_married', 'marital_single', 'default_no', 'default_yes', 'housing_no', 'housing_yes', 'loan_no', 'loan_yes', 'poutcome_failure', 'poutcome_nonexistent', 'poutcome_success'], dtype='object')

At this point, our data is ready for model building.
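With the encoded frame cleaned up, the usual next step, covered in a later chapter, is to separate features from the target. A sketch, using a hypothetical stand-in for the encoded bank data:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the cleaned, one-hot-encoded data.
data = pd.DataFrame({
    "y": [0, 1, 0, 1, 0, 1, 0, 1],
    "job_admin.": [1, 0, 1, 0, 1, 0, 1, 0],
    "housing_yes": [0, 1, 1, 0, 0, 1, 1, 0],
})

X = data.drop(columns=["y"])   # all encoded feature columns
y = data["y"]                  # the target: subscribed to a TD or not

# Hold out a quarter of the rows for testing the fitted model later.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
print(X_train.shape, X_test.shape)
```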

Getting Data

Logistic Regression in Python – Getting Data

The steps involved in getting data for performing logistic regression in Python are discussed in detail in this chapter.

Downloading the Dataset

If you have not already downloaded the UCI dataset mentioned earlier, download it now and open its Data Folder. Download the bank.zip file by clicking on the given link. We will use the bank.csv file for our model development. The bank-names.txt file contains the description of the dataset that you are going to need later. The bank-full.csv file contains a much larger dataset that you may use for more advanced development. Here we have included the bank.csv file in the downloadable source zip. This file contains comma-delimited fields. We have also made a few modifications to the file, so it is recommended that you use the file included in the project source zip for your learning.

Loading Data

To load the data from the csv file that you copied just now, type the following statement and run the code.

In [2]: df = pd.read_csv("bank.csv", header=0)

You can examine the loaded data by running the following statement:

In [3]: df.head()

This prints the first five rows of the loaded data. Examine the 21 columns present; we will use only a few of them for our model development. Next, we need to clean the data. The data may contain some rows with NaN. To eliminate such rows, use the following command:

In [4]: df = df.dropna()

Fortunately, bank.csv does not contain any rows with NaN, so this step is not strictly required in our case. However, in general it is difficult to discover such rows in a huge dataset, so it is always safer to run the above statement to clean the data.
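Before dropping rows, it can be useful to see whether any NaNs exist at all. A small sketch, with a made-up frame standing in for the loaded CSV:

```python
import pandas as pd
import numpy as np

# Stand-in for the loaded CSV; the second row has a missing value.
df = pd.DataFrame({
    "age": [30, np.nan, 45],
    "job": ["admin.", "services", "retired"],
})

print(df.isna().sum())   # per-column count of missing values
df = df.dropna()         # drop any row containing a NaN
print(len(df))           # rows remaining after cleaning
```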
Note − You can easily examine the data size at any point by using the following statement:

In [5]: print(df.shape)
(41188, 21)

The number of rows and columns is printed in the output, as shown in the second line above. The next thing to do is to examine the suitability of each column for the model that we are trying to build.
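Examining column suitability usually starts with the column types and the spread of the target. A sketch with a toy stand-in for the survey data:

```python
import pandas as pd

# Toy stand-in for the survey data.
df = pd.DataFrame({
    "age": [30, 41, 52, 35],
    "job": ["admin.", "services", "retired", "admin."],
    "y": ["no", "yes", "no", "no"],
})

print(df.shape)                 # (rows, columns)
print(df.dtypes)                # which columns are numeric vs. object
print(df["y"].value_counts())   # class balance of the target
```

Object-typed columns like "job" are the ones that will need one hot encoding before model building.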

Quick Guide

Logistic Regression in Python – Quick Guide

Logistic Regression in Python – Introduction

Logistic Regression is a statistical method for classifying objects. This chapter introduces logistic regression with the help of some examples.

Classification

To understand logistic regression, you should know what classification means. Consider the following examples − A doctor classifies a tumor as malignant or benign. A bank transaction may be fraudulent or genuine. For many years, humans have been performing such tasks, albeit in an error-prone way. The question is: can we train machines to do these tasks for us with better accuracy?

One example of a machine doing classification is the email client on your machine, which classifies every incoming mail as "spam" or "not spam" and does so with fairly high accuracy. The statistical technique of logistic regression has been successfully applied in email clients. In this case, we have trained our machine to solve a classification problem.

Logistic Regression is just one machine learning technique used for solving this kind of binary classification problem; several other machine learning techniques have been developed and are in practice for solving other kinds of problems. Note that in all the above examples, the outcome of the prediction has only two values – Yes or No. We call these classes, so we say that our classifier classifies objects into two classes. In technical terms, the outcome or target variable is dichotomous in nature.

There are other classification problems in which the output may be classified into more than two classes. For example, given a basket full of fruits, you are asked to separate fruits of different kinds. The basket may contain oranges, apples, mangoes, and so on, so when you separate out the fruits, you separate them into more than two classes.
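To make the two-class case concrete, here is a minimal, self-contained sketch (with made-up numbers, not the bank data used later) of training a classifier that separates two classes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A single illustrative feature vs. a yes/no outcome.
X = np.array([[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]])
y = np.array([0, 0, 0, 1, 1, 1])   # the two classes

# Fit the logistic regression classifier on the labelled examples.
clf = LogisticRegression()
clf.fit(X, y)

# Predict the class for one unseen example from each side.
print(clf.predict([[2.5], [9.5]]))
```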
This is a multiclass classification problem.

Logistic Regression in Python – Case Study

Consider that a bank approaches you to develop a machine learning application that will help them identify potential clients who would open a Term Deposit (also called a Fixed Deposit by some banks) with them. The bank regularly conducts a survey, by means of telephone calls or web forms, to collect information about potential clients. The survey is general in nature and is conducted over a very large audience, out of which many may not be interested in dealing with this bank at all. Of the rest, only a few may be interested in opening a Term Deposit; others may be interested in other facilities offered by the bank. So the survey is not necessarily conducted to identify the customers opening TDs. Your task is to identify all those customers with a high probability of opening a TD from the humongous survey data that the bank is going to share with you.

Fortunately, one such dataset is publicly available for those aspiring to develop machine learning models. It is available as a part of the UCI Machine Learning Repository and is widely used by students, educators, and researchers all over the world. The data can be downloaded from there. In the following chapters, we will perform the application development using this data; setting up the project and getting the data are covered in their own chapters.

Setting up a Project

Setting Up a Project

In this chapter, we will understand, in detail, the process involved in setting up a project to perform logistic regression in Python.

Installing Jupyter

We will be using Jupyter, one of the most widely used platforms for machine learning. If you do not have Jupyter installed on your machine, download it and follow the instructions on their site to install the platform. As the site suggests, you may prefer to use the Anaconda Distribution, which comes with Python and many commonly used Python packages for scientific computing and data science. This removes the need to install these packages individually.

After the successful installation of Jupyter, start a new notebook; your screen at this stage will be ready to accept your code. Now, change the name of the project from Untitled1 to "Logistic Regression" by clicking the title name and editing it.

Importing Python Packages

First, we will import the several Python packages that we will need in our code. For this purpose, type or cut-and-paste the following code into the code editor −

In [1]: # import statements
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

Run the code by clicking the Run button. If no errors are generated, you have successfully installed Jupyter and are now ready for the rest of the development. The first three import statements bring the pandas, numpy, and matplotlib.pyplot packages into our project. The next three statements import the specified modules from sklearn. Our next task is to download the data required for our project; we will learn this in the next chapter.
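As a quick sanity check that the environment is usable, you can print the installed versions (the exact version strings will vary by installation):

```python
import pandas as pd
import numpy as np
import sklearn
import matplotlib

# Versions differ across installations; this just confirms that the
# packages imported by the project are actually available.
for name, mod in [("pandas", pd), ("numpy", np),
                  ("scikit-learn", sklearn), ("matplotlib", matplotlib)]:
    print(name, mod.__version__)
```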