Gradient Descent is a popular optimization algorithm used to minimize the cost function of a machine learning model. It works by calculating the gradient of the cost function with respect to the model parameters and then iteratively adjusting the parameters in the opposite direction of the gradient, reducing the difference between the predicted output and the actual output.
Stochastic Gradient Descent is a variant of Gradient Descent that updates the parameters for each training example instead of updating them after evaluating the entire dataset. This means that instead of using the entire dataset to calculate the gradient of the cost function, SGD only uses a single training example. Each update is therefore much cheaper to compute and requires less memory, which often lets the algorithm make faster progress on large datasets, although the individual updates are noisier than full-batch updates.
Working of Stochastic Gradient Descent Algorithm
Stochastic Gradient Descent works by randomly selecting a single training example from the dataset and using it to update the model parameters. This process is repeated for a fixed number of epochs, or until the model converges to a minimum of the cost function.
Here's how the Stochastic Gradient Descent algorithm works −
- Initialize the model parameters to random values.
- For each epoch, randomly shuffle the training data.
- For each training example −
  - Calculate the gradient of the cost function with respect to the model parameters.
  - Update the model parameters in the opposite direction of the gradient.
- Repeat until convergence.
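The steps above can be sketched in plain NumPy. The linear model, squared-error cost, learning rate, and epoch count below are illustrative choices for demonstration, not part of the algorithm's definition −

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 plus a little noise (illustrative only)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.1, size=100)

# Step 1: initialize the model parameters to random values
w, b = rng.normal(), rng.normal()
learning_rate = 0.1

for epoch in range(50):
    # Step 2: randomly shuffle the training data each epoch
    order = rng.permutation(len(X))
    for i in order:
        # Step 3: gradient of the squared error on a single example
        error = (w * X[i, 0] + b) - y[i]
        grad_w = 2 * error * X[i, 0]
        grad_b = 2 * error
        # Step 4: update parameters in the opposite direction of the gradient
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

print(w, b)  # should end up near the true values 2 and 1
```

Because each update uses only one example, the parameters jitter around the minimum rather than settling exactly on it; in practice this is controlled by decaying the learning rate over time.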
The main difference between Stochastic Gradient Descent and regular Gradient Descent is the way that the gradient is calculated and the way that the model parameters are updated. In Stochastic Gradient Descent, the gradient is calculated using a single training example, while in Gradient Descent, the gradient is calculated using the entire dataset.
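This relationship between the two gradient calculations can be verified numerically. The snippet below is an illustrative check (not part of the tutorial's example): for a linear model with a squared-error cost, the full-batch gradient used by regular Gradient Descent is exactly the average of the per-example gradients that SGD samples one at a time −

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))   # 8 training examples, 3 features
y = rng.normal(size=8)
w = rng.normal(size=3)

# Regular Gradient Descent: one gradient computed over the whole dataset
errors = X @ w - y
batch_grad = 2 * X.T @ errors / len(X)

# SGD view: one gradient per training example, used one at a time
per_example_grads = np.array(
    [2 * (X[i] @ w - y[i]) * X[i] for i in range(len(X))]
)

# Averaging the per-example gradients recovers the batch gradient
print(np.allclose(per_example_grads.mean(axis=0), batch_grad))  # prints True
```

So SGD's single-example gradient is an unbiased estimate of the full gradient, which is why following it still moves the parameters toward the minimum on average.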
Implementation of Stochastic Gradient Descent in Python
Let's look at an example of how to implement Stochastic Gradient Descent in Python. We will use the scikit-learn library to apply the algorithm to the Iris dataset, a popular dataset for classification tasks. In this example we will predict the Iris flower species using only two of its features, sepal length and sepal width −
Example
# Import required libraries
import numpy as np
from sklearn import datasets, metrics
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Loading Iris flower dataset
iris = datasets.load_iris()
X_data, y_data = iris.data, iris.target

# Getting the Iris dataset with only the first two attributes
X, y = X_data[:, :2], y_data

# Split the dataset into a training and a testing set (20 percent)
X_train, X_test, y_train, y_test = train_test_split(
   X, y, test_size=0.20, random_state=1)

# Standardize the features
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# Create the linear model SGDClassifier
clfmodel_SGD = SGDClassifier(alpha=0.001, max_iter=200)

# Train the classifier using the fit() function
clfmodel_SGD.fit(X_train, y_train)

# Evaluate the result on the training set
y_train_pred = clfmodel_SGD.predict(X_train)
print("\nThe Accuracy of SGD classifier is:",
   metrics.accuracy_score(y_train, y_train_pred) * 100)
Output
When you run this code, it will produce the following output −
The Accuracy of SGD classifier is: 77.5