PyTorch – Convolutional Neural Network

Deep learning is a division of machine learning and is considered a crucial step taken by researchers in recent decades. Examples of deep learning implementations include applications like image recognition and speech recognition.

The two important types of deep neural networks are given below −

Convolutional Neural Networks
Recurrent Neural Networks

In this chapter, we will be focusing on the first type, i.e., Convolutional Neural Networks (CNN).

Convolutional Neural Networks

Convolutional Neural Networks are designed to process data through multiple layers of arrays. This type of neural network is used in applications like image recognition or face recognition. The primary difference between a CNN and an ordinary neural network is that a CNN takes its input as a two-dimensional array and operates directly on the images, rather than relying on the separate feature-extraction stage that other neural networks depend on.

CNNs are the dominant approach for solving recognition problems. Top companies like Google and Facebook have invested in research and development of recognition projects to get such tasks done with greater speed.

Every convolutional neural network includes three basic ideas −

Local receptive fields
Convolution
Pooling

Let us understand each of these terminologies in detail.

Local Receptive Fields

CNNs utilize the spatial correlations that exist within the input data. Each neuron in a layer connects to only a small region of the input neurons. This specific region is called the local receptive field. Each hidden neuron processes the input data inside its receptive field, unaffected by changes outside that boundary.

Convolution

Each connection learns a weight of the hidden neuron, with an associated connection, as the receptive field moves from one position to another across the input. Sliding the same set of weights over the input in this way is called "convolution". The mapping of connections from the input layer to the hidden feature map reuses the same weights at every position; these are defined as "shared weights", and the bias included is called the "shared bias".

Pooling

Convolutional neural networks use pooling layers, which are positioned immediately after the convolutional layers. A pooling layer takes the feature map coming out of the convolutional layer as input and prepares a condensed feature map, summarizing the neurons of the previous layer.

Implementation of PyTorch

The following steps are used to create a Convolutional Neural Network using PyTorch.

Step 1

Import the necessary packages for creating a simple neural network.

import torch
from torch.autograd import Variable
import torch.nn.functional as F

Step 2

Create a class with a batch representation of the convolutional neural network. Our batch shape for input x has the dimension (3, 32, 32).
class SimpleCNN(torch.nn.Module):
   def __init__(self):
      super(SimpleCNN, self).__init__()
      # Input channels = 3, output channels = 18
      self.conv1 = torch.nn.Conv2d(3, 18, kernel_size = 3, stride = 1, padding = 1)
      self.pool = torch.nn.MaxPool2d(kernel_size = 2, stride = 2, padding = 0)
      # 4608 input features, 64 output features (see sizing flow below)
      self.fc1 = torch.nn.Linear(18 * 16 * 16, 64)
      # 64 input features, 10 output features for our 10 defined classes
      self.fc2 = torch.nn.Linear(64, 10)

Step 3

Compute the activation of the first convolution: the size changes from (3, 32, 32) to (18, 32, 32). Pooling then changes the size from (18, 32, 32) to (18, 16, 16). Reshape the data to match the input dimension of the fully connected layer, so the size changes from (18, 16, 16) to (1, 4608). Recall that -1 infers this dimension from the other given dimension.

   def forward(self, x):
      x = F.relu(self.conv1(x))
      x = self.pool(x)
      x = x.view(-1, 18 * 16 * 16)
      x = F.relu(self.fc1(x))
      # Computes the second fully connected layer (activation applied later)
      # Size changes from (1, 64) to (1, 10)
      x = self.fc2(x)
      return(x)
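To confirm the sizing flow described above, here is a minimal sketch (not part of the original tutorial) that instantiates SimpleCNN and pushes one dummy batch through it −

model = SimpleCNN()
dummy = torch.randn(1, 3, 32, 32)   # one random RGB 32 x 32 image (illustrative input)
out = model(dummy)
print(out.shape)   # torch.Size([1, 10]) - one score per class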

PyTorch – Datasets

In this chapter, we will focus more on torchvision.datasets and its various types. PyTorch includes the following dataset loaders −

MNIST
COCO (Captioning and Detection)

Most of these datasets accept the two types of functions given below −

Transform − a function that takes in an image and returns a modified version of it. These can be composed together with transforms.Compose.

Target_transform − a function that takes the target and transforms it. For example, it takes in the caption string and returns a tensor of word indices.

MNIST

The following is the sample code for the MNIST dataset −

dset.MNIST(root, train = True, transform = None, target_transform = None, download = False)

The parameters are as follows −

root − root directory of the dataset where the processed data exists.

train − True = Training set, False = Test set

download − True = downloads the dataset from the internet and puts it in root.

COCO

This requires the COCO API to be installed. The following example demonstrates the COCO implementation of a dataset using PyTorch −

import torchvision.datasets as dset
import torchvision.transforms as transforms
cap = dset.CocoCaptions(root = 'dir where images are',
   annFile = 'json annotation file',
   transform = transforms.ToTensor())
print('Number of samples: ', len(cap))
img, target = cap[3]   # indexing the dataset returns an (image, captions) pair
print('Image Size: ', img.size())

The output achieved is as follows −

Number of samples: 82783
Image Size: (3L, 427L, 640L)
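As a further illustration, the following is a short sketch of loading MNIST with a composed transform and wrapping it in a DataLoader. The batch size and the normalization constants are illustrative assumptions, not part of the original text −

import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms

# Compose ToTensor with a normalization step (the mean/std values are the
# commonly used MNIST statistics, assumed here for illustration)
transform = transforms.Compose([
   transforms.ToTensor(),
   transforms.Normalize((0.1307,), (0.3081,))
])
mnist = dset.MNIST(root = './data', train = True, transform = transform, download = True)
loader = torch.utils.data.DataLoader(mnist, batch_size = 32, shuffle = True)
images, labels = next(iter(loader))
print(images.shape)   # torch.Size([32, 1, 28, 28])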

Training a ConvNet from Scratch

In this chapter, we will focus on creating a simple network from scratch, i.e., building a sample neural network using torch, with the weights initialized to random values.

Step 1

Create the necessary class with the respective parameters. The parameters include weights with random values.

import torch
import torch.nn as nn

class Neural_Network(nn.Module):
   def __init__(self, ):
      super(Neural_Network, self).__init__()
      self.inputSize = 2
      self.outputSize = 1
      self.hiddenSize = 3
      # weights
      self.W1 = torch.randn(self.inputSize, self.hiddenSize)   # 2 X 3 tensor
      self.W2 = torch.randn(self.hiddenSize, self.outputSize)  # 3 X 1 tensor

Step 2

Create the feed forward pass of the network with sigmoid activations, together with the backward pass that updates the weights manually.

   def forward(self, X):
      self.z = torch.matmul(X, self.W1)   # ".dot" does not broadcast in PyTorch
      self.z2 = self.sigmoid(self.z)      # activation function
      self.z3 = torch.matmul(self.z2, self.W2)
      o = self.sigmoid(self.z3)           # final activation function
      return o
   def sigmoid(self, s):
      return 1 / (1 + torch.exp(-s))
   def sigmoidPrime(self, s):
      # derivative of sigmoid
      return s * (1 - s)
   def backward(self, X, y, o):
      self.o_error = y - o   # error in output
      self.o_delta = self.o_error * self.sigmoidPrime(o)   # derivative of sigmoid applied to the error
      self.z2_error = torch.matmul(self.o_delta, torch.t(self.W2))
      self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2)
      self.W1 += torch.matmul(torch.t(X), self.z2_delta)
      self.W2 += torch.matmul(torch.t(self.z2), self.o_delta)

Step 3

Create the training and prediction methods as mentioned below −

   def train(self, X, y):
      # forward + backward pass for training
      o = self.forward(X)
      self.backward(X, y, o)
   def saveWeights(self, model):
      # Uses PyTorch internal storage functions
      torch.save(model, "NN")
      # you can reload the model with all the weights and so forth with:
      # torch.load("NN")
   def predict(self):
      print("Predicted data based on trained weights: ")
      print("Input (scaled): \n" + str(xPredicted))
      print("Output: \n" + str(self.forward(xPredicted)))
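The class above refers to xPredicted, which must be defined outside the class along with the training data. The following is a minimal sketch of wiring everything together; the tensors X, y and xPredicted, the scaling step, and the iteration count are illustrative assumptions, not part of the original text −

# Illustrative data; X, y and xPredicted are assumptions for this sketch
X = torch.tensor([[2.0, 9.0], [1.0, 5.0], [3.0, 6.0]])   # 3 X 2 tensor
y = torch.tensor([[92.0], [100.0], [89.0]])              # 3 X 1 tensor
xPredicted = torch.tensor([4.0, 8.0])                    # 1 X 2 tensor

# scale the inputs and targets to the 0-1 range
X = X / torch.max(X, 0)[0]
xPredicted = xPredicted / torch.max(xPredicted, 0)[0]
y = y / 100   # assuming a maximum target value of 100

NN = Neural_Network()
for i in range(1000):   # train the network 1000 times
   NN.train(X, y)
NN.saveWeights(NN)
NN.predict()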

PyTorch – Introduction to ConvNets

This chapter is all about building a CNN model from scratch. The network architecture will contain a combination of the following steps −

Conv2d
MaxPool2d
Rectified Linear Unit
View
Linear Layer

Training the Model

Training the model follows the same process as other image classification problems. The following code snippet completes the procedure of training a model on the provided dataset −

# Note − this snippet uses the older PyTorch-0.3-style API
# (Variable, volatile, size_average, .data[0]) used throughout this tutorial
def fit(epoch, model, data_loader, phase = 'training', volatile = False):
   if phase == 'training':
      model.train()
   if phase == 'validation':
      model.eval()
      volatile = True
   running_loss = 0.0
   running_correct = 0
   for batch_idx, (data, target) in enumerate(data_loader):
      if is_cuda:
         data, target = data.cuda(), target.cuda()
      data, target = Variable(data, volatile), Variable(target)
      if phase == 'training':
         optimizer.zero_grad()
      output = model(data)
      loss = F.nll_loss(output, target)
      running_loss += F.nll_loss(output, target, size_average = False).data[0]
      preds = output.data.max(dim = 1, keepdim = True)[1]
      running_correct += preds.eq(target.data.view_as(preds)).cpu().sum()
      if phase == 'training':
         loss.backward()
         optimizer.step()
   loss = running_loss/len(data_loader.dataset)
   accuracy = 100. * running_correct/len(data_loader.dataset)
   print(f'{phase} loss is {loss:{5}.{2}} and {phase} accuracy is {running_correct}/{len(data_loader.dataset)}{accuracy:{10}.{4}}')
   return loss, accuracy

The method includes different logic for training and validation. There are two primary reasons for using the different modes −

In train mode, dropout removes a percentage of values, which should not happen in the validation or testing phase.

In training mode, we calculate gradients and change the model's parameter values, but back propagation is not required during the testing or validation phases.
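A short sketch of how fit can be driven over multiple epochs is shown below; the names model, optimizer, is_cuda, train_loader and val_loader are assumed to be defined elsewhere and are not part of this chapter −

# Illustrative driver loop for fit(); all names below except fit are assumptions
train_losses, train_accuracy = [], []
val_losses, val_accuracy = [], []
for epoch in range(1, 20):
   epoch_loss, epoch_acc = fit(epoch, model, train_loader, phase = 'training')
   v_loss, v_acc = fit(epoch, model, val_loader, phase = 'validation')
   train_losses.append(epoch_loss)
   train_accuracy.append(epoch_acc)
   val_losses.append(v_loss)
   val_accuracy.append(v_acc)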

Implementing First Neural Network

PyTorch includes a special feature of creating and implementing neural networks. In this chapter, we will create a simple neural network with one hidden layer and a single output unit.

We shall use the following steps to implement the first neural network using PyTorch −

Step 1

First, we need to import the PyTorch library using the below command −

import torch
import torch.nn as nn

Step 2

Define all the layer sizes and the batch size to start executing the neural network as shown below −

# Defining input size, hidden layer size, output size and batch size respectively
n_in, n_h, n_out, batch_size = 10, 5, 1, 10

Step 3

As a neural network combines input data to produce the respective output data, we create the dummy input and target tensors as given below −

# Create dummy input and target tensors (data)
x = torch.randn(batch_size, n_in)
y = torch.tensor([[1.0], [0.0], [0.0], [1.0], [1.0], [1.0], [0.0], [0.0], [1.0], [1.0]])

Step 4

Create a sequential model with the help of in-built functions, using the below lines of code −

# Create a model
model = nn.Sequential(
   nn.Linear(n_in, n_h),
   nn.ReLU(),
   nn.Linear(n_h, n_out),
   nn.Sigmoid())

Step 5

Construct the loss function and the optimizer (Stochastic Gradient Descent in this case) as shown below −

# Construct the loss function
criterion = torch.nn.MSELoss()
# Construct the optimizer (Stochastic Gradient Descent in this case)
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)

Step 6

Implement the gradient descent model with an iterating loop using the given lines of code −

# Gradient Descent
for epoch in range(50):
   # Forward pass: Compute predicted y by passing x to the model
   y_pred = model(x)

   # Compute and print loss
   loss = criterion(y_pred, y)
   print('epoch: ', epoch, ' loss: ', loss.item())

   # Zero gradients, perform a backward pass, and update the weights.
   optimizer.zero_grad()

   # Perform a backward pass (backpropagation)
   loss.backward()

   # Update the parameters
   optimizer.step()

Step 7

The output generated is as follows −

epoch: 0 loss: 0.2545787990093231
epoch: 1 loss: 0.2545052170753479
epoch: 2 loss: 0.254431813955307
epoch: 3 loss: 0.25435858964920044
epoch: 4 loss: 0.2542854845523834
epoch: 5 loss: 0.25421255826950073
epoch: 6 loss: 0.25413978099823
epoch: 7 loss: 0.25406715273857117
epoch: 8 loss: 0.2539947032928467
epoch: 9 loss: 0.25392240285873413
epoch: 10 loss: 0.25385022163391113
epoch: 11 loss: 0.25377824902534485
epoch: 12 loss: 0.2537063956260681
epoch: 13 loss: 0.2536346912384033
epoch: 14 loss: 0.25356316566467285
epoch: 15 loss: 0.25349172949790955
epoch: 16 loss: 0.25342053174972534
epoch: 17 loss: 0.2533493936061859
epoch: 18 loss: 0.2532784342765808
epoch: 19 loss: 0.25320762395858765
epoch: 20 loss: 0.2531369626522064
epoch: 21 loss: 0.25306645035743713
epoch: 22 loss: 0.252996027469635
epoch: 23 loss: 0.2529257833957672
epoch: 24 loss: 0.25285571813583374
epoch: 25 loss: 0.25278574228286743
epoch: 26 loss: 0.25271597504615784
epoch: 27 loss: 0.25264623761177063
epoch: 28 loss: 0.25257670879364014
epoch: 29 loss: 0.2525072991847992
epoch: 30 loss: 0.2524380087852478
epoch: 31 loss: 0.2523689270019531
epoch: 32 loss: 0.25229987502098083
epoch: 33 loss: 0.25223103165626526
epoch: 34 loss: 0.25216227769851685
epoch: 35 loss: 0.252093642950058
epoch: 36 loss: 0.25202515721321106
epoch: 37 loss: 0.2519568204879761
epoch: 38 loss: 0.251888632774353
epoch: 39 loss: 0.25182053446769714
epoch: 40 loss: 0.2517525553703308
epoch: 41 loss: 0.2516847252845764
epoch: 42 loss: 0.2516169846057892
epoch: 43 loss: 0.2515493929386139
epoch: 44 loss: 0.25148195028305054
epoch: 45 loss: 0.25141456723213196
epoch: 46 loss: 0.2513473629951477
epoch: 47 loss: 0.2512802183628082
epoch: 48 loss: 0.2512132525444031
epoch: 49 loss: 0.2511464059352875
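Once the loop finishes, a minimal sketch (an addition for illustration, not part of the original text) of inspecting the trained model's predictions against the targets −

# Round the sigmoid outputs to 0/1 to compare against the binary targets
with torch.no_grad():
   y_pred = model(x)
print(torch.round(y_pred).flatten())   # hard predictions
print(y.flatten())                     # targets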

PyTorch – Useful Resources

The following resources contain additional information on PyTorch. Please use them to get more in-depth knowledge on this topic.

Useful Video Courses

Deeplearning: Convolutional Neural Network for Developers − 11 Lectures, 2.5 hours − Alexsandro Souza

Deep Learning Training Course − 81 Lectures, 10.5 hours − Corporate Bridge Consultancy Private Limited

Practical Deep Learning for Image Segmentation with Python and PyTorch − 34 Lectures, 3 hours − Mazhar Hussain

Mastering Recurrent Neural Networks, Theory and Practice in Python − 96 Lectures, 13.5 hours − AI Sciences

Reinforcement Learning & Deep RL Python (Theory & Projects) − 156 Lectures, 14 hours − AI Sciences

Deep Learning ANN Artificial Neural Networks with Python − 105 Lectures, 10 hours − AI Sciences

Machine Learning vs. Deep Learning

In this chapter, we will discuss the major differences between machine learning and deep learning concepts.

Amount of Data

Machine learning works with varying amounts of data and is mainly used for small amounts of data. Deep learning, on the other hand, works efficiently as the amount of data increases rapidly.

Hardware Dependencies

Deep learning algorithms are designed to depend heavily on high-end machines, in contrast to traditional machine learning algorithms, because deep learning algorithms perform a large number of matrix multiplication operations, which requires substantial hardware support.

Feature Engineering

Feature engineering is the process of putting domain knowledge into specified features to reduce the complexity of the data and make patterns more visible to learning algorithms. For instance, traditional machine learning focuses on pixels and other attributes needed for the feature engineering process. Deep learning algorithms, by contrast, learn high-level features from the data themselves, which reduces the task of developing a new feature extractor for every new problem.

PyTorch – Loading Data

PyTorch includes a package called torchvision which is used to load and prepare datasets. It includes two basic tools, namely Dataset and DataLoader, which help in the transformation and loading of a dataset.

Dataset

Dataset is used to read and transform a datapoint from the given dataset. The basic syntax is mentioned below −

import torchvision
trainset = torchvision.datasets.CIFAR10(root = './data', train = True,
   download = True, transform = transform)

DataLoader

DataLoader is used to shuffle and batch data. It can also load the data in parallel with multiprocessing workers.

trainloader = torch.utils.data.DataLoader(trainset, batch_size = 4,
   shuffle = True, num_workers = 2)

Example: Loading a CSV File

We use the Python package pandas to load the csv file. The original file has the following format: (image name, 68 landmarks − each landmark has an x, y coordinate).

import pandas as pd
landmarks_frame = pd.read_csv('faces/face_landmarks.csv')
n = 65
img_name = landmarks_frame.iloc[n, 0]
# .as_matrix() was removed in recent pandas; .to_numpy() is its replacement
landmarks = landmarks_frame.iloc[n, 1:].to_numpy()
landmarks = landmarks.astype('float').reshape(-1, 2)
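Putting Dataset and DataLoader together, the following is a minimal sketch; the transform pipeline supplied for the transform argument above is an assumption added for illustration −

import torch
import torchvision
import torchvision.transforms as transforms

# An illustrative transform: convert to tensor and normalize each RGB channel
transform = transforms.Compose([
   transforms.ToTensor(),
   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
trainset = torchvision.datasets.CIFAR10(root = './data', train = True,
   download = True, transform = transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size = 4,
   shuffle = True, num_workers = 2)

images, labels = next(iter(trainloader))
print(images.shape)   # torch.Size([4, 3, 32, 32])
print(labels)         # tensor with 4 class indices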

PyTorch – Installation

PyTorch is a popular deep learning framework. In this tutorial, we consider "Windows 10" as our operating system. The steps for a successful environment setup are as follows −

Step 1

The following link leads to a list of packages, which includes suitable packages for PyTorch.

https://drive.google.com/drive/folders/0B-X0-FlSGfCYdTNldW02UGl4MXM

All you need to do is download the respective packages and install them.

Step 2

Verify the installation of the PyTorch framework using the Anaconda framework. The following command is used to verify the same −

conda list

"conda list" shows the list of packages installed in the environment; PyTorch appears in this list if it has been installed successfully in our system.
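As an additional check (not part of the original text), the installation can also be verified directly from Python −

import torch

print(torch.__version__)            # the installed PyTorch version
print(torch.cuda.is_available())    # True if a CUDA-capable GPU can be used
x = torch.rand(2, 3)                # create a small random tensor
print(x)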

PyTorch – Home

PyTorch is an open source machine learning library for Python and is completely based on Torch. It is primarily used for applications such as natural language processing. PyTorch was developed by Facebook's artificial-intelligence research group, along with Uber's "Pyro" software for built-in probabilistic programming.

Audience

This tutorial has been prepared for Python developers who focus on research and development with machine learning algorithms along with natural language processing systems. The aim of this tutorial is to completely describe all the concepts of PyTorch with real-world examples.

Prerequisites

Before proceeding with this tutorial, you need knowledge of Python and the Anaconda framework (commands used in Anaconda). Having knowledge of artificial intelligence concepts will be an added advantage.