Neural Networks to Functional Blocks

Training a deep learning algorithm involves the following steps −

Building a data pipeline
Building a network architecture
Evaluating the architecture using a loss function
Optimizing the network architecture weights using an optimization algorithm

Training a specific deep learning algorithm is, in essence, the process of converting a neural network into a set of such functional blocks.

In this view, any deep learning algorithm involves getting the input data and building the respective architecture, which includes a bunch of layers embedded in it. The accuracy of the network is then evaluated using a loss function, and the weights of the neural network are optimized with respect to that loss.
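The following is a minimal sketch of how these four functional blocks typically fit together in PyTorch. The toy random dataset, the two-layer architecture, and the hyperparameter values are illustrative assumptions, not part of the original text.

import torch

# Block 1 - data pipeline: a toy dataset of random inputs and targets
x = torch.randn(64, 10)
y = torch.randn(64, 1)

# Block 2 - network architecture: a small two-layer model
model = torch.nn.Sequential(
   torch.nn.Linear(10, 32),
   torch.nn.ReLU(),
   torch.nn.Linear(32, 1)
)

# Block 3 - loss function used to evaluate the architecture
loss_fn = torch.nn.MSELoss()

# Block 4 - optimization algorithm used to update the weights
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)

for epoch in range(100):
   pred = model(x)
   loss = loss_fn(pred, y)
   optimizer.zero_grad()
   loss.backward()
   optimizer.step()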

PyTorch – Terminologies

In this chapter, we will discuss some of the most commonly used terms in PyTorch.

PyTorch NumPy
A PyTorch tensor is conceptually identical to a NumPy array: it is an n-dimensional array, and PyTorch provides many functions to operate on these tensors. PyTorch tensors can also utilize GPUs to accelerate their numeric computations. Tensors created in PyTorch can be used, for example, to fit a two-layer network to random data, with the user manually implementing the forward and backward passes through the network.

Variables and Autograd
When using autograd, the forward pass of your network defines a computational graph − nodes in the graph are Tensors, and edges are functions that produce output Tensors from input Tensors. PyTorch tensors can be wrapped as Variable objects, where a Variable represents a node in the computational graph.

Dynamic Graphs
Static graphs are nice because the user can optimize the graph up front. If programmers re-use the same graph over and over, this potentially costly up-front optimization pays off as the same graph is rerun repeatedly. The major difference between the two frameworks is that TensorFlow's computational graphs are static, while PyTorch uses dynamic computational graphs that are built on the fly as operations execute.

Optim Package
The optim package in PyTorch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms. It is accessed through a regular import statement (torch.optim).

Multiprocessing
torch.multiprocessing supports the same operations as Python's multiprocessing package, so tensors can be shared across multiple processes. When tensors are placed in a queue, their data is moved into shared memory and only a handle is sent to the other process.
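A minimal sketch tying these terms together is given below; the shapes, learning rate, and number of steps are illustrative assumptions, not part of the original text.

import torch

# Tensors: n-dimensional arrays, optionally placed on a GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(64, 100, device = device)
y = torch.randn(64, 1, device = device)

# A two-layer network; requires_grad marks the nodes of the computational graph
w1 = torch.randn(100, 50, device = device, requires_grad = True)
w2 = torch.randn(50, 1, device = device, requires_grad = True)

# The optim package abstracts the optimization algorithm
optimizer = torch.optim.Adam([w1, w2], lr = 1e-3)

for step in range(200):
   # Forward pass: each operation extends the graph on the fly (dynamic graph)
   y_pred = x.mm(w1).clamp(min = 0).mm(w2)
   loss = (y_pred - y).pow(2).mean()

   # Autograd: backward() walks the recorded graph and fills in the gradients
   optimizer.zero_grad()
   loss.backward()
   optimizer.step()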

PyTorch – Recurrent Neural Network

Recurrent neural networks are a type of deep learning algorithm that follows a sequential approach. In an ordinary neural network, we assume that each input and output is independent of all the others. Recurrent networks drop this assumption: they are called recurrent because they perform their mathematical computations in a sequential manner, completing one step after another while carrying information forward. The diagram below specifies the complete approach and working of recurrent neural networks −

In the diagram, c1, c2, c3 and x1 are the inputs, h1, h2 and h3 are the hidden values, and o1 is the respective output.

We will now focus on using PyTorch to model a sine wave with the help of a recurrent neural network. During training, we will feed the model one data point at a time. The input sequence x consists of 20 data points, and the target sequence is the input sequence shifted by one step.

Step 1
Import the necessary packages for implementing the recurrent neural network using the code below −

import torch
from torch.autograd import Variable
import numpy as np
import pylab as pl
import torch.nn.init as init

Step 2
Set the model hyperparameters. The size of the input layer is 7 (one input value concatenated with six context values); there are 6 context (hidden) neurons and 1 output neuron.

dtype = torch.FloatTensor
input_size, hidden_size, output_size = 7, 6, 1
epochs = 300
seq_length = 20
lr = 0.1

data_time_steps = np.linspace(2, 10, seq_length + 1)
data = np.sin(data_time_steps)
data.resize((seq_length + 1, 1))

x = Variable(torch.Tensor(data[:-1]).type(dtype), requires_grad = False)
y = Variable(torch.Tensor(data[1:]).type(dtype), requires_grad = False)

This generates the training data, where x is the input data sequence and y is the required target sequence.

Step 3
Weights are initialized in the recurrent neural network using a normal distribution with zero mean. w1 accepts the input variables and w2 produces the output, as shown below −

w1 = torch.FloatTensor(input_size, hidden_size).type(dtype)
init.normal(w1, 0.0, 0.4)
w1 = Variable(w1, requires_grad = True)
w2 = torch.FloatTensor(hidden_size, output_size).type(dtype)
init.normal(w2, 0.0, 0.3)
w2 = Variable(w2, requires_grad = True)

Step 4
Now, define a feed-forward function which uniquely defines the neural network. It concatenates the current input with the previous context state, passes the result through a tanh non-linearity, and returns both the output and the new context state.

def forward(input, context_state, w1, w2):
   xh = torch.cat((input, context_state), 1)
   context_state = torch.tanh(xh.mm(w1))
   out = context_state.mm(w2)
   return (out, context_state)

Step 5
The next step is the training procedure of the recurrent neural network's sine wave implementation. The outer loop iterates over the epochs and the inner loop iterates over the elements of the sequence. Here, we also compute the Mean Square Error (MSE), which is well suited to the prediction of continuous variables.
for i in range(epochs):
   total_loss = 0
   context_state = Variable(torch.zeros((1, hidden_size)).type(dtype), requires_grad = True)
   for j in range(x.size(0)):
      input = x[j:(j+1)]
      target = y[j:(j+1)]
      (pred, context_state) = forward(input, context_state, w1, w2)
      loss = (pred - target).pow(2).sum()/2
      total_loss += loss
      loss.backward()
      w1.data -= lr * w1.grad.data
      w2.data -= lr * w2.grad.data
      w1.grad.data.zero_()
      w2.grad.data.zero_()
      context_state = Variable(context_state.data)
   if i % 10 == 0:
      print("Epoch: {} loss {}".format(i, total_loss.data[0]))

context_state = Variable(torch.zeros((1, hidden_size)).type(dtype), requires_grad = False)
predictions = []

for i in range(x.size(0)):
   input = x[i:i+1]
   (pred, context_state) = forward(input, context_state, w1, w2)
   context_state = context_state
   predictions.append(pred.data.numpy().ravel()[0])

Step 6
Now, plot the sine wave the way it is needed.

pl.scatter(data_time_steps[:-1], x.data.numpy(), s = 90, label = "Actual")
pl.scatter(data_time_steps[1:], predictions, label = "Predicted")
pl.legend()
pl.show()

Output
The output of the above process is a scatter plot of the actual and predicted sine wave values.

PyTorch – Convolutional Neural Network

Deep learning is a division of machine learning and is considered a crucial step taken by researchers in recent decades. Examples of deep learning implementations include applications like image recognition and speech recognition.

The two important types of deep neural networks are given below −

Convolutional Neural Networks
Recurrent Neural Networks

In this chapter, we will be focusing on the first type, i.e., Convolutional Neural Networks (CNN).

Convolutional Neural Networks
Convolutional neural networks are designed to process data through multiple layers of arrays. This type of neural network is used in applications like image recognition or face recognition. The primary difference between a CNN and an ordinary neural network is that a CNN takes its input as a two-dimensional array and operates directly on the images, rather than relying on separately engineered features as other neural networks do.

The dominant approach of CNNs includes solutions for recognition problems. Top companies like Google and Facebook have invested in research and development of recognition projects to get such activities done with greater speed.

Every convolutional neural network includes three basic ideas −

Local receptive fields
Convolution
Pooling

Let us understand each of these terminologies in detail.

Local Receptive Fields
CNNs utilize the spatial correlation that exists within the input data. Each neuron in a given layer connects to a small region of the input neurons. This specific region is called the local receptive field. The corresponding hidden neuron processes the input data inside this field only, ignoring changes outside its boundary. The diagram representation of generating local receptive fields is mentioned below −

Convolution
Each connection learns a weight of the hidden neuron, with an associated connection from one layer to the next. The local receptive field is shifted across the input, one step at a time, and the same computation is repeated at each position. This process is called "convolution". The mapping of connections from the input layer to the hidden feature map uses the same set of weights, called "shared weights", and the bias included is called the "shared bias".

Pooling
Convolutional neural networks use pooling layers, which are positioned immediately after the convolutional layers. A pooling layer takes the feature map that comes out of the convolutional layer and prepares a condensed feature map.

Implementation of PyTorch
The following steps are used to create a convolutional neural network using PyTorch.

Step 1
Import the necessary packages for creating a simple neural network.

import torch
from torch.autograd import Variable
import torch.nn.functional as F

Step 2
Create a class with a batch representation of the convolutional neural network. Our batch shape for input x is of dimension (3, 32, 32).
class SimpleCNN(torch.nn.Module):
   def __init__(self):
      super(SimpleCNN, self).__init__()
      #Input channels = 3, output channels = 18
      self.conv1 = torch.nn.Conv2d(3, 18, kernel_size = 3, stride = 1, padding = 1)
      self.pool = torch.nn.MaxPool2d(kernel_size = 2, stride = 2, padding = 0)
      #4608 input features, 64 output features (see sizing flow below)
      self.fc1 = torch.nn.Linear(18 * 16 * 16, 64)
      #64 input features, 10 output features for our 10 defined classes
      self.fc2 = torch.nn.Linear(64, 10)

Step 3
In the forward pass, the activation of the first convolution changes the size from (3, 32, 32) to (18, 32, 32). Max pooling then changes the size from (18, 32, 32) to (18, 16, 16). The data is then reshaped for the input of the fully connected part of the net, which changes the size from (18, 16, 16) to (1, 4608); recall that -1 infers this dimension from the other given dimension.

   def forward(self, x):
      #Computes the activation of the first convolution
      #Size changes from (3, 32, 32) to (18, 32, 32)
      x = F.relu(self.conv1(x))

      #Size changes from (18, 32, 32) to (18, 16, 16)
      x = self.pool(x)

      #Reshape data to the input of the fully connected layer
      #Size changes from (18, 16, 16) to (1, 4608)
      x = x.view(-1, 18 * 16 * 16)

      #Computes the activation of the first fully connected layer
      #Size changes from (1, 4608) to (1, 64)
      x = F.relu(self.fc1(x))

      #Computes the second fully connected layer (activation applied later)
      #Size changes from (1, 64) to (1, 10)
      x = self.fc2(x)
      return(x)
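As a quick check, the sketch below instantiates SimpleCNN and runs a random batch through it; the batch size of 4 and the use of torch.randn are illustrative assumptions, not part of the original text.

model = SimpleCNN()

# A random batch of 4 RGB images of size 32 x 32
dummy_input = torch.randn(4, 3, 32, 32)
output = model(dummy_input)

# Expected shape: (4, 10) - one score per class for each image in the batch
print(output.size())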

PyTorch – Datasets

In this chapter, we will focus more on torchvision.datasets and its various types. PyTorch includes the following dataset loaders −

MNIST
COCO (Captioning and Detection)

A dataset generally accepts the two types of functions given below −

Transform − a function that takes in an image and returns a modified version of it. These can be composed together with transforms.
Target_transform − a function that takes the target and transforms it. For example, it can take in the caption string and return a tensor of word indices.

MNIST
The following is the sample code for the MNIST dataset −

dset.MNIST(root, train = True, transform = None, target_transform = None, download = False)

The parameters are as follows −

root − root directory of the dataset where the processed data exists.
train − True = training set, False = test set.
download − True = downloads the dataset from the internet and puts it in the root directory.

COCO
This requires the COCO API to be installed. The following example demonstrates the COCO captions dataset using PyTorch −

import torchvision.datasets as dset
import torchvision.transforms as transforms

cap = dset.CocoCaptions(root = 'dir where images are', annFile = 'json annotation file', transform = transforms.ToTensor())

print('Number of samples: ', len(cap))
img, target = cap[3]
print('Image Size: ', img.size())
print(target)

The output achieved is as follows −

Number of samples: 82783
Image Size: (3L, 427L, 640L)
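A minimal sketch of how the MNIST loader is typically combined with a transform and a DataLoader is given below; the batch size, the normalization values, and the ./data directory are illustrative assumptions.

import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms

# Convert images to tensors and normalize them (the values here are assumptions)
transform = transforms.Compose([
   transforms.ToTensor(),
   transforms.Normalize((0.5,), (0.5,))
])

trainset = dset.MNIST(root = './data', train = True, transform = transform, download = True)
trainloader = torch.utils.data.DataLoader(trainset, batch_size = 64, shuffle = True)

images, labels = next(iter(trainloader))
print(images.size())   # torch.Size([64, 1, 28, 28])
print(labels.size())   # torch.Size([64])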

Mathematical Building Blocks of Neural Networks

Mathematics is vital to any machine learning algorithm; several core mathematical concepts are needed to design an algorithm the right way. Let us now focus on the major mathematical concepts of machine learning which are important from a Natural Language Processing point of view −

Vectors
A vector is an array of numbers, either continuous or discrete, and the space consisting of vectors is called a vector space. The dimension of a vector space can be finite or infinite, but most machine learning and data science problems deal with fixed-length vectors. A vector representation is shown below −

temp = torch.FloatTensor([23, 24, 24.5, 26, 27.2, 23.0])
temp.size()

Output - torch.Size([6])

In machine learning, we deal with multidimensional data, so vectors become very crucial and are considered as the input features for any prediction problem.

Scalars
Scalars have zero dimensions and contain only one value. Older versions of PyTorch did not include a special tensor with zero dimensions; hence the declaration is made as follows −

x = torch.rand(10)
x.size()

Output - torch.Size([10])

Matrices
Most structured data is usually represented in the form of tables or matrices. We will use a dataset called Boston House Prices, which is readily available in the Python scikit-learn machine learning library.

from sklearn.datasets import load_boston
boston = load_boston()

boston_tensor = torch.from_numpy(boston.data)
boston_tensor.size()

Output: torch.Size([506, 13])

boston_tensor[:2]

Output:
Columns 0 to 7
0.0063 18.0000 2.3100 0.0000 0.5380 6.5750 65.2000 4.0900
0.0273 0.0000 7.0700 0.0000 0.4690 6.4210 78.9000 4.9671

Columns 8 to 12
1.0000 296.0000 15.3000 396.9000 4.9800
2.0000 242.0000 17.8000 396.9000 9.1400
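To see how these building blocks combine inside a single neural network layer, the short sketch below computes a weighted sum of a feature vector (a matrix-vector product plus a bias); the shapes and the random values are illustrative assumptions.

import torch

x = torch.rand(13)      # a vector of 13 input features (one housing row)
W = torch.rand(1, 13)   # a matrix of weights
b = torch.rand(1)       # a one-element bias

# A single linear unit: matrix-vector product plus bias
y = W.mv(x) + b
print(y.size())         # torch.Size([1])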

PyTorch – Neural Network Basics

The main principle of a neural network is a collection of basic elements, i.e., artificial neurons or perceptrons. A perceptron takes several inputs such as x1, x2, ..., xn and produces a binary output if the weighted sum exceeds the activation potential. The schematic representation of a sample neuron is mentioned below −

The output generated is the weighted sum of the inputs plus a bias, compared against the activation potential −

$$Output = \sum_j w_j x_j + Bias$$

The typical neural network architecture is described below −

The layers between input and output are referred to as hidden layers, and the density and type of connections between layers is the configuration. For example, a fully connected configuration has all the neurons of layer L connected to those of layer L+1. For a more pronounced localization, we can connect only a local neighbourhood, say nine neurons, to the next layer. A typical architecture of this kind has two hidden layers with dense connections.

The various types of neural networks are as follows −

Feedforward Neural Networks
Feedforward neural networks are the most basic members of the neural network family. Data in this type of network moves from the input layer to the output layer via the hidden layers present. The output of one layer serves as the input to the next layer, with no loops allowed in the network architecture.

Recurrent Neural Networks
Recurrent neural networks are used when the data pattern changes over time. In an RNN, the same layer is applied repeatedly to accept the input parameters and produce the output parameters at each step of the sequence.

Neural networks can be constructed using the torch.nn package. A simple feed-forward network takes the input, feeds it through several layers one after the other, and then finally gives the output.

With the help of PyTorch, we can use the following steps for the typical training procedure of a neural network (a minimal sketch follows the list) −

Define the neural network that has some learnable parameters (or weights).
Iterate over a dataset of inputs.
Process the input through the network.
Compute the loss (how far the output is from being correct).
Propagate gradients back into the network's parameters.
Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient
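The following is a minimal sketch of that training procedure using torch.nn; the network shape, the MSE loss, the random data, and the learning rate are illustrative assumptions rather than part of the original text.

import torch
import torch.nn as nn

# Step 1: define a network with learnable parameters
class Net(nn.Module):
   def __init__(self):
      super(Net, self).__init__()
      self.fc1 = nn.Linear(8, 16)
      self.fc2 = nn.Linear(16, 1)

   def forward(self, x):
      x = torch.relu(self.fc1(x))
      return self.fc2(x)

net = Net()
criterion = nn.MSELoss()
learning_rate = 0.01

# Step 2: iterate over a (toy, randomly generated) dataset of inputs
for step in range(100):
   inputs = torch.randn(16, 8)
   targets = torch.randn(16, 1)

   # Step 3: process the input through the network
   outputs = net(inputs)

   # Step 4: compute the loss
   loss = criterion(outputs, targets)

   # Step 5: propagate gradients back into the network's parameters
   net.zero_grad()
   loss.backward()

   # Step 6: update the weights using weight = weight - learning_rate * gradient
   with torch.no_grad():
      for param in net.parameters():
         param -= learning_rate * param.grad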

PyTorch – Introduction

PyTorch is defined as an open source machine learning library for Python. It is used for applications such as natural language processing. It was initially developed by Facebook's artificial-intelligence research group, and Uber's Pyro software for probabilistic programming is built on it.

Originally, PyTorch was developed by Hugh Perkins as a Python wrapper for the LuaJIT-based Torch framework. There are two PyTorch variants.

PyTorch redesigns and implements Torch in Python while sharing the same core C libraries for the backend code. PyTorch developers tuned this back-end code to run Python efficiently. They also kept the GPU-based hardware acceleration as well as the extensibility features that made Lua-based Torch popular.

Features
The major features of PyTorch are mentioned below −

Easy Interface − PyTorch offers an easy to use API; hence it is considered very simple to operate and runs on Python. Code execution in this framework is quite easy.
Python usage − This library is considered to be Pythonic and smoothly integrates with the Python data science stack. Thus, it can leverage all the services and functionalities offered by the Python environment.
Computational graphs − PyTorch provides an excellent platform which offers dynamic computational graphs, so a user can change them during runtime. This is highly useful when a developer has no idea how much memory is required for creating a neural network model.

PyTorch is known for having three levels of abstraction, as given below (a small sketch illustrating them follows at the end of this chapter) −

Tensor − Imperative n-dimensional array which runs on GPU.
Variable − Node in the computational graph. This stores data and gradient.
Module − Neural network layer which stores state or learnable weights.

Advantages of PyTorch
The following are the advantages of PyTorch −

It is easy to debug and understand the code.
It includes many layers, as in Torch.
It includes lots of loss functions.
It can be considered as a NumPy extension to GPUs.
It allows building networks whose structure is dependent on the computation itself.

TensorFlow vs. PyTorch
We shall look into the major differences between TensorFlow and PyTorch below −

PyTorch − PyTorch is closely related to the Lua-based Torch framework, which is actively used at Facebook.
TensorFlow − TensorFlow is developed by Google Brain and actively used at Google.

PyTorch − PyTorch is relatively new compared to other competitive technologies.
TensorFlow − TensorFlow is not new and is considered a go-to tool by many researchers and industry professionals.

PyTorch − PyTorch includes everything in an imperative and dynamic manner.
TensorFlow − TensorFlow includes static and dynamic graphs as a combination.

PyTorch − The computation graph in PyTorch is defined during runtime.
TensorFlow − TensorFlow does not include any run-time option.

PyTorch − PyTorch includes deployment features for mobile and embedded frameworks.
TensorFlow − TensorFlow works better for embedded frameworks.
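A minimal sketch of the three levels of abstraction is given below; the sizes and the single linear layer are illustrative assumptions. In recent PyTorch versions the Variable API has been merged into Tensor, so a tensor created with requires_grad = True plays the role of a Variable.

import torch
from torch.autograd import Variable

# Tensor: an imperative n-dimensional array, optionally on a GPU
t = torch.randn(4, 3)

# Variable: a node in the computational graph, storing data and gradient
v = Variable(torch.randn(4, 3), requires_grad = True)

# Module: a neural network layer that stores learnable weights
layer = torch.nn.Linear(3, 2)
out = layer(v)
out.sum().backward()
print(v.grad.size())   # torch.Size([4, 3])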

Machine Learning vs. Deep Learning

In this chapter, we will discuss the major differences between machine learning and deep learning.

Amount of Data
Traditional machine learning works with varying amounts of data and is mainly used when the amount of data is small. Deep learning, on the other hand, works efficiently as the amount of data increases rapidly. The following diagram depicts the performance of machine learning and deep learning with respect to the amount of data −

Hardware Dependencies
Deep learning algorithms are designed to depend heavily on high-end machines, in contrast to traditional machine learning algorithms. Deep learning algorithms perform a large number of matrix multiplication operations, which require substantial hardware support.

Feature Engineering
Feature engineering is the process of putting domain knowledge into specified features to reduce the complexity of the data and make patterns more visible to learning algorithms. Traditional machine learning models, for instance, rely on hand-crafted features such as pixel statistics and other attributes produced by the feature engineering process. Deep learning algorithms, in contrast, learn high-level features from the data themselves, which reduces the task of developing a new feature extractor for every new problem.

PyTorch – Loading Data

PyTorch includes a package called torchvision which is used to load and prepare datasets. It builds on two basic constructs, namely Dataset and DataLoader, which help in the transformation and loading of a dataset.

Dataset
A Dataset is used to read and transform a datapoint from the given dataset. The basic syntax is mentioned below −

trainset = torchvision.datasets.CIFAR10(root = './data', train = True, download = True, transform = transform)

DataLoader
A DataLoader is used to shuffle and batch data. It can also load the data in parallel with multiprocessing workers.

trainloader = torch.utils.data.DataLoader(trainset, batch_size = 4, shuffle = True, num_workers = 2)

Example: Loading a CSV File
We use the Python package pandas to load the csv file. The original file has the following format: (image name, 68 landmarks − each landmark has an x, y coordinate).

import pandas as pd

landmarks_frame = pd.read_csv('faces/face_landmarks.csv')

n = 65
img_name = landmarks_frame.iloc[n, 0]
landmarks = landmarks_frame.iloc[n, 1:].as_matrix()
landmarks = landmarks.astype('float').reshape(-1, 2)
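Building on the snippet above, the following is a minimal sketch of how such a CSV file could be wrapped in a custom Dataset and served through a DataLoader; the FaceLandmarksDataset class name, the csv_file path, and the batch size are illustrative assumptions, not part of the original text.

import torch
import pandas as pd
from torch.utils.data import Dataset, DataLoader

class FaceLandmarksDataset(Dataset):
   def __init__(self, csv_file):
      # Each row: image name followed by 68 (x, y) landmark coordinates
      self.landmarks_frame = pd.read_csv(csv_file)

   def __len__(self):
      return len(self.landmarks_frame)

   def __getitem__(self, idx):
      img_name = self.landmarks_frame.iloc[idx, 0]
      landmarks = self.landmarks_frame.iloc[idx, 1:].values.astype('float').reshape(-1, 2)
      return img_name, torch.from_numpy(landmarks)

dataset = FaceLandmarksDataset(csv_file = 'faces/face_landmarks.csv')
loader = DataLoader(dataset, batch_size = 4, shuffle = True, num_workers = 2)

for names, landmarks in loader:
   print(len(names), landmarks.size())   # e.g. 4 torch.Size([4, 68, 2])
   break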