PyTorch – Quick Guide

PyTorch – Introduction

PyTorch is an open source machine learning library for Python. It is used for applications such as natural language processing. It was initially developed by Facebook's artificial-intelligence research group, and Uber's Pyro software for probabilistic programming is built on it.

Originally, PyTorch was developed by Hugh Perkins as a Python wrapper for the LuaJIT-based Torch framework. There are two PyTorch variants. PyTorch redesigns and implements Torch in Python while sharing the same core C libraries for the backend code. PyTorch developers tuned this back-end code to run Python efficiently. They also kept the GPU-based hardware acceleration as well as the extensibility features that made Lua-based Torch popular.

Features

The major features of PyTorch are mentioned below −

Easy interface − PyTorch offers an easy-to-use API; hence it is considered simple to operate and runs on Python. Code execution in this framework is quite easy.

Python usage − This library is considered Pythonic and integrates smoothly with the Python data science stack. Thus, it can leverage all the services and functionalities offered by the Python environment.

Computational graphs − PyTorch provides dynamic computational graphs, which a user can change during runtime. This is highly useful when a developer does not know in advance how much memory a neural network model will require.

PyTorch is known for having three levels of abstraction, as given below −

Tensor − Imperative n-dimensional array which runs on the GPU.

Variable − Node in a computational graph. It stores data and gradient.

Module − Neural network layer which stores state or learnable weights.

Advantages of PyTorch

The following are the advantages of PyTorch −

It is easy to debug and understand the code.
It includes the same kinds of layers as Torch.
It includes a large set of loss functions.
It can be considered a NumPy extension to GPUs.
It allows building networks whose structure depends on the computation itself.

TensorFlow vs. PyTorch

We shall look into the major differences between TensorFlow and PyTorch below −

PyTorch is closely related to the Lua-based Torch framework, which is actively used at Facebook. TensorFlow is developed by Google Brain and actively used at Google.

PyTorch is relatively new compared to other competing technologies. TensorFlow is older and is considered a go-to tool by many researchers and industry professionals.

PyTorch handles everything in an imperative and dynamic manner. TensorFlow combines static and dynamic graphs.

The computation graph in PyTorch is defined during runtime. In TensorFlow, the graph is defined ahead of time, then compiled and run.

PyTorch's deployment features for mobile and embedded targets are limited. TensorFlow works better for embedded frameworks.

PyTorch – Installation

PyTorch is a popular deep learning framework. In this tutorial, we consider "Windows 10" as our operating system. The steps for a successful environment setup are as follows −

Step 1

The following link includes a list of packages, including suitable packages for PyTorch.

https://drive.google.com/drive/folders/0B-X0-FlSGfCYdTNldW02UGl4MXM

All you need to do is download the respective packages and install them.

Step 2

This step involves verifying the installation of the PyTorch framework using the Anaconda framework. The following command is used to verify the same −

conda list

conda list shows the frameworks which are installed. If PyTorch appears in the list, it has been installed successfully on the system.
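As an additional check, PyTorch can be verified from Python itself. This is a minimal sketch, assuming the installation above succeeded in the active environment −

import torch

# print the installed PyTorch version
print(torch.__version__)

# create a small random tensor to confirm the library works
print(torch.rand(2, 3))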
Mathematical Building Blocks of Neural Networks

Mathematics is vital in any machine learning algorithm; various core concepts of mathematics are needed to get the right algorithm designed in a specific way. Let us focus on the major mathematical concepts of machine learning which are important from a natural language processing point of view −

Vectors

A vector is an array of numbers, either continuous or discrete, and the space which consists of vectors is called a vector space. The dimension of a vector space can be either finite or infinite, but machine learning and data science problems usually deal with fixed-length vectors. A vector representation is shown below −

temp = torch.FloatTensor([23, 24, 24.5, 26, 27.2, 23.0])
temp.size()

Output −

torch.Size([6])

In machine learning, we deal with multidimensional data. So vectors become very crucial and are considered input features for any prediction problem.

Scalars

Scalars have zero dimensions and contain only one value. Early versions of PyTorch did not include a special tensor with zero dimensions, so a scalar was declared as a one-element tensor −

x = torch.rand(1)
x.size()

Output −

torch.Size([1])

Matrices

Most structured data is represented in the form of tables or matrices. We will use a dataset called Boston House Prices, which is readily available in the Python scikit-learn machine learning library −

# load the Boston dataset from scikit-learn
from sklearn.datasets import load_boston
import torch

boston = load_boston()
boston_tensor = torch.from_numpy(boston.data)
boston_tensor.size()

Output −

torch.Size([506, 13])

boston_tensor[:2]

Output −

Columns 0 to 7
0.0063  18.0000  2.3100  0.0000  0.5380  6.5750  65.2000  4.0900
0.0273   0.0000  7.0700  0.0000  0.4690  6.4210  78.9000  4.9671

Columns 8 to 12
1.0000  296.0000  15.3000  396.9000  4.9800
2.0000  242.0000  17.8000  396.9000  9.1400

PyTorch – Neural Network Basics

The main building block of a neural network is a collection of basic elements, i.e., artificial neurons or perceptrons. A perceptron takes several inputs such as x1, x2, ..., xn and produces a binary output if the weighted sum exceeds the activation potential. The output can be written as the weighted sum plus a bias −

$$\text{Output} = \sum_j w_j x_j + \text{Bias}$$

In a typical neural network architecture, the layers between input and output are referred to as hidden layers, and the density and type of connections between layers is the configuration. For example, a fully connected configuration has all the neurons of layer L connected to those of layer L+1. For a more pronounced localization, we can connect only a local neighbourhood, say nine neurons, to the next layer.
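The weighted-sum formula above maps directly to a few lines of PyTorch. The following is a minimal sketch; the inputs, weights and bias are made-up values for illustration −

import torch

x = torch.tensor([1.0, 0.0, 1.0])     # inputs x1..x3 (made-up values)
w = torch.tensor([0.4, -0.2, 0.6])    # weights w1..w3 (made-up values)
bias = torch.tensor(-0.5)

# weighted sum plus bias, as in the formula above
output = torch.sum(w * x) + bias

# binary output: fire if the sum exceeds the activation potential (0 here)
print(output.item(), (output > 0).item())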
PyTorch – Word Embedding

In this chapter, we will understand the famous word embedding model − word2vec. The word2vec model produces word embeddings with the help of a group of related models. The reference word2vec implementation is written in pure C code, and its gradients are computed manually. The implementation of a word2vec model in PyTorch is explained in the steps below −

Step 1

Import the libraries needed for word embedding as mentioned below −

import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

Step 2

Implement the skip-gram model of word embedding with a class called SkipGramModel. It includes emb_size, emb_dimension, u_embeddings and v_embeddings attributes.

class SkipGramModel(nn.Module):
   def __init__(self, emb_size, emb_dimension):
      super(SkipGramModel, self).__init__()
      self.emb_size = emb_size
      self.emb_dimension = emb_dimension
      # "u" embeddings represent centre words, "v" embeddings represent context words
      self.u_embeddings = nn.Embedding(emb_size, emb_dimension, sparse=True)
      self.v_embeddings = nn.Embedding(emb_size, emb_dimension, sparse=True)
      self.init_emb()

   def init_emb(self):
      initrange = 0.5 / self.emb_dimension
      self.u_embeddings.weight.data.uniform_(-initrange, initrange)
      self.v_embeddings.weight.data.uniform_(-0, 0)

   def forward(self, pos_u, pos_v, neg_v):
      emb_u = self.u_embeddings(pos_u)
      emb_v = self.v_embeddings(pos_v)
      # score for the positive (centre, context) pairs
      score = torch.mul(emb_u, emb_v).squeeze()
      score = torch.sum(score, dim=1)
      score = F.logsigmoid(score)
      # score for the negative samples
      neg_emb_v = self.v_embeddings(neg_v)
      neg_score = torch.bmm(neg_emb_v, emb_u.unsqueeze(2)).squeeze()
      neg_score = F.logsigmoid(-1 * neg_score)
      # negative-sampling loss
      return -1 * (torch.sum(score) + torch.sum(neg_score))

   def save_embedding(self, id2word, file_name, use_cuda):
      if use_cuda:
         embedding = self.u_embeddings.weight.cpu().data.numpy()
      else:
         embedding = self.u_embeddings.weight.data.numpy()
      fout = open(file_name, 'w')
      fout.write('%d %d\n' % (len(id2word), self.emb_dimension))
      for wid, w in id2word.items():
         e = embedding[wid]
         e = ' '.join(map(lambda x: str(x), e))
         fout.write('%s %s\n' % (w, e))

def test():
   model = SkipGramModel(100, 100)
   id2word = dict()
   for i in range(100):
      id2word[i] = str(i)
   # the original omitted the file name and use_cuda arguments; illustrative values supplied
   model.save_embedding(id2word, 'embedding.txt', use_cuda=False)

Step 3

Implement the main method to get the word embedding model displayed in a proper way.

if __name__ == '__main__':
   test()
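As a quick sanity check, the forward pass can be exercised with a handful of made-up index tensors, continuing from the snippet above. The batch size and negative-sample count here are arbitrary −

model = SkipGramModel(emb_size=100, emb_dimension=20)

pos_u = torch.LongTensor([0, 1])                   # centre-word indices (batch of 2)
pos_v = torch.LongTensor([2, 3])                   # matching context-word indices
neg_v = torch.LongTensor([[4, 5, 6], [7, 8, 9]])   # 3 negative samples per pair

loss = model(pos_u, pos_v, neg_v)
loss.backward()
print(loss.item())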
PyTorch – Sequence Processing with Convents

In this chapter, we propose an alternative approach which instead relies on a single 2D convolutional neural network across both sequences. Each layer of our network re-codes source tokens on the basis of the output sequence produced so far. Attention-like properties are therefore pervasive throughout the network.

Here, we will focus on creating a sequential network with specific pooling from the values included in the dataset. This process is also well suited to an image recognition module. The following steps, illustrated with Keras, are used to create a sequence processing model with convents −

Step 1

Import the necessary modules for sequence processing with convents.

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
import numpy as np

Step 2

Perform the necessary operations to create a pattern in the respective sequence using the code below −

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)

# scale pixel values to [0, 1]
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# the original omits the model definition; a standard MNIST convnet is assumed here
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(img_rows, img_cols, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

Step 3

Compile the model and fit it to the data as shown below −

model.compile(loss=keras.losses.categorical_crossentropy,
   optimizer=keras.optimizers.Adadelta(),
   metrics=['accuracy'])

model.fit(x_train, y_train,
   batch_size=batch_size,
   epochs=epochs,
   verbose=1,
   validation_data=(x_test, y_test))

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

The output shows the test loss and accuracy after training.
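For comparison, the same convolutional stack can be sketched in PyTorch itself. This is a hypothetical equivalent, not part of the original tutorial, and it expects inputs of shape (batch, 1, 28, 28) −

import torch.nn as nn

# 28x28 input -> two 3x3 convolutions -> 2x2 max-pool -> classifier
model = nn.Sequential(
   nn.Conv2d(1, 32, kernel_size=3), nn.ReLU(),
   nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
   nn.MaxPool2d(2),
   nn.Dropout(0.25),
   nn.Flatten(),
   nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
   nn.Dropout(0.5),
   nn.Linear(128, 10),
)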
PyTorch – Recursive Neural Networks

Deep neural networks have enabled breakthroughs in machine understanding of natural language. Most of these models treat language as a flat sequence of words or characters and use a kind of model referred to as a recurrent neural network, or RNN. However, many researchers have concluded that language is best understood as a hierarchical tree of phrases. Recursive neural networks take this tree structure into account. PyTorch's dynamic graphs make these complex natural language processing models a lot easier to build, and it remains a fully featured framework for all kinds of deep learning, with strong support for computer vision.

Features of Recursive Neural Networks

A recursive neural network applies the same set of weights over a graph-like (tree) structure. The nodes are traversed in topological order. This type of network is trained by the reverse mode of automatic differentiation. Natural language processing is a typical application of recursive neural networks; the recursive neural tensor network, for example, includes composition function nodes throughout the tree. An example of a recursive composition in PyTorch is sketched below.
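This is a minimal sketch, not taken from the original tutorial − a single shared linear layer composes two child vectors into a parent vector and is reused at every node of an assumed binary parse tree. All names and dimensions are made up for illustration −

import torch
import torch.nn as nn

class RecursiveCell(nn.Module):
   """Composes two child representations with one shared weight matrix."""
   def __init__(self, dim):
      super(RecursiveCell, self).__init__()
      self.compose = nn.Linear(2 * dim, dim)  # same weights at every tree node

   def forward(self, left, right):
      return torch.tanh(self.compose(torch.cat([left, right], dim=-1)))

dim = 8
cell = RecursiveCell(dim)

# leaf vectors for three words (random stand-ins for word embeddings)
w1, w2, w3 = torch.randn(dim), torch.randn(dim), torch.randn(dim)

# compose the tree ((w1 w2) w3) bottom-up
phrase = cell(cell(w1, w2), w3)
print(phrase.size())   # torch.Size([8])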
PyTorch – Visualization of Convents

In this chapter, we will focus on visualizing the data used by convents. The following steps are required to get a clear picture of the data behind a convolutional neural network.

Step 1

Import the necessary modules for the visualization of convolutional neural networks.

import os
import numpy as np
import pandas as pd
import pylab
from scipy.misc import imread   # removed in newer SciPy; imageio.imread is the modern equivalent
from sklearn.metrics import accuracy_score

import keras
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation, Input
from keras.layers import Conv2D, MaxPooling2D

import torch

Step 2

To control randomness in the training and testing data, seed a random number generator and load the respective datasets as given in the code below −

seed = 128
rng = np.random.RandomState(seed)

data_dir = "../../datasets/MNIST"

train = pd.read_csv('../../datasets/MNIST/train.csv')
test = pd.read_csv('../../datasets/MNIST/Test_fCbTej3.csv')

# pick a random training image
img_name = rng.choice(train.filename)
filepath = os.path.join(data_dir, 'train', img_name)

img = imread(filepath, flatten=True)

Step 3

Plot the selected image to inspect the training data using the code below −

pylab.imshow(img, cmap='gray')
pylab.axis('off')
pylab.show()

The selected digit is displayed as a grayscale image.
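If the CSV files above are not available, the same kind of inspection can be done with torchvision's built-in MNIST dataset. This is an alternative sketch, not part of the original tutorial −

import matplotlib.pyplot as plt
from torchvision import datasets

# download MNIST and take the first training image (a PIL image) with its label
mnist = datasets.MNIST(root='./data', train=True, download=True)
img, label = mnist[0]

plt.imshow(img, cmap='gray')
plt.title('label: %d' % label)
plt.axis('off')
plt.show()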
PyTorch – Introduction to Convents

Convents is all about building a CNN model from scratch. The network architecture will contain a combination of the following steps −

Conv2d
MaxPool2d
Rectified Linear Unit
View
Linear Layer

Training the Model

Training the model is the same process as for image classification problems. The following code snippet completes the procedure of training a model on the provided dataset (it uses the legacy Variable/volatile API from early PyTorch releases) −

def fit(epoch, model, data_loader, phase='training', volatile=False):
   if phase == 'training':
      model.train()
   if phase == 'validation':
      model.eval()
      volatile = True
   running_loss = 0.0
   running_correct = 0
   for batch_idx, (data, target) in enumerate(data_loader):
      if is_cuda:
         data, target = data.cuda(), target.cuda()
      data, target = Variable(data, volatile), Variable(target)
      if phase == 'training':
         optimizer.zero_grad()
      output = model(data)
      loss = F.nll_loss(output, target)
      running_loss += F.nll_loss(output, target, size_average=False).data[0]
      preds = output.data.max(dim=1, keepdim=True)[1]
      running_correct += preds.eq(target.data.view_as(preds)).cpu().sum()
      if phase == 'training':
         loss.backward()
         optimizer.step()
   loss = running_loss / len(data_loader.dataset)
   accuracy = 100. * running_correct / len(data_loader.dataset)
   print(f'{phase} loss is {loss:{5}.{2}} and {phase} accuracy is {running_correct}/{len(data_loader.dataset)} {accuracy:{10}.{4}}')
   return loss, accuracy

The method includes different logic for training and validation. There are two primary reasons for using different modes −

In train mode, dropout removes a percentage of values, which should not happen in the validation or testing phase.

In training mode, we calculate gradients and change the model's parameter values, but back propagation is not required during the testing or validation phases.
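A typical training loop driving this function might look as follows. The model, optimizer, is_cuda flag and data loaders are assumed to be defined elsewhere; the names here are illustrative −

# assumed to exist: model, optimizer, is_cuda, train_loader, valid_loader
train_losses, val_losses = [], []
for epoch in range(1, 5):
   epoch_loss, epoch_accuracy = fit(epoch, model, train_loader, phase='training')
   val_loss, val_accuracy = fit(epoch, model, valid_loader, phase='validation')
   train_losses.append(epoch_loss)
   val_losses.append(val_loss)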
PyTorch – Implementing First Neural Network

PyTorch includes a special feature of creating and implementing neural networks. In this chapter, we will create a simple neural network with one hidden layer and a single output unit. We shall use the following steps to implement the first neural network using PyTorch −

Step 1

First, we need to import the PyTorch library using the below command −

import torch
import torch.nn as nn

Step 2

Define all the layer sizes and the batch size to start executing the neural network as shown below −

# Defining input size, hidden layer size, output size and batch size respectively
n_in, n_h, n_out, batch_size = 10, 5, 1, 10

Step 3

As a neural network combines input data to produce output data, we create dummy input and target tensors as given below −

# Create dummy input and target tensors (data)
x = torch.randn(batch_size, n_in)
y = torch.tensor([[1.0], [0.0], [0.0], [1.0], [1.0], [1.0], [0.0], [0.0], [1.0], [1.0]])

Step 4

Create a sequential model with the help of in-built functions. Using the below lines of code, create a sequential model −

# Create a model
model = nn.Sequential(
   nn.Linear(n_in, n_h),
   nn.ReLU(),
   nn.Linear(n_h, n_out),
   nn.Sigmoid()
)

Step 5

Construct the loss function and the gradient descent optimizer as shown below −

# Construct the loss function
criterion = torch.nn.MSELoss()

# Construct the optimizer (Stochastic Gradient Descent in this case)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

Step 6

Implement gradient descent with an iterating loop using the given lines of code −

# Gradient Descent
for epoch in range(50):
   # Forward pass: Compute predicted y by passing x to the model
   y_pred = model(x)

   # Compute and print loss
   loss = criterion(y_pred, y)
   print('epoch: ', epoch, ' loss: ', loss.item())

   # Zero gradients, perform a backward pass, and update the weights.
   optimizer.zero_grad()

   # Perform a backward pass (backpropagation)
   loss.backward()

   # Update the parameters
   optimizer.step()

Step 7

The output generated is as follows −

epoch: 0 loss: 0.2545787990093231
epoch: 1 loss: 0.2545052170753479
epoch: 2 loss: 0.254431813955307
epoch: 3 loss: 0.25435858964920044
epoch: 4 loss: 0.2542854845523834
epoch: 5 loss: 0.25421255826950073
epoch: 6 loss: 0.25413978099823
epoch: 7 loss: 0.25406715273857117
epoch: 8 loss: 0.2539947032928467
epoch: 9 loss: 0.25392240285873413
epoch: 10 loss: 0.25385022163391113
epoch: 11 loss: 0.25377824902534485
epoch: 12 loss: 0.2537063956260681
epoch: 13 loss: 0.2536346912384033
epoch: 14 loss: 0.25356316566467285
epoch: 15 loss: 0.25349172949790955
epoch: 16 loss: 0.25342053174972534
epoch: 17 loss: 0.2533493936061859
epoch: 18 loss: 0.2532784342765808
epoch: 19 loss: 0.25320762395858765
epoch: 20 loss: 0.2531369626522064
epoch: 21 loss: 0.25306645035743713
epoch: 22 loss: 0.252996027469635
epoch: 23 loss: 0.2529257833957672
epoch: 24 loss: 0.25285571813583374
epoch: 25 loss: 0.25278574228286743
epoch: 26 loss: 0.25271597504615784
epoch: 27 loss: 0.25264623761177063
epoch: 28 loss: 0.25257670879364014
epoch: 29 loss: 0.2525072991847992
epoch: 30 loss: 0.2524380087852478
epoch: 31 loss: 0.2523689270019531
epoch: 32 loss: 0.25229987502098083
epoch: 33 loss: 0.25223103165626526
epoch: 34 loss: 0.25216227769851685
epoch: 35 loss: 0.252093642950058
epoch: 36 loss: 0.25202515721321106
epoch: 37 loss: 0.2519568204879761
epoch: 38 loss: 0.251888632774353
epoch: 39 loss: 0.25182053446769714
epoch: 40 loss: 0.2517525553703308
epoch: 41 loss: 0.2516847252845764
epoch: 42 loss: 0.2516169846057892
epoch: 43 loss: 0.2515493929386139
epoch: 44 loss: 0.25148195028305054
epoch: 45 loss: 0.25141456723213196
epoch: 46 loss: 0.2513473629951477
epoch: 47 loss: 0.2512802183628082
epoch: 48 loss: 0.2512132525444031
epoch: 49 loss: 0.2511464059352875
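Once training finishes, the model can be used for inference on new data. This is a short sketch using made-up inputs −

# Evaluate the trained model on fresh random inputs (illustrative only)
with torch.no_grad():
   test_x = torch.randn(3, n_in)
   predictions = model(test_x)
   print(predictions)   # sigmoid outputs between 0 and 1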
PyTorch – Useful Resources

The following resources contain additional information on PyTorch. Please use them to get more in-depth knowledge on this topic.

Useful Video Courses

Deeplearning: Convolutional Neural Network for Developers − Alexsandro Souza, 11 lectures, 2.5 hours
Deep Learning Training Course − Corporate Bridge Consultancy Private Limited, 81 lectures, 10.5 hours
Practical Deep Learning for Image Segmentation with Python and PyTorch − Mazhar Hussain, 34 lectures, 3 hours
Mastering Recurrent Neural Networks, Theory and Practice in Python − AI Sciences, 96 lectures, 13.5 hours
Reinforcement Learning & Deep RL Python (Theory & Projects) − AI Sciences, 156 lectures, 14 hours
Deep Learning ANN Artificial Neural Networks with Python − AI Sciences, 105 lectures, 10 hours
PyTorch – Neural Networks to Functional Blocks

Training a deep learning algorithm involves the following steps −

Building a data pipeline
Building a network architecture
Evaluating the architecture using a loss function
Optimizing the network weights using an optimization algorithm

Training a specific deep learning algorithm therefore amounts to converting a neural network into these functional blocks, as sketched below. Any deep learning algorithm involves getting the input data and building the respective architecture, which includes a bunch of layers embedded in it. Accuracy is then evaluated using a loss function, and the weights of the neural network are optimized with respect to that loss.
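The four blocks map one-to-one onto PyTorch components. The following is a minimal sketch with made-up data and layer sizes −

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# 1. Data pipeline (random stand-in data)
X, y = torch.randn(64, 4), torch.randn(64, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=16)

# 2. Network architecture
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# 3. Loss function
criterion = nn.MSELoss()

# 4. Optimization algorithm
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

# one training pass wiring the blocks together
for xb, yb in loader:
   optimizer.zero_grad()
   loss = criterion(net(xb), yb)
   loss.backward()
   optimizer.step()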