PyTorch – Installation

PyTorch is a popular deep learning framework. In this tutorial, we consider Windows 10 as our operating system. The steps for a successful environment setup are as follows −

Step 1

The following link contains a list of packages, including the ones suitable for PyTorch.

https://drive.google.com/drive/folders/0B-X0-FlSGfCYdTNldW02UGl4MXM

All you need to do is download the respective packages and install them.

Step 2

Verify the installation of the PyTorch framework using the Anaconda framework. The following command is used for this −

conda list

"conda list" shows the list of packages that are installed. If the installation was successful, PyTorch appears in this list.
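Beyond conda list, a quick way to confirm that PyTorch is importable is to run a short check from Python. This is a minimal sketch; the exact version string printed depends on the package you installed.

import torch

print(torch.__version__)          # prints the installed PyTorch version
print(torch.cuda.is_available())  # True only if a CUDA-capable GPU build is installed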
PyTorch – Home
PyTorch Tutorial

PyTorch is an open source machine learning library for Python and is completely based on Torch. It is primarily used for applications such as natural language processing. PyTorch is developed by Facebook's artificial-intelligence research group, and Uber's "Pyro" software for built-in probabilistic programming is based on it.

Audience

This tutorial has been prepared for Python developers who focus on research and development with machine learning algorithms and natural language processing systems. The aim of this tutorial is to describe all the concepts of PyTorch along with real-world examples.

Prerequisites

Before proceeding with this tutorial, you need knowledge of Python and the Anaconda framework (the commands used in Anaconda). Having knowledge of artificial intelligence concepts will be an added advantage.
Universal Workflow of Machine Learning

Artificial intelligence is trending to a great extent nowadays. Machine learning and deep learning constitute artificial intelligence: deep learning is a subfield of machine learning, which in turn is a subfield of artificial intelligence.

Machine Learning

Machine learning is the science of making computers act as per designed and programmed algorithms. Many researchers think machine learning is the best way to make progress towards human-level AI. It includes various types of learning patterns, such as −

Supervised learning
Unsupervised learning

Deep Learning

Deep learning is a subfield of machine learning in which the algorithms are inspired by the structure and function of the brain, called artificial neural networks. Deep learning has gained much importance through supervised learning, or learning from labelled data. Each algorithm in deep learning goes through the same process: it applies a hierarchy of nonlinear transformations to the input and uses the result to create a statistical model as output.

The machine learning process is defined using the following steps (a minimal code sketch follows the list) −

Identify relevant data sets and prepare them for analysis.
Choose the type of algorithm to use.
Build an analytical model based on the algorithm used.
Train the model on training data sets, revising it as needed.
Run the model on test data to generate test scores.
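The steps above can be illustrated with a short sketch. This example is an assumption for illustration only − it uses scikit-learn and its bundled iris data set, neither of which is part of this tutorial; any data set and algorithm could be substituted.

# A minimal sketch of the machine-learning workflow described above.
from sklearn.datasets import load_iris                 # step 1: a relevant data set
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression    # step 2: choice of algorithm

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=200)                # step 3: analytical model
model.fit(X_train, y_train)                             # step 4: train the model
print(model.score(X_test, y_test))                      # step 5: generate a test score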
PyTorch – Linear Regression
PyTorch – Linear Regression

In this chapter, we will focus on a basic example of a linear regression implementation using PyTorch. Linear regression is a supervised machine learning approach that models the relationship between a continuous dependent variable and one or more independent variables (logistic regression, by contrast, is used for classifying ordered discrete categories). Our goal in this chapter is to build a model with which a user can predict the relationship between a dependent variable and one or more independent variables.

The relationship between these two variables is considered linear, i.e., if y is the dependent variable and x is the independent variable, then the linear regression relationship of the two variables looks like the equation mentioned below −

Y = ax + b

Next, we shall design an algorithm for linear regression which allows us to understand two important concepts given below −

Cost Function
Gradient Descent Algorithms

Interpreting the result of Y = ax + b −

The value of a is the slope.
The value of b is the y-intercept.
r is the correlation coefficient.
r2 is the coefficient of determination.

The following steps are used for implementing linear regression using PyTorch −

Step 1

Import the necessary packages for creating a linear regression in PyTorch using the code below −

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import seaborn as sns
import pandas as pd
%matplotlib inline   # Jupyter magic for inline plots

sns.set_style(style = 'whitegrid')
plt.rcParams["patch.force_edgecolor"] = True

Step 2

Create a single training set from synthetic data as shown below −

m = 2   # slope
c = 3   # intercept

x = np.random.rand(256)
noise = np.random.randn(256) / 4
y = x * m + c + noise

df = pd.DataFrame()
df['x'] = x
df['y'] = y

sns.lmplot(x = 'x', y = 'y', data = df)

Step 3

Implement linear regression with the PyTorch libraries as mentioned below −

import torch
import torch.nn as nn
from torch.autograd import Variable

x_train = x.reshape(-1, 1).astype('float32')
y_train = y.reshape(-1, 1).astype('float32')

class LinearRegressionModel(nn.Module):
   def __init__(self, input_dim, output_dim):
      super(LinearRegressionModel, self).__init__()
      self.linear = nn.Linear(input_dim, output_dim)
   def forward(self, x):
      out = self.linear(x)
      return out

input_dim = x_train.shape[1]
output_dim = y_train.shape[1]
# input_dim and output_dim are both 1

model = LinearRegressionModel(input_dim, output_dim)
criterion = nn.MSELoss()
[w, b] = model.parameters()

def get_param_values():
   return w.data[0][0], b.data[0]

def plot_current_fit(title = ""):
   plt.figure(figsize = (12, 4))
   plt.title(title)
   plt.scatter(x, y, s = 8)
   w1 = w.data[0][0]
   b1 = b.data[0]
   x1 = np.array([0., 1.])
   y1 = x1 * w1 + b1
   plt.plot(x1, y1, 'r', label = 'Current Fit ({:.3f}, {:.3f})'.format(w1, b1))
   plt.xlabel('x (input)')
   plt.ylabel('y (target)')
   plt.legend()
   plt.show()

plot_current_fit('Before training')

The plot generated shows the scatter of the training data together with the initial, untrained fit line.
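The chapter stops at the untrained fit. As a continuation, the sketch below shows one common way to train this model with stochastic gradient descent; the optimizer choice, learning rate, and number of epochs are assumptions for illustration and are not part of the original tutorial.

# A minimal training loop for the model defined above (assumed hyperparameters).
optimiser = torch.optim.SGD(model.parameters(), lr = 0.1)

x_t = torch.from_numpy(x_train)   # convert the numpy training data to tensors
y_t = torch.from_numpy(y_train)

for epoch in range(500):
   optimiser.zero_grad()            # reset accumulated gradients
   outputs = model(x_t)             # forward pass
   loss = criterion(outputs, y_t)   # mean squared error against the targets
   loss.backward()                  # back-propagate
   optimiser.step()                 # update w and b

plot_current_fit('After training')  # the fitted line should now approximate y = 2x + 3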
Python Scatter Plots
Python – Scatter Plots

Scatter plots show many points plotted in the Cartesian plane. Each point represents the values of two variables: one variable is chosen for the horizontal axis and another for the vertical axis.

Drawing a Scatter Plot

A scatter plot can be created using the DataFrame.plot.scatter() method.

import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.rand(50, 4), columns=['a', 'b', 'c', 'd'])
df.plot.scatter(x='a', y='b')

The output is a scatter plot of column 'a' against column 'b'.
Python Linear Regression
Python – Linear Regression

In linear regression, two variables are related through an equation in which the exponent (power) of both variables is 1. Mathematically, a linear relationship represents a straight line when plotted as a graph. A non-linear relationship, where the exponent of any variable is not equal to 1, creates a curve.

The function in Seaborn for visualizing a linear regression relationship is regplot(). The example below shows its use.

import seaborn as sb
from matplotlib import pyplot as plt

df = sb.load_dataset('tips')
sb.regplot(x = "total_bill", y = "tip", data = df)
plt.show()

The output is a scatter plot of the tips data with the fitted regression line drawn through it.
Python Binomial Distribution
Python – Binomial Distribution

The binomial distribution model deals with finding the probability of success of an event that has only two possible outcomes in a series of experiments. For example, tossing a coin always gives a head or a tail. The probability of getting exactly 3 heads when a coin is tossed 10 times can be estimated with the binomial distribution.

We use the seaborn Python library, which has built-in functions to create such probability distribution graphs. The scipy package also helps in creating the binomial distribution.

from scipy.stats import binom
import seaborn as sb
from matplotlib import pyplot as plt

binom.rvs(size=10, n=20, p=0.8)   # draws 10 random values from a Binomial(n=20, p=0.8)

data_binom = binom.rvs(n=20, p=0.8, loc=0, size=1000)
ax = sb.distplot(data_binom, kde=True, color='blue', hist_kws={"linewidth": 25, "alpha": 1})
ax.set(xlabel='Binomial', ylabel='Frequency')
plt.show()

The output is a histogram of the sampled values with the kernel density estimate overlaid.
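The coin example in the text can also be computed directly. The small sketch below is an illustrative addition, not part of the original page, using scipy's binom.pmf to get the probability of exactly 3 heads in 10 tosses of a fair coin.

from scipy.stats import binom

# P(exactly 3 heads in 10 fair tosses) = C(10, 3) * 0.5**3 * 0.5**7
p_three_heads = binom.pmf(k=3, n=10, p=0.5)
print(round(p_three_heads, 4))   # approximately 0.1172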
Python – Bernoulli Distribution

The Bernoulli distribution is a special case of the binomial distribution where a single experiment is conducted, so that the number of observations is 1. The Bernoulli distribution therefore describes events having exactly two outcomes.

We use functions from the scipy.stats module to calculate the values for a Bernoulli distribution, and seaborn to plot a histogram over which the probability distribution curve is drawn.

from scipy.stats import bernoulli
import seaborn as sb
from matplotlib import pyplot as plt

data_bern = bernoulli.rvs(size=1000, p=0.6)
ax = sb.distplot(data_bern, kde=True, color='crimson', hist_kws={"linewidth": 25, "alpha": 1})
ax.set(xlabel='Bernoulli', ylabel='Frequency')
plt.show()

The output shows two bars, at 0 and 1, whose heights reflect the probabilities 1 − p and p.
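Since a Bernoulli trial has only the outcomes 0 and 1, its probability mass function is easy to check directly. The short sketch below is an illustrative addition, assuming scipy's bernoulli.pmf.

from scipy.stats import bernoulli

# For p = 0.6: P(X = 0) = 1 - p = 0.4 and P(X = 1) = p = 0.6
print(bernoulli.pmf(0, p=0.6))   # 0.4
print(bernoulli.pmf(1, p=0.6))   # 0.6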
Python Graph Data
Python – Graph Data

CSGraph stands for Compressed Sparse Graph, which focuses on fast graph algorithms based on sparse matrix representations.

Graph Representations

To begin with, let us understand what a sparse graph is and how it helps in graph representations.

What exactly is a sparse graph?

A graph is just a collection of nodes, which have links between them. Graphs can represent nearly anything − social network connections, where each node is a person and is connected to acquaintances; images, where each node is a pixel and is connected to neighbouring pixels; points in a high-dimensional distribution, where each node is connected to its nearest neighbours; and practically anything else you can imagine.

One very efficient way to represent graph data is in a sparse matrix: let us call it G. The matrix G is of size N x N, and G[i, j] gives the value of the connection between node 'i' and node 'j'. A sparse graph contains mostly zeros − that is, most nodes have only a few connections. This property turns out to be true in most cases of interest.

The creation of the sparse graph submodule was motivated by several algorithms used in scikit-learn, including the following −

Isomap − A manifold learning algorithm which requires finding the shortest paths in a graph.
Hierarchical clustering − A clustering algorithm based on a minimum spanning tree.
Spectral Decomposition − A projection algorithm based on sparse graph Laplacians.

As a concrete example, imagine an undirected graph with three nodes, where nodes 0 and 1 are connected by an edge of weight 2, and nodes 0 and 2 are connected by an edge of weight 1. We can construct the dense, masked and sparse representations as shown in the following example, keeping in mind that an undirected graph is represented by a symmetric matrix.

import numpy as np
from scipy.sparse import csr_matrix

G_dense = np.array([
   [0, 2, 1],
   [2, 0, 0],
   [1, 0, 0]
])
G_masked = np.ma.masked_values(G_dense, 0)
G_sparse = csr_matrix(G_dense)
print(G_sparse.data)

The above program will generate the following output.

[2 1 2 1]

Now consider a graph that is identical to the previous one, except that nodes 0 and 2 are connected by an edge of zero weight. In this case, the dense representation above leads to ambiguities − how can non-edges be represented if zero is a meaningful value? Either a masked or a sparse representation must be used to eliminate the ambiguity. Let us consider the following example, where non-edges are marked with infinity.

from scipy.sparse.csgraph import csgraph_from_dense

G2_data = np.array([
   [np.inf, 2, 0],
   [2, np.inf, np.inf],
   [0, np.inf, np.inf]
])
G2_sparse = csgraph_from_dense(G2_data, null_value=np.inf)
print(G2_sparse.data)

The above program will generate the following output.

[2. 0. 2. 0.]
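The Isomap motivation mentioned above relies on shortest-path computations, which the same submodule provides. The snippet below is an illustrative addition, not part of the original page, showing scipy.sparse.csgraph.shortest_path applied to the first example graph.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

# Same three-node undirected graph as above: 0-1 with weight 2, 0-2 with weight 1.
G_dense = np.array([
   [0, 2, 1],
   [2, 0, 0],
   [1, 0, 0]
])
G_sparse = csr_matrix(G_dense)

# Matrix of shortest-path distances between every pair of nodes.
dist = shortest_path(G_sparse, method='D', directed=False)
print(dist)
# The path from node 1 to node 2 goes through node 0, so d(1, 2) = 2 + 1 = 3.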
Python Data Science – Home
Python for Data Science Tutorial

Data is the new oil. This statement shows how every modern IT system is driven by capturing, storing and analysing data for various needs, be it making business decisions, forecasting weather, studying protein structures in biology or designing a marketing campaign. All of these scenarios involve a multidisciplinary approach that uses mathematical models, statistics, graphs, databases and, of course, the business or scientific logic behind the data analysis. So we need a programming language which can cater to all these diverse needs of data science. Python shines bright as one such language, as it has numerous libraries and built-in features which make it easy to tackle the needs of data science.

In this tutorial we will cover the various techniques used in data science using the Python programming language.

Audience

This tutorial is designed for Computer Science graduates as well as software professionals who are willing to learn data science in simple and easy steps using Python as a programming language.

Prerequisites

Before proceeding with this tutorial, you should have a basic knowledge of writing code in the Python programming language, using any Python IDE, and executing Python programs. If you are completely new to Python, then please refer to our Python tutorial to get a sound understanding of the language.

Execute Python Programs

Most of the examples given in this tutorial can be run directly with any Python interpreter, for example −

#!/usr/bin/python
print("Hello, Python!")