
TensorFlow – Linear Regression

In this chapter, we will focus on a basic example of linear regression implementation using TensorFlow. Linear regression is a supervised machine learning approach for modelling the relationship between continuous variables. Our goal in this chapter is to build a model with which a user can predict the value of a dependent variable from one or more independent (predictor) variables. The relationship between these variables is considered linear. If y is the dependent variable and x is the independent variable, then the linear regression relationship between the two variables will look like the following equation −

Y = Ax + b

We will design an algorithm for linear regression. This will allow us to understand the following two important concepts −

Cost function
Gradient descent algorithms

Steps to design an algorithm for linear regression

We will now learn about the steps that help in designing an algorithm for linear regression.

Step 1 − Import the modules necessary for plotting the linear regression results. We start by importing the Python libraries NumPy and Matplotlib.

import numpy as np
import matplotlib.pyplot as plt

Step 2 − Define the coefficients of the regression line and the number of points to generate.

number_of_points = 500
x_point = []
y_point = []
a = 0.22
b = 0.78

Step 3 − Iterate to generate 500 random points around the regression equation Y = 0.22x + 0.78 −

for i in range(number_of_points):
   x = np.random.normal(0.0, 0.5)
   y = a*x + b + np.random.normal(0.0, 0.1)
   x_point.append([x])
   y_point.append([y])

Step 4 − View the generated points using Matplotlib.

plt.plot(x_point, y_point, 'o', label = 'Input Data')
plt.legend()
plt.show()

The complete code for generating and plotting the input data is as follows −

import numpy as np
import matplotlib.pyplot as plt

number_of_points = 500
x_point = []
y_point = []
a = 0.22
b = 0.78

for i in range(number_of_points):
   x = np.random.normal(0.0, 0.5)
   y = a*x + b + np.random.normal(0.0, 0.1)
   x_point.append([x])
   y_point.append([y])

plt.plot(x_point, y_point, 'o', label = 'Input Data')
plt.legend()
plt.show()

The points generated this way are taken as the input data for the regression.
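The code above only generates and plots the data; it does not yet estimate A and b. The following is a minimal sketch, assuming TensorFlow 1.x as used throughout this tutorial, of how the two concepts named earlier fit together: a mean-squared-error cost function minimized by gradient descent. The learning rate of 0.5 and the 20 training steps are illustrative choices, not values from the original text.

import numpy as np
import tensorflow as tf

# Regenerate the training data from Step 3 as flat float32 arrays
x_data = np.random.normal(0.0, 0.5, 500).astype(np.float32)
y_data = (0.22 * x_data + 0.78 + np.random.normal(0.0, 0.1, 500)).astype(np.float32)

# Model parameters to be learned
A = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = A * x_data + b

# Cost function: mean squared error between prediction and data
cost = tf.reduce_mean(tf.square(y - y_data))

# Gradient descent adjusts A and b to minimize the cost
# (learning rate 0.5 is an illustrative choice)
train = tf.train.GradientDescentOptimizer(0.5).minimize(cost)

with tf.Session() as sess:
   sess.run(tf.global_variables_initializer())
   for step in range(20):
      sess.run(train)
   print(sess.run(A), sess.run(b))   # should approach 0.22 and 0.78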


TensorFlow – Recurrent Neural Networks

Recurrent neural networks are a type of deep-learning-oriented algorithm which follows a sequential approach. In traditional neural networks, we assume that every input and output is independent of all the others. These neural networks are called recurrent because they perform mathematical computations in a sequential manner, feeding the result of one step into the next.

Consider the following steps to train a recurrent neural network −

Step 1 − Input a specific example from the dataset.
Step 2 − The network takes the example and computes some calculations using randomly initialized variables.
Step 3 − A predicted result is then computed.
Step 4 − Comparing the result produced with the expected value yields an error.
Step 5 − To trace the error, it is propagated back along the same path, and the variables are adjusted.
Step 6 − Steps 1 to 5 are repeated until we are confident that the variables used to produce the output are defined properly.
Step 7 − A systematic prediction is made by applying these variables to new, unseen input.

Recurrent Neural Network Implementation with TensorFlow

In this section, we will learn how to implement a recurrent neural network with TensorFlow.

Step 1 − TensorFlow includes various libraries for the specific implementation of the recurrent neural network module.

#Import necessary modules
from __future__ import print_function

import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)

As mentioned above, the libraries help in defining the input data, which forms the primary part of the recurrent neural network implementation.

Step 2 − Our primary motive is to classify the images using a recurrent neural network, where we consider every image row as a sequence of pixels. The MNIST image shape is 28*28 px, so we will handle 28 sequences of 28 steps for each sample. We will define the input parameters to get the sequential pattern done, along with the training parameters that the training loop below refers to; the training parameter values here are typical choices, as the original text does not define them.

# Training parameters (typical values; not defined in the original text)
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10

# Network parameters
n_input = 28 # MNIST data input with img shape 28*28
n_steps = 28
n_hidden = 128
n_classes = 10

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

weights = {
   'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
   'out': tf.Variable(tf.random_normal([n_classes]))
}

Step 3 − Compute the results using the function defined below to get the best results. Here, each data shape is compared with the current input shape, and the results are computed to maintain the accuracy rate.
def RNN(x, weights, biases):
   x = tf.unstack(x, n_steps, 1)

   # Define a lstm cell with tensorflow
   lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias = 1.0)

   # Get lstm cell output
   outputs, states = rnn.static_rnn(lstm_cell, x, dtype = tf.float32)

   # Linear activation, using rnn inner loop last output
   return tf.matmul(outputs[-1], weights['out']) + biases['out']

pred = RNN(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = pred, labels = y))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()

Step 4 − In this step, we will launch the graph to get the computational results. This also helps in calculating the accuracy of the test results.

with tf.Session() as sess:
   sess.run(init)
   step = 1

   # Keep training until reach max iterations
   while step * batch_size < training_iters:
      batch_x, batch_y = mnist.train.next_batch(batch_size)
      batch_x = batch_x.reshape((batch_size, n_steps, n_input))
      sess.run(optimizer, feed_dict = {x: batch_x, y: batch_y})

      if step % display_step == 0:
         # Calculate batch accuracy
         acc = sess.run(accuracy, feed_dict = {x: batch_x, y: batch_y})
         # Calculate batch loss
         loss = sess.run(cost, feed_dict = {x: batch_x, y: batch_y})
         print("Iter " + str(step*batch_size) + ", Minibatch Loss= " +
            "{:.6f}".format(loss) + ", Training Accuracy= " + "{:.5f}".format(acc))
      step += 1
   print("Optimization Finished!")

   test_len = 128
   test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
   test_label = mnist.test.labels[:test_len]
   print("Testing Accuracy:", sess.run(accuracy, feed_dict = {x: test_data, y: test_label}))

The output generated shows the minibatch loss and training accuracy at each display step, followed by the final testing accuracy.
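To see what tf.unstack does inside RNN(), consider the following small sketch, assuming TensorFlow 1.x; the dummy batch size of 32 is an illustrative value. The batch of images becomes a Python list of 28 tensors, one per image row, which is the sequence format static_rnn expects.

import tensorflow as tf

x = tf.zeros([32, 28, 28])     # a dummy batch: 32 images of 28 rows * 28 pixels
steps = tf.unstack(x, 28, 1)   # split along axis 1, one tensor per row

print(len(steps))              # 28 -> one tensor per time step
print(steps[0].get_shape())    # (32, 28) -> one row for every image in the batch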


TensorFlow – Mathematical Foundations

It is important to understand the mathematical concepts needed for TensorFlow before creating a basic application in TensorFlow. Mathematics is considered the heart of any machine learning algorithm: it is with the help of core mathematical concepts that a solution for a specific machine learning algorithm is defined.

Vector

An array of numbers, either continuous or discrete, is defined as a vector. Machine learning algorithms deal with fixed-length vectors for better output generation. Since machine learning algorithms deal with multidimensional data, vectors play a crucial role.

Scalar

A scalar can be thought of as a vector with a single component. Scalars include only magnitude and no direction; with scalars, we are only concerned with the magnitude. Examples of scalars include the weight and height parameters of children.

Matrix

A matrix can be defined as a two-dimensional array arranged in the format of rows and columns. The size of a matrix is defined by its row length and column length. A matrix with "m" rows and "n" columns is specified as an "m*n matrix", which also defines its size.

Mathematical Computations

In this section, we will learn about the different mathematical computations in TensorFlow.

Addition of matrices

Addition of two or more matrices is possible if the matrices are of the same dimension. Addition implies adding each element as per the given position. Consider the following example to understand how addition of matrices works −

$$Example:\:A=\begin{bmatrix}1 & 2 \\3 & 4 \end{bmatrix}\:B=\begin{bmatrix}5 & 6 \\7 & 8 \end{bmatrix}\:then\:A+B=\begin{bmatrix}1+5 & 2+6 \\3+7 & 4+8 \end{bmatrix}=\begin{bmatrix}6 & 8 \\10 & 12 \end{bmatrix}$$

Subtraction of matrices

The subtraction of matrices operates in a similar fashion to the addition of two matrices. The user can subtract two matrices provided the dimensions are equal.

$$Example:\:A=\begin{bmatrix}1 & 2 \\3 & 4 \end{bmatrix}\:B=\begin{bmatrix}5 & 6 \\7 & 8 \end{bmatrix}\:then\:A-B=\begin{bmatrix}1-5 & 2-6 \\3-7 & 4-8 \end{bmatrix}=\begin{bmatrix}-4 & -4 \\-4 & -4 \end{bmatrix}$$

Multiplication of matrices

For two matrices A (m*n) and B (p*q) to be multipliable, n should be equal to p. The resulting matrix is C (m*q).

$$A=\begin{bmatrix}1 & 2 \\3 & 4 \end{bmatrix}\:B=\begin{bmatrix}5 & 6 \\7 & 8 \end{bmatrix}$$

$$c_{11}=\begin{bmatrix}1 & 2 \end{bmatrix}\begin{bmatrix}5 \\7 \end{bmatrix}=1\times5+2\times7=19\:\:c_{12}=\begin{bmatrix}1 & 2 \end{bmatrix}\begin{bmatrix}6 \\8 \end{bmatrix}=1\times6+2\times8=22$$

$$c_{21}=\begin{bmatrix}3 & 4 \end{bmatrix}\begin{bmatrix}5 \\7 \end{bmatrix}=3\times5+4\times7=43\:\:c_{22}=\begin{bmatrix}3 & 4 \end{bmatrix}\begin{bmatrix}6 \\8 \end{bmatrix}=3\times6+4\times8=50$$

$$C=\begin{bmatrix}c_{11} & c_{12} \\c_{21} & c_{22} \end{bmatrix}=\begin{bmatrix}19 & 22 \\43 & 50 \end{bmatrix}$$

Transpose of matrix

The transpose of a matrix A (m*n) is generally represented by $A^{T}$ (n*m) and is obtained by writing the column vectors as row vectors.

$$Example:\:A=\begin{bmatrix}1 & 2 \\3 & 4 \end{bmatrix}\:then\:A^{T}=\begin{bmatrix}1 & 3 \\2 & 4 \end{bmatrix}$$

Dot product of vectors

Any vector of dimension n can be represented as a matrix $v\in R^{n\times 1}$.
$$v_{1}=\begin{bmatrix}v_{11} \\v_{12} \\\vdots \\v_{1n}\end{bmatrix}\:v_{2}=\begin{bmatrix}v_{21} \\v_{22} \\\vdots \\v_{2n}\end{bmatrix}$$

The dot product of two vectors is the sum of the products of corresponding components (components along the same dimension) and can be expressed as

$$v_{1}\cdot v_{2}=v_1^Tv_{2}=v_2^Tv_{1}=v_{11}v_{21}+v_{12}v_{22}+\cdots+v_{1n}v_{2n}=\displaystyle\sum\limits_{k=1}^n v_{1k}v_{2k}$$

An example of the dot product of vectors is mentioned below −

$$Example:\:v_{1}=\begin{bmatrix}1 \\2 \\3\end{bmatrix}\:v_{2}=\begin{bmatrix}3 \\5 \\-1\end{bmatrix}\:v_{1}\cdot v_{2}=v_1^Tv_{2}=1\times3+2\times5+3\times(-1)=10$$
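As a quick check of the worked examples above, the following short sketch reproduces them with NumPy, which is already used elsewhere in this tutorial −

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)           # [[ 6  8] [10 12]] -> addition
print(A - B)           # [[-4 -4] [-4 -4]] -> subtraction
print(np.matmul(A, B)) # [[19 22] [43 50]] -> multiplication
print(A.T)             # [[1 3] [2 4]]     -> transpose

v1 = np.array([1, 2, 3])
v2 = np.array([3, 5, -1])
print(np.dot(v1, v2))  # 10 -> dot product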


TensorFlow – Basics

In this chapter, we will learn about the basics of TensorFlow. We will begin by understanding the data structure of a tensor.

Tensor Data Structure

Tensors are used as the basic data structure in TensorFlow. Tensors represent the connecting edges in any flow diagram, called the data flow graph. A tensor is defined as a multidimensional array or list.

Tensors are identified by the following three parameters −

Rank
The unit of dimensionality described within a tensor is called rank. It identifies the number of dimensions of the tensor. The rank of a tensor can be described as the order or n-dimensions of the tensor defined.

Shape
The number of rows and columns together define the shape of a tensor.

Type
Type describes the data type assigned to the tensor's elements.

A user needs to consider the following activities for building a tensor −

Build an n-dimensional array
Convert the n-dimensional array.

Various Dimensions of TensorFlow

TensorFlow includes various dimensions. The dimensions are described in brief below −

One dimensional Tensor

A one-dimensional tensor is a normal array structure which includes one set of values of the same data type.

Declaration

>>> import numpy as np
>>> tensor_1d = np.array([1.3, 1, 4.0, 23.99])
>>> print(tensor_1d)

The indexing of elements is the same as for Python lists. The first element starts at index 0; to print the values through an index, all you need to do is mention the index number.

>>> print(tensor_1d[0])
1.3
>>> print(tensor_1d[2])
4.0

Two dimensional Tensors

A sequence of arrays is used for creating "two dimensional tensors". Following is the complete syntax for creating a two-dimensional array −

>>> import numpy as np
>>> tensor_2d = np.array([(1,2,3,4),(4,5,6,7),(8,9,10,11),(12,13,14,15)])
>>> print(tensor_2d)
[[ 1  2  3  4]
 [ 4  5  6  7]
 [ 8  9 10 11]
 [12 13 14 15]]

The specific elements of two-dimensional tensors can be tracked with the help of a row number and column number specified as index numbers.

>>> tensor_2d[3][2]
14

Tensor Handling and Manipulations

In this section, we will learn about tensor handling and manipulations. To begin with, let us consider the following code −

import tensorflow as tf
import numpy as np

matrix1 = np.array([(2,2,2),(2,2,2),(2,2,2)], dtype = 'int32')
matrix2 = np.array([(1,1,1),(1,1,1),(1,1,1)], dtype = 'int32')

print (matrix1)
print (matrix2)

matrix1 = tf.constant(matrix1)
matrix2 = tf.constant(matrix2)
matrix_product = tf.matmul(matrix1, matrix2)
matrix_sum = tf.add(matrix1, matrix2)

matrix_3 = np.array([(2,7,2),(1,4,2),(9,0,2)], dtype = 'float32')
print (matrix_3)

matrix_det = tf.matrix_determinant(matrix_3)

with tf.Session() as sess:
   result1 = sess.run(matrix_product)
   result2 = sess.run(matrix_sum)
   result3 = sess.run(matrix_det)

print (result1)
print (result2)
print (result3)

Output

The above code prints the two input matrices, followed by the matrix product (a 3*3 matrix of 6s), the matrix sum (a 3*3 matrix of 3s), and the determinant of the third matrix (56.0).

Explanation

We created multidimensional arrays in the above source code. It is important to understand that we also created a graph and a session, which manage the tensors and generate the appropriate output. With the help of the graph, we have the output specifying the mathematical calculations between the tensors.
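The rank, shape, and type parameters described earlier can be inspected directly on a tensor. Here is a minimal sketch, assuming TensorFlow 1.x as used in this chapter −

import tensorflow as tf

tensor = tf.constant([[1, 2, 3], [4, 5, 6]])

print(tensor.get_shape())        # (2, 3) -> shape: 2 rows, 3 columns
print(tensor.get_shape().ndims)  # 2      -> rank: number of dimensions
print(tensor.dtype)              # <dtype: 'int32'> -> type of the elements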


TensorFlow – Single Layer Perceptron

For understanding the single layer perceptron, it is important to understand Artificial Neural Networks (ANN). An artificial neural network is an information processing system whose mechanism is inspired by the functionality of biological neural circuits. An artificial neural network possesses many processing units connected to each other.

In the schematic representation of an artificial neural network, the hidden units communicate with the external layer, while the input and output units communicate only through the hidden layer of the network. The pattern of connection of the nodes, the total number of layers, the level of nodes between inputs and outputs, and the number of neurons per layer define the architecture of a neural network.

There are two types of architecture. These types focus on the functionality of artificial neural networks as follows −

Single Layer Perceptron
Multi-Layer Perceptron

Single Layer Perceptron

The single layer perceptron is the first proposed neural model. The content of the neuron's local memory consists of a vector of weights. The computation of a single layer perceptron is performed as the sum of the input vector values, each multiplied by the corresponding element of the vector of weights. The value displayed in the output becomes the input of an activation function. (A minimal sketch of this computation appears at the end of this chapter.)

Let us focus on the implementation of a single layer perceptron for an image classification problem using TensorFlow. The best example to illustrate the single layer perceptron is through the representation of "Logistic Regression".

Now, let us consider the following basic steps of training logistic regression −

The weights are initialized with random values at the beginning of the training.
For each element of the training set, the error is calculated as the difference between the desired output and the actual output. The error calculated is used to adjust the weights.
The process is repeated until the error made on the entire training set falls below a specified threshold, or until the maximum number of iterations is reached.

The complete code for evaluation of logistic regression is mentioned below −

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)

import tensorflow as tf
import matplotlib.pyplot as plt

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder("float", [None, 784]) # mnist data image of shape 28*28 = 784
y = tf.placeholder("float", [None, 10])  # 0-9 digits recognition => 10 classes

# Create model
# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
activation = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax

# Minimize error using cross entropy
cross_entropy = y*tf.log(activation)
cost = tf.reduce_mean(-tf.reduce_sum(cross_entropy, reduction_indices = 1))

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Plot settings
avg_set = []
epoch_set = []

# Initializing the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
   sess.run(init)

   # Training cycle
   for epoch in range(training_epochs):
      avg_cost = 0.
      total_batch = int(mnist.train.num_examples/batch_size)

      # Loop over all batches
      for i in range(total_batch):
         batch_xs, batch_ys = mnist.train.next_batch(batch_size)
         # Fit training using batch data
         sess.run(optimizer, feed_dict = {x: batch_xs, y: batch_ys})
         # Compute average loss
         avg_cost += sess.run(cost, feed_dict = {x: batch_xs, y: batch_ys})/total_batch

      # Display logs per epoch step
      if epoch % display_step == 0:
         print ("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
      avg_set.append(avg_cost)
      epoch_set.append(epoch+1)

   print ("Training phase finished")

   plt.plot(epoch_set, avg_set, 'o', label = 'Logistic Regression Training phase')
   plt.ylabel('cost')
   plt.xlabel('epoch')
   plt.legend()
   plt.show()

   # Test model
   correct_prediction = tf.equal(tf.argmax(activation, 1), tf.argmax(y, 1))

   # Calculate accuracy
   accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
   print ("Model accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

Output

The above code generates output showing the cost at each epoch, the training-phase plot, and the final model accuracy.

Logistic regression is considered a form of predictive analysis. It is used to describe data and to explain the relationship between one dependent binary variable and one or more independent variables.
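As mentioned earlier in this chapter, here is a minimal NumPy sketch of the single layer perceptron computation itself: a weighted sum of the input vector fed into an activation function. The weights, bias, and step activation are illustrative choices, not values from the original text.

import numpy as np

def perceptron(x, w, b):
   # Weighted sum of the inputs, followed by a step activation
   z = np.dot(w, x) + b
   return 1 if z > 0 else 0

x = np.array([1.0, 0.5])    # input vector
w = np.array([0.4, -0.2])   # weight vector (adjusted during training)
b = 0.1                     # bias

print(perceptron(x, w, b))  # 1, since 0.4*1.0 + (-0.2)*0.5 + 0.1 = 0.4 > 0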