
TensorFlow – Forming Graphs

A partial differential equation (PDE) is a differential equation that involves the partial derivatives of an unknown function of several independent variables. Using a partial differential equation as our example, we will focus on creating a new graph.

Let us assume there is a pond with dimension 500×500 −

N = 500

Now, we will compute the partial differential equation and form the respective graph using it. Consider the steps given below for computing the graph.

Step 1 − Import the libraries for the simulation.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

Step 2 − Include functions for the transformation of a 2D array into a convolution kernel and a simplified 2D convolution operation.

def make_kernel(a):
   """Transform a 2D array into a convolution kernel"""
   a = np.asarray(a)
   a = a.reshape(list(a.shape) + [1, 1])
   return tf.constant(a, dtype = tf.float32)

def simple_conv(x, k):
   """A simplified 2D convolution operation"""
   x = tf.expand_dims(tf.expand_dims(x, 0), -1)
   y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding = "SAME")
   return y[0, :, :, 0]

def laplace(x):
   """Compute the 2D laplacian of an array"""
   laplace_k = make_kernel([[0.5, 1.0, 0.5],
                            [1.0, -6., 1.0],
                            [0.5, 1.0, 0.5]])
   return simple_conv(x, laplace_k)

sess = tf.InteractiveSession()

Step 3 − Set the initial conditions, include the number of iterations, and compute the graph to display the records accordingly.

N = 500

# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype = np.float32)
ut_init = np.zeros([N, N], dtype = np.float32)

# Some rain drops hit a pond at random points
for n in range(100):
   a, b = np.random.randint(0, N, 2)
   u_init[a, b] = np.random.uniform()

plt.imshow(u_init)
plt.show()

# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape = ())
damping = tf.placeholder(tf.float32, shape = ())

# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)

# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)

# Operation to update the state
step = tf.group(U.assign(U_), Ut.assign(Ut_))

# Initialize state to initial conditions
tf.initialize_all_variables().run()

# Run 1000 steps of PDE
for i in range(1000):
   # Step simulation
   step.run({eps: 0.03, damping: 0.04})
   # Visualize every 500 steps
   if i % 500 == 0:
      plt.imshow(U.eval())
      plt.show()

The graphs are plotted as shown below −
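The listing above uses TF 1.x APIs (placeholders, InteractiveSession) that were removed in TensorFlow 2. As a point of comparison only, here is a minimal NumPy/SciPy sketch of the same discretized update rule; the use of scipy.signal.convolve2d in place of the depthwise convolution is my assumption, not part of the original tutorial.

# Minimal NumPy-only sketch of the same damped wave update (assumption:
# scipy.signal.convolve2d with zero-fill boundary stands in for the
# "SAME"-padded depthwise convolution above).
import numpy as np
from scipy.signal import convolve2d

N = 500
LAPLACE_K = np.array([[0.5, 1.0, 0.5],
                      [1.0, -6.0, 1.0],
                      [0.5, 1.0, 0.5]], dtype=np.float32)

u = np.zeros((N, N), dtype=np.float32)    # pond surface height
ut = np.zeros((N, N), dtype=np.float32)   # surface velocity
for _ in range(100):                      # random rain drops
    a, b = np.random.randint(0, N, 2)
    u[a, b] = np.random.uniform()

eps, damping = 0.03, 0.04
for _ in range(1000):
    lap = convolve2d(u, LAPLACE_K, mode="same", boundary="fill")
    u, ut = u + eps * ut, ut + eps * (lap - damping * ut)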


TensorFlow – CNN And RNN Difference

In this chapter, we will focus on the differences between CNN and RNN −

CNN: It is suitable for spatial data such as images.
RNN: It is suitable for temporal data, also called sequential data.

CNN: CNN is considered to be more powerful than RNN.
RNN: RNN offers less feature compatibility when compared to CNN.

CNN: This network takes fixed-size inputs and generates fixed-size outputs.
RNN: RNN can handle arbitrary input/output lengths.

CNN: CNN is a type of feed-forward artificial neural network with variations of multilayer perceptrons designed to use minimal amounts of preprocessing.
RNN: Unlike feed-forward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs.

CNN: CNNs use a connectivity pattern between the neurons inspired by the organization of the animal visual cortex, whose individual neurons are arranged in such a way that they respond to overlapping regions tiling the visual field.
RNN: Recurrent neural networks use time-series information − what a user spoke last will impact what he/she will speak next.

CNN: CNNs are ideal for image and video processing.
RNN: RNNs are ideal for text and speech analysis.

The following illustration shows the schematic representation of CNN and RNN −
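To make the fixed-size versus variable-length contrast above concrete, here is a minimal tf.keras sketch; it assumes TensorFlow 2, and the layer sizes are illustrative choices, not from this chapter. The Conv2D model requires a fixed 28×28 input, while the SimpleRNN model accepts sequences of any length.

import tensorflow as tf

# CNN: fixed-size spatial input (e.g., a 28x28 grayscale image)
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# RNN: variable-length sequential input (None = any number of time steps)
rnn = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(None, 8)),
    tf.keras.layers.Dense(10, activation="softmax"),
])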


TensorFlow – XOR Implementation

In this chapter, we will learn about the XOR implementation using TensorFlow. Before starting with the XOR implementation in TensorFlow, let us look at the XOR table values. This will help us understand the encryption and decryption process.

A   B   A XOR B
0   0   0
0   1   1
1   0   1
1   1   0

The XOR cipher encryption method is basically used to encrypt data so that it is hard to crack with a brute-force method, i.e., by generating random encryption keys that match the appropriate key. The concept behind an XOR cipher is to define an XOR encryption key and then perform an XOR operation between this key and each character of the string that a user tries to encrypt.

Now we will focus on the XOR implementation using TensorFlow, which is mentioned below −

#Declaring necessary modules
import tensorflow as tf
import numpy as np

"""
A simple numpy implementation of a XOR gate to understand the backpropagation
algorithm
"""

x = tf.placeholder(tf.float64, shape = [4,2], name = "x")   #declaring a placeholder for input x
y = tf.placeholder(tf.float64, shape = [4,1], name = "y")   #declaring a placeholder for desired output y

m = np.shape(x)[0]   #number of training examples
n = np.shape(x)[1]   #number of features
hidden_s = 2         #number of nodes in the hidden layer
l_r = 1              #learning rate initialization

theta1 = tf.cast(tf.Variable(tf.random_normal([3, hidden_s]), name = "theta1"), tf.float64)
theta2 = tf.cast(tf.Variable(tf.random_normal([hidden_s + 1, 1]), name = "theta2"), tf.float64)

#conducting forward propagation
a1 = tf.concat([np.c_[np.ones(x.shape[0])], x], 1)

#the weights of the first layer are multiplied by the input of the first layer
z1 = tf.matmul(a1, theta1)

#the input of the second layer is the output of the first layer, passed through
#the activation function, and a column of biases is added
a2 = tf.concat([np.c_[np.ones(x.shape[0])], tf.sigmoid(z1)], 1)

#the input of the second layer is multiplied by the weights
z3 = tf.matmul(a2, theta2)

#the output is passed through the activation function to obtain the final probability
h3 = tf.sigmoid(z3)

cost_func = -tf.reduce_sum(y * tf.log(h3) + (1 - y) * tf.log(1 - h3), axis = 1)

#built-in tensorflow optimizer that conducts gradient descent using the specified
#learning rate to obtain theta values
optimiser = tf.train.GradientDescentOptimizer(learning_rate = l_r).minimize(cost_func)

#setting required X and Y values to perform the XOR operation
X = [[0,0],[0,1],[1,0],[1,1]]
Y = [[0],[1],[1],[0]]

#initializing all variables, creating a session and running a tensorflow session
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

#running gradient descent for each iteration and printing the hypothesis
#obtained using the updated theta values
for i in range(100000):
   sess.run(optimiser, feed_dict = {x: X, y: Y})   #setting placeholder values using feed_dict
   if i % 100 == 0:
      print("Epoch:", i)
      print("Hyp:", sess.run(h3, feed_dict = {x: X, y: Y}))

The above lines of code generate an output as shown in the screenshot below −
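Since the chapter mentions the XOR cipher only in passing, a minimal Python sketch may help; the single-byte key and the helper name xor_cipher are illustrative assumptions, not part of the chapter.

# Minimal XOR cipher sketch (the key value and function name are assumptions).
def xor_cipher(data: bytes, key: int) -> bytes:
    # XOR-ing each byte with the key encrypts; applying the same key again
    # decrypts, because (b ^ k) ^ k == b.
    return bytes(b ^ key for b in data)

ciphertext = xor_cipher(b"hello", 0x5A)
assert xor_cipher(ciphertext, 0x5A) == b"hello"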


Machine Learning – Mathematics

Machine learning is an interdisciplinary field that involves computer science, statistics, and mathematics. In particular, mathematics plays a critical role in developing and understanding machine learning algorithms. In this article, we will discuss the mathematical concepts that are essential for machine learning, including linear algebra, calculus, probability, and statistics.

Linear Algebra

Linear algebra is the branch of mathematics that deals with linear equations and their representation in vector spaces. In machine learning, linear algebra is used to represent and manipulate data. In particular, vectors and matrices are used to represent and manipulate data points, features, and weights in machine learning models.

A vector is an ordered list of numbers, while a matrix is a rectangular array of numbers. For example, a vector can represent a single data point, and a matrix can represent a dataset. Linear algebra operations, such as matrix multiplication and inversion, can be used to transform and analyze data.

Calculus

Calculus is the branch of mathematics that deals with rates of change and accumulation. In machine learning, calculus is used to optimize models by finding the minimum or maximum of a function. In particular, gradient descent, a widely used optimization algorithm, is based on calculus.

Gradient descent is an iterative optimization algorithm that updates the weights of a model based on the gradient of the loss function. The gradient is the vector of partial derivatives of the loss function with respect to each weight. By iteratively updating the weights in the direction of the negative gradient, gradient descent tries to minimize the loss function.

Probability

Probability is the branch of mathematics that deals with uncertainty and randomness. In machine learning, probability is used to model and analyze data that are uncertain or variable. In particular, probability distributions, such as Gaussian and Poisson distributions, are used to model the probability of data points or events.

Bayesian inference, a probabilistic modeling technique, is also widely used in machine learning. Bayesian inference is based on Bayes' theorem, which states that the probability of a hypothesis given the data is proportional to the probability of the data given the hypothesis multiplied by the prior probability of the hypothesis. By updating the prior probability based on the observed data, Bayesian inference can make probabilistic predictions or classifications.

Statistics

Statistics is the branch of mathematics that deals with the collection, analysis, interpretation, and presentation of data. In machine learning, statistics is used to evaluate and compare models, estimate model parameters, and test hypotheses. For example, cross-validation is a statistical technique that is used to evaluate the performance of a model on new, unseen data. In cross-validation, the dataset is split into multiple subsets, and the model is trained and evaluated on each subset. This allows us to estimate the performance of the model on new data and compare different models.
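In symbols, the Bayes' theorem relationship described above reads $$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

As a worked illustration of the gradient-descent update described under Calculus, here is a minimal NumPy sketch that fits a one-parameter linear model by repeatedly stepping against the gradient of a squared-error loss; the data, learning rate, and step count are illustrative assumptions.

import numpy as np

# Minimal gradient-descent sketch for the loss L(w) = mean((w*x - y)^2).
# dL/dw = mean(2 * (w*x - y) * x); stepping against the gradient reduces L.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                      # illustrative data with true weight 2.0
w, learning_rate = 0.0, 0.05

for _ in range(200):
    grad = np.mean(2.0 * (w * x - y) * x)
    w -= learning_rate * grad    # update in the negative-gradient direction

print(w)                         # converges toward 2.0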


Machine Learning – Challenges & Common Issues

Machine learning is a rapidly growing field with many promising applications. However, there are also several challenges and issues that must be addressed to fully realize the potential of machine learning. Some of the major challenges and common issues faced in machine learning include −

Overfitting

Overfitting occurs when a model is trained on a limited set of data and becomes too complex, leading to poor performance when tested on new data. This can be addressed by using techniques such as cross-validation, regularization, and early stopping.

Underfitting

Underfitting occurs when a model is too simple and fails to capture the patterns in the data. This can be addressed by using more complex models or by adding more features to the data.

Data Quality Issues

Machine learning models are only as good as the data they are trained on. Poor quality data can lead to inaccurate models. Data quality issues include missing values, incorrect values, and outliers.

Imbalanced Datasets

Imbalanced datasets occur when one class of data is significantly more prevalent than another. This can lead to biased models that are accurate for the majority class but perform poorly on the minority class.

Model Interpretability

Machine learning models can be very complex, making it difficult to understand how they arrive at their predictions. This can be a challenge when explaining the model to stakeholders or regulatory bodies. Techniques such as feature importance and partial dependence plots can help improve model interpretability.

Generalization

Machine learning models are trained on a specific dataset, and they may not perform well on new data that is outside the training set. This can be addressed by using techniques such as cross-validation and regularization.

Scalability

Machine learning models can be computationally expensive and may not scale well to large datasets. Techniques such as distributed computing, parallel processing, and sampling can help address scalability issues.

Ethical Considerations

Machine learning models can raise ethical concerns when they are used to make decisions that affect people's lives. These concerns include bias, privacy, and transparency. Techniques such as fairness metrics and explainable AI can help address ethical considerations.

Addressing these issues requires a combination of technical expertise and business knowledge, as well as an understanding of ethical considerations. By addressing these issues, machine learning can be used to develop accurate and reliable models that can provide valuable insights and drive business value.
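Early stopping, mentioned above as a remedy for overfitting, is straightforward to wire up in practice. Here is a minimal tf.keras sketch; it assumes TensorFlow 2, and the model, data names, and patience value are illustrative assumptions.

import tensorflow as tf

# Minimal early-stopping sketch: halt training once validation loss stops
# improving for `patience` consecutive epochs, keeping the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# x_train, y_train, x_val, y_val are assumed to be prepared elsewhere:
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])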


Theano – Expression for Matrix Multiplication

We will compute a dot product of two matrices. The first matrix is of dimension 2 x 3 and the second one is of dimension 3 x 2. The matrices that we use as input and their product are expressed here −

$$\begin{bmatrix}0 & -1 & 2\\4 & 11 & 2\end{bmatrix} \begin{bmatrix}3 & -1\\1 & 2\\6 & 1\end{bmatrix} = \begin{bmatrix}11 & 0\\35 & 20\end{bmatrix}$$

Declaring Variables

To write a Theano expression for the above, we first declare two variables to represent our matrices as follows −

a = tensor.dmatrix()
b = tensor.dmatrix()

The dmatrix is the type of matrices for doubles. Note that we do not specify the matrix size anywhere. Thus, these variables can represent matrices of any dimension.

Defining Expression

To compute the dot product, we use the built-in function called dot as follows −

c = tensor.dot(a,b)

The output of the multiplication is assigned to a matrix variable called c.

Defining Theano Function

Next, we define a function as in the earlier example to evaluate the expression.

f = theano.function([a,b], c)

Note that the inputs to the function are two variables, a and b, which are of matrix type. The function output is assigned to the variable c, which would automatically be of matrix type.

Invoking Theano Function

We now invoke the function using the following statement −

d = f([[0, -1, 2], [4, 11, 2]], [[3, -1], [1, 2], [6, 1]])

The two arguments in the above statement are plain Python lists. You may also explicitly pass NumPy arrays as shown here −

f(numpy.array([[0, -1, 2], [4, 11, 2]]), numpy.array([[3, -1], [1, 2], [6, 1]]))

After d is computed, we print its value −

print (d)

You will see the following output −

[[11.  0.]
 [35. 20.]]

Full Program Listing

The complete program listing is given here −

from theano import *
a = tensor.dmatrix()
b = tensor.dmatrix()
c = tensor.dot(a,b)
f = theano.function([a,b], c)
d = f([[0, -1, 2],[4, 11, 2]], [[3, -1],[1,2],[6,1]])
print (d)

The screenshot of the program execution is shown here −
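To sanity-check the Theano result, the same product can be computed directly with NumPy; this verification step is an addition for illustration, not part of the original listing.

import numpy as np

# Verify the matrix product independently of Theano.
a = np.array([[0, -1, 2], [4, 11, 2]])
b = np.array([[3, -1], [1, 2], [6, 1]])
print(np.dot(a, b))   # [[11  0]
                      #  [35 20]]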


Theano – Data Types

Now that you have understood the basics of Theano, let us begin with the different data types available to you for creating your expressions. The following table gives you a partial list of data types defined in Theano.

Data type         Theano type
Byte              bscalar, bvector, bmatrix, brow, bcol, btensor3, btensor4, btensor5, btensor6, btensor7
16-bit integers   wscalar, wvector, wmatrix, wrow, wcol, wtensor3, wtensor4, wtensor5, wtensor6, wtensor7
32-bit integers   iscalar, ivector, imatrix, irow, icol, itensor3, itensor4, itensor5, itensor6, itensor7
64-bit integers   lscalar, lvector, lmatrix, lrow, lcol, ltensor3, ltensor4, ltensor5, ltensor6, ltensor7
Float             fscalar, fvector, fmatrix, frow, fcol, ftensor3, ftensor4, ftensor5, ftensor6, ftensor7
Double            dscalar, dvector, dmatrix, drow, dcol, dtensor3, dtensor4, dtensor5, dtensor6, dtensor7
Complex           cscalar, cvector, cmatrix, crow, ccol, ctensor3, ctensor4, ctensor5, ctensor6, ctensor7

The above list is not exhaustive, and the reader is referred to the tensor creation document for a complete list. We will now give a few examples of how to create variables of various kinds of data in Theano.

Scalar

To construct a scalar variable, you would use the syntax −

Syntax

x = theano.tensor.scalar('x')
x = 5.0
print (x)

Output

5.0

One-dimensional Array

To create a one-dimensional array, use the following declaration −

Example

f = theano.tensor.vector
f = (2.0, 5.0, 3.0)
print (f)
print (f[0])
print (f[2])

Output

(2.0, 5.0, 3.0)
2.0
3.0

If you access f[3], it would generate an index out of range error as shown here −

print (f[3])

Output

IndexError                          Traceback (most recent call last)
<ipython-input-13-2a9c2a643c3a> in <module>
   4 print (f[0])
   5 print (f[2])
----> 6 print (f[3])
IndexError: tuple index out of range

Two-dimensional Array

To declare a two-dimensional array, you would use the following code snippet −

Example

m = theano.tensor.matrix
m = ([2,3], [4,5], [2,4])
print (m[0])
print (m[1][0])

Output

[2, 3]
4

5-dimensional Array

To declare a 5-dimensional array, use the following syntax −

Example

m5 = theano.tensor.tensor5
m5 = ([0,1,2,3,4], [5,6,7,8,9], [10,11,12,13,14])
print (m5[1])
print (m5[2][3])

Output

[5, 6, 7, 8, 9]
13

You may declare a 3-dimensional array by using the data type tensor3 in place of tensor5, a 4-dimensional array using the data type tensor4, and so on up to tensor7.

Plural Constructors

Sometimes, you may want to create variables of the same type in a single declaration. You can do so by using the following syntax −

Syntax

from theano.tensor import *
x, y, z = dmatrices('x', 'y', 'z')
x = ([1,2],[3,4],[5,6])
y = ([7,8],[9,10],[11,12])
z = ([13,14],[15,16],[17,18])
print (x[2])
print (y[1])
print (z[0])

Output

[5, 6]
[9, 10]
[13, 14]
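Note that the snippets above immediately rebind the Python names to ordinary tuples and lists, so the Theano types themselves are never exercised. For reference, here is a minimal sketch of a genuinely symbolic use of dscalar, in the style of the matrix multiplication chapter; the particular expression x**2 + 1 is an illustrative choice.

import theano
import theano.tensor as tensor

# Build a symbolic expression over a double scalar and compile it.
x = tensor.dscalar('x')
y = x ** 2 + 1
f = theano.function([x], y)
print (f(3.0))   # prints 10.0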


TensorFlow – Basics

In this chapter, we will learn about the basics of TensorFlow. We will begin by understanding the data structure of a tensor.

Tensor Data Structure

Tensors are used as the basic data structures in the TensorFlow language. Tensors represent the connecting edges in any flow diagram, called the Data Flow Graph. A tensor is defined as a multidimensional array or list.

Tensors are identified by the following three parameters −

Rank
The unit of dimensionality described within a tensor is called rank. It identifies the number of dimensions of the tensor. The rank of a tensor can be described as the order or n-dimensions of the tensor defined.

Shape
The number of rows and columns together define the shape of a tensor.

Type
Type describes the data type assigned to the tensor's elements.

A user needs to consider the following activities for building a tensor −
Build an n-dimensional array
Convert the n-dimensional array into a tensor

Various Dimensions of TensorFlow

TensorFlow includes various dimensions. The dimensions are described in brief below −

One dimensional Tensor

A one dimensional tensor is a normal array structure which includes one set of values of the same data type.

Declaration

>>> import numpy as np
>>> tensor_1d = np.array([1.3, 1, 4.0, 23.99])
>>> print (tensor_1d)

The implementation with the output is shown in the screenshot below −

The indexing of elements is the same as for Python lists. The first element starts with an index of 0; to print the values through an index, all you need to do is mention the index number.

>>> print (tensor_1d[0])
1.3
>>> print (tensor_1d[2])
4.0

Two dimensional Tensors

A sequence of arrays is used for creating "two dimensional tensors". The creation of two-dimensional tensors is described below −

Following is the complete syntax for creating two dimensional arrays −

>>> import numpy as np
>>> tensor_2d = np.array([(1,2,3,4),(4,5,6,7),(8,9,10,11),(12,13,14,15)])
>>> print(tensor_2d)
[[ 1  2  3  4]
 [ 4  5  6  7]
 [ 8  9 10 11]
 [12 13 14 15]]
>>>

The specific elements of two dimensional tensors can be tracked with the help of the row number and column number specified as index numbers.

>>> tensor_2d[3][2]
14

Tensor Handling and Manipulations

In this section, we will learn about tensor handling and manipulations. To begin with, let us consider the following code −

import tensorflow as tf
import numpy as np

matrix1 = np.array([(2,2,2),(2,2,2),(2,2,2)], dtype = 'int32')
matrix2 = np.array([(1,1,1),(1,1,1),(1,1,1)], dtype = 'int32')

print (matrix1)
print (matrix2)

matrix1 = tf.constant(matrix1)
matrix2 = tf.constant(matrix2)
matrix_product = tf.matmul(matrix1, matrix2)
matrix_sum = tf.add(matrix1, matrix2)

matrix_3 = np.array([(2,7,2),(1,4,2),(9,0,2)], dtype = 'float32')
print (matrix_3)

matrix_det = tf.matrix_determinant(matrix_3)

with tf.Session() as sess:
   result1 = sess.run(matrix_product)
   result2 = sess.run(matrix_sum)
   result3 = sess.run(matrix_det)

print (result1)
print (result2)
print (result3)

Output

The above code will generate the following output −

Explanation

We have created multidimensional arrays in the above source code. Now, it is important to understand that we created a graph and a session, which manage the tensors and generate the appropriate output. With the help of the graph, we have the output specifying the mathematical calculations between the tensors.
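The listing above uses TF 1.x session semantics. For comparison, here is a minimal sketch of the same three operations under TensorFlow 2's eager execution; this is an addition for reference, since the chapter itself targets TF 1.x.

import tensorflow as tf
import numpy as np

# TF2 eager sketch: no graph/session boilerplate, results are computed directly.
matrix1 = tf.constant(np.full((3, 3), 2, dtype=np.int32))
matrix2 = tf.constant(np.ones((3, 3), dtype=np.int32))
matrix_3 = tf.constant([[2., 7., 2.], [1., 4., 2.], [9., 0., 2.]])

print(tf.matmul(matrix1, matrix2))    # matrix product
print(tf.add(matrix1, matrix2))       # elementwise sum
print(tf.linalg.det(matrix_3))        # determinant (tf.linalg.det replaces
                                      # the older tf.matrix_determinant)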


TensorFlow – Single Layer Perceptron

For understanding the single layer perceptron, it is important to understand Artificial Neural Networks (ANN). An artificial neural network is an information processing system whose mechanism is inspired by the functionality of biological neural circuits. An artificial neural network possesses many processing units connected to each other. Following is the schematic representation of an artificial neural network −

The diagram shows that the hidden units communicate with the external layer, while the input and output units communicate only through the hidden layer of the network.

The pattern of connection with nodes, the total number of layers, the level of nodes between inputs and outputs, and the number of neurons per layer define the architecture of a neural network.

There are two types of architecture. These types focus on the functionality of artificial neural networks as follows −

Single Layer Perceptron
Multi-Layer Perceptron

Single Layer Perceptron

The single layer perceptron is the first proposed neural model. The content of the neuron's local memory consists of a vector of weights. The computation of a single layer perceptron is performed over the calculation of the sum of the input vector, each value multiplied by the corresponding element of the vector of weights. The value which is displayed in the output is the input of an activation function.

Let us focus on the implementation of a single layer perceptron for an image classification problem using TensorFlow. The best example to illustrate the single layer perceptron is through the representation of "Logistic Regression".

Now, let us consider the following basic steps of training logistic regression (a minimal sketch of this learning rule appears after the chapter's listing) −

The weights are initialized with random values at the beginning of the training.
For each element of the training set, the error is calculated as the difference between the desired output and the actual output.
The error calculated is used to adjust the weights.
The process is repeated until the error made on the entire training set is less than the specified threshold, or until the maximum number of iterations is reached.

The complete code for the evaluation of logistic regression is mentioned below −

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)

import tensorflow as tf
import matplotlib.pyplot as plt

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder("float", [None, 784])   # mnist data image of shape 28*28 = 784
y = tf.placeholder("float", [None, 10])    # 0-9 digits recognition => 10 classes

# Create model
# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
activation = tf.nn.softmax(tf.matmul(x, W) + b)   # Softmax

# Minimize error using cross entropy
cross_entropy = y * tf.log(activation)
cost = tf.reduce_mean(-tf.reduce_sum(cross_entropy, reduction_indices = 1))

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Plot settings
avg_set = []
epoch_set = []

# Initializing the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
   sess.run(init)
   
   # Training cycle
   for epoch in range(training_epochs):
      avg_cost = 0.
      total_batch = int(mnist.train.num_examples/batch_size)
      
      # Loop over all batches
      for i in range(total_batch):
         batch_xs, batch_ys = mnist.train.next_batch(batch_size)
         
         # Fit training using batch data
         sess.run(optimizer, feed_dict = {x: batch_xs, y: batch_ys})
         
         # Compute average loss
         avg_cost += sess.run(cost, feed_dict = {x: batch_xs, y: batch_ys})/total_batch
      
      # Display logs per epoch step
      if epoch % display_step == 0:
         print ("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
      
      avg_set.append(avg_cost)
      epoch_set.append(epoch+1)
   
   print ("Training phase finished")
   
   plt.plot(epoch_set, avg_set, 'o', label = 'Logistic Regression Training phase')
   plt.ylabel('cost')
   plt.xlabel('epoch')
   plt.legend()
   plt.show()
   
   # Test model
   correct_prediction = tf.equal(tf.argmax(activation, 1), tf.argmax(y, 1))
   
   # Calculate accuracy
   accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
   print ("Model accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

Output

The above code generates the following output −

Logistic regression is considered a predictive analysis. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more independent variables.
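The four training steps listed earlier amount to the classic perceptron learning rule. As a standalone illustration of that rule (an addition for clarity, not part of the chapter's MNIST example), here is a minimal NumPy sketch that learns the logical AND function −

import numpy as np

# Minimal perceptron learning-rule sketch on the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(0)
w = rng.normal(size=2)          # weights initialized with random values
b = 0.0
lr = 0.1

for _ in range(20):                           # repeat for a fixed number of passes
    for xi, target in zip(X, y):
        output = 1 if xi @ w + b > 0 else 0   # step activation
        error = target - output               # desired output minus actual output
        w += lr * error * xi                  # adjust weights by the error
        b += lr * error

print([1 if xi @ w + b > 0 else 0 for xi in X])   # expect [0, 0, 0, 1]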