
PyBrain – Testing Network

In this chapter, we are going to see an example where we train a network and test the errors on the trained data. We are going to make use of the following trainers −

BackpropTrainer − BackpropTrainer is a trainer that trains the parameters of a module according to a supervised dataset or a ClassificationDataSet (potentially sequential) by backpropagating the errors (through time).

TrainUntilConvergence − It is used to train the module on the dataset until it converges.

When we create a neural network, it gets trained based on the training data given to it. Whether the network is trained properly or not depends on the prediction of test data tested on that network.

Let us see a step-by-step working example where we build a neural network and compute its training, test and validation errors.

Testing our Network

Following are the steps we will follow for testing our network −

Importing required PyBrain and other packages
Creating a ClassificationDataSet
Splitting the dataset: 25% as test data and 75% as training data
Converting the test data and training data back to ClassificationDataSet
Creating a neural network
Training the network
Visualizing the error and validation data
Computing the percent error on the test data

Step 1

Import the required PyBrain and other packages. The packages that we need are imported as shown below −

from sklearn import datasets
import matplotlib.pyplot as plt
from pybrain.datasets import ClassificationDataSet
from pybrain.utilities import percentError
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SoftmaxLayer
from numpy import ravel

Step 2

The next step is to create a ClassificationDataSet. We are going to use the load_digits dataset from sklearn −

digits = datasets.load_digits()
X, y = digits.data, digits.target

# the inputs are 64-dimensional arrays, and since the digits range from 0-9 there are 10 classes
ds = ClassificationDataSet(64, 1, nb_classes=10)

for i in range(len(X)):
   ds.addSample(ravel(X[i]), y[i]) # adding samples to the dataset

Step 3

Split the dataset: 25% as test data and 75% as training data −

test_data_temp, training_data_temp = ds.splitWithProportion(0.25)

Here, we have used the method splitWithProportion() with value 0.25; it splits the dataset into 25% test data and 75% training data.

Step 4

Convert the test data and training data back to ClassificationDataSet −

test_data = ClassificationDataSet(64, 1, nb_classes=10)

for n in range(0, test_data_temp.getLength()):
   test_data.addSample( test_data_temp.getSample(n)[0], test_data_temp.getSample(n)[1] )

training_data = ClassificationDataSet(64, 1, nb_classes=10)

for n in range(0, training_data_temp.getLength()):
   training_data.addSample( training_data_temp.getSample(n)[0], training_data_temp.getSample(n)[1] )

test_data._convertToOneOfMany()
training_data._convertToOneOfMany()

Calling splitWithProportion() on the dataset returns supervised datasets, so we convert them back to ClassificationDataSet as shown in the above step.

Step 5

The next step is creating a neural network −

net = buildNetwork(training_data.indim, 64, training_data.outdim, outclass=SoftmaxLayer)

We are creating a network whose input and output dimensions are taken from the training data.
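To see where these dimensions come from, you can inspect the converted dataset directly. Below is a minimal sketch, assuming the code above has been run; _convertToOneOfMany() expands the single class column into 10 binary target columns −

print(training_data.indim)  # 64 - one input per pixel of the 8x8 digit image
print(training_data.outdim) # 10 - one output unit per class after _convertToOneOfMany()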
Step 6

Now the important part is training the network on the dataset, as shown below −

trainer = BackpropTrainer(net, dataset=training_data, momentum=0.1, learningrate=0.01, verbose=True, weightdecay=0.01)

We are using the BackpropTrainer() method with the dataset on the network created.

Step 7

The next step is visualizing the error and the validation of the data −

trnerr, valerr = trainer.trainUntilConvergence(dataset=training_data, maxEpochs=10)
plt.plot(trnerr, 'b', valerr, 'r')
plt.show()

We use a method called trainUntilConvergence on the training data, with a maximum of 10 epochs. It returns the training error and validation error, which we have plotted as shown below. The blue line shows the training errors and the red line shows the validation errors.

The total error received during the execution of the above code is shown below −

Total error: 0.0432857814358
Total error: 0.0222276374185
Total error: 0.0149012052174
Total error: 0.011876985318
Total error: 0.00939854792853
Total error: 0.00782202445183
Total error: 0.00714707652044
Total error: 0.00606068893793
Total error: 0.00544257958975
Total error: 0.00463929281336
Total error: 0.00441275665294
('train-errors:', '[0.043286 , 0.022228 , 0.014901 , 0.011877 , 0.009399 , 0.007822 , 0.007147 , 0.006061 , 0.005443 , 0.004639 , 0.004413 ]')
('valid-errors:', '[0.074296 , 0.027332 , 0.016461 , 0.014298 , 0.012129 , 0.009248 , 0.008922 , 0.007917 , 0.006547 , 0.005883 , 0.006572 , 0.005811 ]')

The error starts at 0.04 and decreases with each epoch, which means the network is getting trained and gets better with each epoch.

Step 8

Check the percent error on the test data using the percentError method, as shown below −

print('Percent Error on testData:', percentError(trainer.testOnClassData(dataset=test_data), test_data['class']))

Percent Error on testData: 3.34075723830735

We are getting an error of 3.34%, which means the neural network is about 96.7% accurate.

Below is the full code −

from sklearn import datasets
import matplotlib.pyplot as plt
from pybrain.datasets import ClassificationDataSet
from pybrain.utilities import percentError
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SoftmaxLayer
from numpy import ravel

digits = datasets.load_digits()
X, y = digits.data, digits.target

ds = ClassificationDataSet(64, 1, nb_classes=10)

for i in range(len(X)):
   ds.addSample(ravel(X[i]), y[i])

test_data_temp, training_data_temp = ds.splitWithProportion(0.25)

test_data = ClassificationDataSet(64, 1, nb_classes=10)
for n in range(0, test_data_temp.getLength()):
   test_data.addSample( test_data_temp.getSample(n)[0], test_data_temp.getSample(n)[1] )

training_data = ClassificationDataSet(64, 1, nb_classes=10)
for n in range(0, training_data_temp.getLength()):
   training_data.addSample( training_data_temp.getSample(n)[0], training_data_temp.getSample(n)[1] )

test_data._convertToOneOfMany()
training_data._convertToOneOfMany()

net = buildNetwork(training_data.indim, 64, training_data.outdim, outclass=SoftmaxLayer)

trainer = BackpropTrainer(
   net, dataset=training_data, momentum=0.1,
   learningrate=0.01, verbose=True, weightdecay=0.01
)

trnerr, valerr = trainer.trainUntilConvergence(dataset=training_data, maxEpochs=10)
plt.plot(trnerr, 'b', valerr, 'r')
plt.show()

trainer.trainEpochs(10)
print('Percent Error on testData:', percentError(
   trainer.testOnClassData(dataset=test_data), test_data['class']
))
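Since percentError returns the percentage of misclassified samples, the accuracy is simply its complement. Below is a minimal sketch, assuming trainer and test_data from the code above −

err = percentError(trainer.testOnClassData(dataset=test_data), test_data['class'])
print('Accuracy on testData: %.2f%%' % (100 - err)) # about 96.7% for the run shown above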


PyBrain – Working with Recurrent Networks

A recurrent network is the same as a feed-forward network, with the only difference being that the network needs to remember the data at each step; the history of each step has to be saved. We will learn how to −

Create a Recurrent Network
Add Modules and Connections

Creating a Recurrent Network

To create a recurrent network, we will use the RecurrentNetwork class as shown below −

rn.py

from pybrain.structure import RecurrentNetwork
recurrentn = RecurrentNetwork()
print(recurrentn)

python rn.py

C:\pybrain\pybrain\src>python rn.py
RecurrentNetwork-0
Modules:
[]
Connections:
[]
Recurrent Connections:
[]

We can see a new section called Recurrent Connections for the recurrent network. Right now there is no data available. Let us now create the layers, add them as modules and create the connections.

Adding Modules and Connections

We are going to create the layers, i.e., input, hidden and output, and add them as the input, hidden and output modules. Next, we will create the connections from input to hidden and from hidden to output, and a recurrent connection from hidden to hidden. Here is the code for the recurrent network with modules and connections −

rn.py

from pybrain.structure import RecurrentNetwork
from pybrain.structure import LinearLayer, SigmoidLayer
from pybrain.structure import FullConnection

recurrentn = RecurrentNetwork()

# creating layers for input => 2, hidden => 3 and output => 1
inputLayer = LinearLayer(2, 'rn_in')
hiddenLayer = SigmoidLayer(3, 'rn_hidden')
outputLayer = LinearLayer(1, 'rn_output')

# adding the layers to the recurrent network
recurrentn.addInputModule(inputLayer)
recurrentn.addModule(hiddenLayer)
recurrentn.addOutputModule(outputLayer)

# creating connections between input, hidden and output
input_to_hidden = FullConnection(inputLayer, hiddenLayer)
hidden_to_output = FullConnection(hiddenLayer, outputLayer)
hidden_to_hidden = FullConnection(hiddenLayer, hiddenLayer)

# adding the connections to the network
recurrentn.addConnection(input_to_hidden)
recurrentn.addConnection(hidden_to_output)
recurrentn.addRecurrentConnection(hidden_to_hidden)

recurrentn.sortModules()

print(recurrentn)

python rn.py

C:\pybrain\pybrain\src>python rn.py
RecurrentNetwork-6
Modules:
[<LinearLayer 'rn_in'>, <SigmoidLayer 'rn_hidden'>, <LinearLayer 'rn_output'>]
Connections:
[<FullConnection 'FullConnection-4': 'rn_hidden' -> 'rn_output'>, <FullConnection 'FullConnection-5': 'rn_in' -> 'rn_hidden'>]
Recurrent Connections:
[<FullConnection 'FullConnection-3': 'rn_hidden' -> 'rn_hidden'>]

In the above output we can see the Modules, Connections and Recurrent Connections.

Let us now activate the network using the activate method. Add the below code to the one created earlier −

rn.py

# activate the network using the activate() method
act1 = recurrentn.activate((2, 2))
print(act1)

act2 = recurrentn.activate((2, 2))
print(act2)

python rn.py

C:\pybrain\pybrain\src>python rn.py
[-1.24317586]
[-0.54117783]
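The two outputs above differ for the same input because the hidden layer's previous activation is fed back through the recurrent connection. As a minimal sketch of this behaviour, clearing the stored history with the standard reset() method makes the first result repeat −

# clear the remembered history of the recurrent connection
recurrentn.reset()

act3 = recurrentn.activate((2, 2))
print(act3) # matches the first activation again, e.g. [-1.24317586]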


PyBrain – Reinforcement Learning Module

Reinforcement Learning (RL) is an important part of Machine Learning. Reinforcement learning makes the agent learn its behaviour based on inputs from the environment. The components that interact with each other during reinforcement learning are as follows −

Environment
Agent
Task
Experiment

In RL, the agent interacts with the environment in iterations. At each iteration, the agent receives an observation along with a reward. It then chooses an action and sends it to the environment. At each iteration the environment moves to a new state, and the reward received each time is saved. The goal of the RL agent is to collect as many rewards as possible. Between iterations, the agent's performance is compared with that of an agent that acts optimally, and the difference in performance determines reward or failure. RL is typically used in problem-solving tasks like robot control, elevator scheduling, telecommunications, games, etc.

Let us take a look at how to work with RL in PyBrain. We are going to work on a maze environment, represented as a 2-dimensional numpy array, where 1 is a wall and 0 is a free field. The agent's responsibility is to move over the free fields and find the goal point. Here is a step-by-step flow of working with the maze environment.

Step 1

Import the packages we need with the below code −

from scipy import *
import sys, time
import matplotlib.pyplot as pylab # for visualization we are using matplotlib

from pybrain.rl.environments.mazes import Maze, MDPMazeTask
from pybrain.rl.learners.valuebased import ActionValueTable
from pybrain.rl.agents import LearningAgent
from pybrain.rl.learners import Q, QLambda, SARSA #@UnusedImport
from pybrain.rl.explorers import BoltzmannExplorer #@UnusedImport
from pybrain.rl.experiments import Experiment
from pybrain.rl.environments import Task

Step 2

Create the maze environment using the below code −

# create the maze; walls are 1 and free fields are 0
mazearray = array(
   [[1, 1, 1, 1, 1, 1, 1, 1, 1],
   [1, 0, 0, 1, 0, 0, 0, 0, 1],
   [1, 0, 0, 1, 0, 0, 1, 0, 1],
   [1, 0, 0, 1, 0, 0, 1, 0, 1],
   [1, 0, 0, 1, 0, 1, 1, 0, 1],
   [1, 0, 0, 0, 0, 0, 1, 0, 1],
   [1, 1, 1, 1, 1, 1, 1, 0, 1],
   [1, 0, 0, 0, 0, 0, 0, 0, 1],
   [1, 1, 1, 1, 1, 1, 1, 1, 1]]
)

# create the environment; the first parameter is the maze array and the second is the goal field tuple
env = Maze(mazearray, (7, 7))

Step 3

The next step is to create the agent. The agent plays an important role in RL. It will interact with the maze environment using the getAction() and integrateObservation() methods.

The agent has a controller (which maps the states to actions) and a learner. The controller in PyBrain is like a module: its input is states, which it converts into actions.

controller = ActionValueTable(81, 4)
controller.initialize(1.)

The ActionValueTable needs 2 inputs, i.e., the number of states and the number of actions. The 9x9 maze has 81 states, and the standard maze environment has 4 actions: north, south, east, west.

Now we will create a learner. We are going to use the SARSA() learning algorithm for the learner to be used with the agent.

learner = SARSA()
agent = LearningAgent(controller, learner)

Step 4

This step adds the agent to the environment. To connect the agent to the environment, we need a special component called a task. The role of a task is to look for the goal in the environment and determine how the agent gets rewards for its actions. The environment has its own task. The Maze environment that we have used has the MDPMazeTask task.
MDP stands for "Markov decision process", which means the agent knows its position in the maze. The environment is a parameter to the task −

task = MDPMazeTask(env)

Step 5

The next step after adding the agent to the environment is to create an Experiment. The experiment lets the task and the agent coordinate with each other −

experiment = Experiment(task, agent)

Now we are going to run the experiment 1000 times as shown below −

for i in range(1000):
   experiment.doInteractions(100)
   agent.learn()
   agent.reset()

The environment runs 100 interactions between the agent and the task when the following code gets executed −

experiment.doInteractions(100)

After each interaction, the environment gives a new state back to the task, which decides what information and reward should be passed to the agent. We are going to plot a new table after learning and resetting the agent inside the for loop −

for i in range(1000):
   experiment.doInteractions(100)
   agent.learn()
   agent.reset()

   pylab.pcolor(controller.params.reshape(81,4).max(1).reshape(9,9))
   pylab.savefig("test.png")

Here is the full code −

Example

maze.py

from scipy import *
import sys, time
import matplotlib.pyplot as pylab

from pybrain.rl.environments.mazes import Maze, MDPMazeTask
from pybrain.rl.learners.valuebased import ActionValueTable
from pybrain.rl.agents import LearningAgent
from pybrain.rl.learners import Q, QLambda, SARSA #@UnusedImport
from pybrain.rl.explorers import BoltzmannExplorer #@UnusedImport
from pybrain.rl.experiments import Experiment
from pybrain.rl.environments import Task

# create the maze array
mazearray = array(
   [[1, 1, 1, 1, 1, 1, 1, 1, 1],
   [1, 0, 0, 1, 0, 0, 0, 0, 1],
   [1, 0, 0, 1, 0, 0, 1, 0, 1],
   [1, 0, 0, 1, 0, 0, 1, 0, 1],
   [1, 0, 0, 1, 0, 1, 1, 0, 1],
   [1, 0, 0, 0, 0, 0, 1, 0, 1],
   [1, 1, 1, 1, 1, 1, 1, 0, 1],
   [1, 0, 0, 0, 0, 0, 0, 0, 1],
   [1, 1, 1, 1, 1, 1, 1, 1, 1]]
)
env = Maze(mazearray, (7, 7))

# create the task
task = MDPMazeTask(env)

# the controller maps states to actions
controller = ActionValueTable(81, 4)
controller.initialize(1.)

# create the learner - using SARSA()
learner = SARSA()

# create the agent with the controller and learner
agent = LearningAgent(controller, learner)

# create the experiment
experiment = Experiment(task, agent)

# prepare plotting
pylab.gray()
pylab.ion()

for i in range(1000):
   experiment.doInteractions(100)
   agent.learn()
   agent.reset()

   pylab.pcolor(controller.params.reshape(81,4).max(1).reshape(9,9))
   pylab.savefig("test.png")

Output

python maze.py

The color of the free fields changes at each iteration as the agent learns.
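After training, you can also read the learned values straight out of the table. Below is a minimal sketch, assuming the trained controller from the code above; it picks, for each of the 81 states, the action with the highest learned value, giving a simple greedy policy over the 9x9 grid −

import numpy as np

# each row of the table holds the learned values of the 4 actions for one state
greedy_policy = np.argmax(controller.params.reshape(81, 4), axis=1).reshape(9, 9)
print(greedy_policy) # index of the best action per cell; the mapping of indices 0-3 to directions is an assumption here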


PyBrain – Environment Setup

In this chapter, we will work on the installation of PyBrain. To start working with PyBrain, we need to install Python first. So we are going to work on the following −

Install Python
Install PyBrain

Installing Python

To install Python, go to the official Python site and click on the latest version available for Windows, Linux/Unix or macOS. Download Python as per the 64-bit or 32-bit OS available with you.

Once downloaded, click on the .exe file and follow the steps to install Python on your system.

The Python package manager, i.e., pip, will also get installed by default with the above installation. To make it work globally on your system, the location of Python has to be added to the PATH variable; the installer shows a checkbox labeled "Add Python to PATH" at the start of the installation, so remember to check it. In case you forget to check it, follow the below given steps to add it to PATH.

Add to PATH

To add to PATH, follow the below steps −

Right-click on your Computer icon and click on Properties -> Advanced System Settings.
Click on Environment Variables.
Select Path, click on the Edit button, and add the location path of your Python at the end.

Now let us check the Python version.

Checking the Python version

The below command helps us in checking the version of Python −

E:\pybrain>python --version
Python 3.7.3

Installing PyBrain

Now that we have installed Python, we are going to install PyBrain. Clone the pybrain repository as shown below −

git clone git://github.com/pybrain/pybrain.git

C:\pybrain>git clone git://github.com/pybrain/pybrain.git
Cloning into 'pybrain'...
remote: Enumerating objects: 2, done.
remote: Counting objects: 100% (2/2), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 12177 (delta 0), reused 0 (delta 0), pack-reused 12175
Receiving objects: 100% (12177/12177), 13.29 MiB | 510.00 KiB/s, done.
Resolving deltas: 100% (8506/8506), done.

Now cd into the pybrain directory and run the following command −

python setup.py install

This command will install pybrain on your system.

Once done, to check whether pybrain is installed or not, open the command prompt and start the Python interpreter as shown below −

C:\pybrain\pybrain>python
Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>

We can import pybrain using the below code −

>>> import pybrain
>>>

If import pybrain works without any errors, it means pybrain is installed successfully. You can now write your code to start working with pybrain.
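Alternatively, you can put the check in a small script instead of the interactive interpreter. Below is a minimal sketch (checkinstall.py is a hypothetical file name) −

checkinstall.py

import pybrain

# printing the package path confirms where pybrain was installed
print(pybrain.__file__)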


PyBrain – Working with Datasets

A dataset is the input data given to test, validate and train networks. The type of dataset to be used depends on the tasks that we are going to do with Machine Learning. In this chapter, we are going to take a look at the following −

Creating a Dataset
Adding Data to a Dataset

We will first learn how to create a dataset and test the dataset with the given input.

Creating a Dataset

To create a dataset we need to use the pybrain dataset package: pybrain.datasets. PyBrain supports dataset classes like SupervisedDataSet, SequentialDataSet and ClassificationDataSet. The dataset to be used depends on the machine learning task that the user is trying to implement. SupervisedDataSet is the simplest one, and we are going to use it here.

A SupervisedDataSet needs the parameters input and target. Consider an XOR truth table, as shown below −

A   B   A XOR B
0   0   0
0   1   1
1   0   1
1   1   0

The inputs are like a 2-dimensional array, and we get 1 output. So here the input size is 2 and the target (output) size is 1, which means the parameters for our dataset will be 2, 1.

createdataset.py

from pybrain.datasets import SupervisedDataSet
sds = SupervisedDataSet(2, 1)
print(sds)

This is what we get when we execute the above code with python createdataset.py −

C:\pybrain\pybrain\src>python createdataset.py
input: dim(0, 2)
[]
target: dim(0, 1)
[]

It displays an input of size 2 and a target of size 1, as shown above.

Adding Data to a Dataset

Let us now add the sample data to the dataset.

createdataset.py

from pybrain.datasets import SupervisedDataSet

sds = SupervisedDataSet(2, 1)

xorModel = [
   [(0,0), (0,)],
   [(0,1), (1,)],
   [(1,0), (1,)],
   [(1,1), (0,)],
]

for input, target in xorModel:
   sds.addSample(input, target)

print("Input is:")
print(sds['input'])

print("\nTarget is:")
print(sds['target'])

We have created an xorModel array as shown below −

xorModel = [
   [(0,0), (0,)],
   [(0,1), (1,)],
   [(1,0), (1,)],
   [(1,1), (0,)],
]

To add data to the dataset, we are using the addSample() method, which takes input and target. To add the samples, we loop through the xorModel array as shown below −

for input, target in xorModel:
   sds.addSample(input, target)

After executing, the following is the output we get −

python createdataset.py

C:\pybrain\pybrain\src>python createdataset.py
Input is:
[[0. 0.]
[0. 1.]
[1. 0.]
[1. 1.]]

Target is:
[[0.]
[1.]
[1.]
[0.]]

You can get the input and target details from the created dataset by simply using the 'input' and 'target' indices, as shown below −

print(sds['input'])
print(sds['target'])
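A SupervisedDataSet can also be iterated sample by sample, which is a handy sanity check. Below is a minimal sketch, assuming the sds dataset built above −

for input, target in sds:
   print(input, target) # prints each XOR input pair together with its target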


PyBrain – Working with Feed-Forward Networks

A feed-forward network is a neural network where the information between nodes moves in the forward direction and never travels backward. The feed-forward network is the first and simplest among the networks available in the artificial neural network. Information is passed from the input nodes to the hidden nodes and then to the output nodes.

In this chapter we are going to discuss how to −

Create a Feed-Forward Network
Add Connections and Modules to the FFN

Creating a Feed-Forward Network

You can use the Python IDE of your choice, for example PyCharm. Here, we are using Visual Studio Code to write the code and will execute it in the terminal.

To create a feed-forward network, we need to import it from pybrain.structure as shown below −

ffn.py

from pybrain.structure import FeedForwardNetwork
network = FeedForwardNetwork()
print(network)

Execute ffn.py as shown below −

C:\pybrain\pybrain\src>python ffn.py
FeedForwardNetwork-0
Modules:
[]
Connections:
[]

We have not added any modules and connections to the feed-forward network. Hence the network shows empty arrays for Modules and Connections.

Adding Modules and Connections

First we will create input, hidden and output layers and add them to the modules as shown below −

ffn.py

from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer

network = FeedForwardNetwork()

# creating layers for input => 2, hidden => 3 and output => 1
inputLayer = LinearLayer(2)
hiddenLayer = SigmoidLayer(3)
outputLayer = LinearLayer(1)

# adding the layers to the feed-forward network
network.addInputModule(inputLayer)
network.addModule(hiddenLayer)
network.addOutputModule(outputLayer)

print(network)

Output

C:\pybrain\pybrain\src>python ffn.py
FeedForwardNetwork-3
Modules:
[]
Connections:
[]

We are still getting empty modules and connections. We need to provide connections to the created modules.

Here is the code where we have created connections between the input, hidden and output layers and added the connections to the network −

ffn.py

from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer
from pybrain.structure import FullConnection

network = FeedForwardNetwork()

# creating layers for input => 2, hidden => 3 and output => 1
inputLayer = LinearLayer(2)
hiddenLayer = SigmoidLayer(3)
outputLayer = LinearLayer(1)

# adding the layers to the feed-forward network
network.addInputModule(inputLayer)
network.addModule(hiddenLayer)
network.addOutputModule(outputLayer)

# creating connections between input, hidden and output
input_to_hidden = FullConnection(inputLayer, hiddenLayer)
hidden_to_output = FullConnection(hiddenLayer, outputLayer)

# adding the connections to the network
network.addConnection(input_to_hidden)
network.addConnection(hidden_to_output)

print(network)

Output

C:\pybrain\pybrain\src>python ffn.py
FeedForwardNetwork-3
Modules:
[]
Connections:
[]

We are still not able to see the modules and connections.
Let us now add the final step, i.e., calling the sortModules() method, as shown below −

ffn.py

from pybrain.structure import FeedForwardNetwork
from pybrain.structure import LinearLayer, SigmoidLayer
from pybrain.structure import FullConnection

network = FeedForwardNetwork()

# creating layers for input => 2, hidden => 3 and output => 1
inputLayer = LinearLayer(2)
hiddenLayer = SigmoidLayer(3)
outputLayer = LinearLayer(1)

# adding the layers to the feed-forward network
network.addInputModule(inputLayer)
network.addModule(hiddenLayer)
network.addOutputModule(outputLayer)

# creating connections between input, hidden and output
input_to_hidden = FullConnection(inputLayer, hiddenLayer)
hidden_to_output = FullConnection(hiddenLayer, outputLayer)

# adding the connections to the network
network.addConnection(input_to_hidden)
network.addConnection(hidden_to_output)

network.sortModules()

print(network)

Output

C:\pybrain\pybrain\src>python ffn.py
FeedForwardNetwork-6
Modules:
[<LinearLayer 'LinearLayer-3'>, <SigmoidLayer 'SigmoidLayer-7'>, <LinearLayer 'LinearLayer-8'>]
Connections:
[<FullConnection 'FullConnection-4': 'SigmoidLayer-7' -> 'LinearLayer-8'>, <FullConnection 'FullConnection-5': 'LinearLayer-3' -> 'SigmoidLayer-7'>]

We are now able to see the modules and the connection details of the feed-forward network.
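Once sortModules() has been called, the network is ready to process inputs. The following is a minimal sketch; since the weights are randomly initialized, the exact output value will vary −

# feed a 2-dimensional input through the network; the result is the 1-dimensional output
print(network.activate([1, 2]))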


PyBrain – Examples

In this chapter, examples executed using PyBrain are listed.

Example 1

Working with the NOR truth table and testing it for correctness.

from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure import TanhLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Create a network with two inputs, three hidden, and one output
nn = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)

# Create a dataset that matches network input and output sizes:
norgate = SupervisedDataSet(2, 1)

# Create a dataset to be used for testing.
nortrain = SupervisedDataSet(2, 1)

# Add input and target values to the dataset
# Values for the NOR truth table
norgate.addSample((0, 0), (1,))
norgate.addSample((0, 1), (0,))
norgate.addSample((1, 0), (0,))
norgate.addSample((1, 1), (0,))

# Add input and target values to the test dataset
# Values for the NOR truth table
nortrain.addSample((0, 0), (1,))
nortrain.addSample((0, 1), (0,))
nortrain.addSample((1, 0), (0,))
nortrain.addSample((1, 1), (0,))

# Train the network with the norgate dataset.
trainer = BackpropTrainer(nn, norgate)

# Run the loop 1000 times to train it.
for epoch in range(1000):
   trainer.train()

trainer.testOnData(dataset=nortrain, verbose = True)

Output

C:\pybrain\pybrain\src>python testnetwork.py
Testing on data:
('out: ', '[0.887 ]')
('correct:', '[1 ]')
error: 0.00637334
('out: ', '[0.149 ]')
('correct:', '[0 ]')
error: 0.01110338
('out: ', '[0.102 ]')
('correct:', '[0 ]')
error: 0.00522736
('out: ', '[-0.163]')
('correct:', '[0 ]')
error: 0.01328650
('All errors:', [0.006373344564625953, 0.01110338071737218, 0.005227359234093431, 0.01328649974219942])
('Average error:', 0.008997646064572746)
('Max error:', 0.01328649974219942, 'Median error:', 0.01110338071737218)

Example 2

For the dataset, we are going to use the load_digits dataset from sklearn. It has 10 classes, i.e., digits to be predicted from 0-9. The total input data in X is 64-dimensional.

from sklearn import datasets
import matplotlib.pyplot as plt
from pybrain.datasets import ClassificationDataSet
from pybrain.utilities import percentError
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SoftmaxLayer
from numpy import ravel

digits = datasets.load_digits()
X, y = digits.data, digits.target

# the inputs are 64-dimensional arrays, and since the digits range from 0-9 there are 10 classes
ds = ClassificationDataSet(64, 1, nb_classes=10)

for i in range(len(X)):
   ds.addSample(ravel(X[i]), y[i]) # adding samples to the dataset

# Splitting the dataset: 25% as test data and 75% as training data
test_data_temp, training_data_temp = ds.splitWithProportion(0.25)

# Using the splitWithProportion() method on the dataset converts it to
# a supervised dataset, so we convert it back to a classification dataset
# in the step below.
test_data = ClassificationDataSet(64, 1, nb_classes=10)
for n in range(0, test_data_temp.getLength()):
   test_data.addSample( test_data_temp.getSample(n)[0], test_data_temp.getSample(n)[1] )

training_data = ClassificationDataSet(64, 1, nb_classes=10)
for n in range(0, training_data_temp.getLength()):
   training_data.addSample( training_data_temp.getSample(n)[0], training_data_temp.getSample(n)[1] )

test_data._convertToOneOfMany()
training_data._convertToOneOfMany()

# creating a network whose input and output dimensions are taken from the training data
net = buildNetwork(
   training_data.indim, 64, training_data.outdim, outclass=SoftmaxLayer
)

# Training the network
trainer = BackpropTrainer(
   net, dataset=training_data, momentum=0.1,
   learningrate=0.01, verbose=True, weightdecay=0.01
)

# Visualizing the error and validation data
trnerr, valerr = trainer.trainUntilConvergence(dataset=training_data, maxEpochs=10)
plt.plot(trnerr, 'b', valerr, 'r')
plt.show()

trainer.trainEpochs(10)
print('Percent Error on testData:', percentError(
   trainer.testOnClassData(dataset=test_data), test_data['class']
))

Output

Total error: 0.0432857814358
Total error: 0.0222276374185
Total error: 0.0149012052174
Total error: 0.011876985318
Total error: 0.00939854792853
Total error: 0.00782202445183
Total error: 0.00714707652044
Total error: 0.00606068893793
Total error: 0.00544257958975
Total error: 0.00463929281336
Total error: 0.00441275665294
('train-errors:', '[0.043286 , 0.022228 , 0.014901 , 0.011877 , 0.009399 , 0.007822 , 0.007147 , 0.006061 , 0.005443 , 0.004639 , 0.004413 ]')
('valid-errors:', '[0.074296 , 0.027332 , 0.016461 , 0.014298 , 0.012129 , 0.009248 , 0.008922 , 0.007917 , 0.006547 , 0.005883 , 0.006572 , 0.005811 ]')
Percent Error on testData: 3.34075723830735
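Once trained, the network can classify an individual digit as well. Below is a minimal sketch, assuming net, X and y from the example above; taking argmax over the 10 softmax outputs gives the predicted class −

sample = ravel(X[0])
predicted = net.activate(sample).argmax() # index of the most probable of the 10 classes
print('predicted:', predicted, 'actual:', y[0])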


PyBrain – Layers

Layers are basically sets of functions that are used on the hidden layers of a network. We will go through the following details about layers in this chapter −

Understanding Layers
Creating a Layer in PyBrain

Understanding Layers

We have seen examples earlier where we have used the following layers −

TanhLayer
SoftmaxLayer

Example using TanhLayer

Below is one example where we have used TanhLayer for building a network −

testnetwork.py

from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure import TanhLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Create a network with two inputs, three hidden, and one output
nn = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)

# Create a dataset that matches network input and output sizes:
norgate = SupervisedDataSet(2, 1)

# Create a dataset to be used for testing.
nortrain = SupervisedDataSet(2, 1)

# Add input and target values to the dataset
# Values for the NOR truth table
norgate.addSample((0, 0), (1,))
norgate.addSample((0, 1), (0,))
norgate.addSample((1, 0), (0,))
norgate.addSample((1, 1), (0,))

# Add input and target values to the test dataset
# Values for the NOR truth table
nortrain.addSample((0, 0), (1,))
nortrain.addSample((0, 1), (0,))
nortrain.addSample((1, 0), (0,))
nortrain.addSample((1, 1), (0,))

# Train the network with the norgate dataset.
trainer = BackpropTrainer(nn, norgate)

# Run the loop 1000 times to train it.
for epoch in range(1000):
   trainer.train()

trainer.testOnData(dataset=nortrain, verbose = True)

Output

The output of the above code is as follows −

python testnetwork.py

C:\pybrain\pybrain\src>python testnetwork.py
Testing on data:
('out: ', '[0.887 ]')
('correct:', '[1 ]')
error: 0.00637334
('out: ', '[0.149 ]')
('correct:', '[0 ]')
error: 0.01110338
('out: ', '[0.102 ]')
('correct:', '[0 ]')
error: 0.00522736
('out: ', '[-0.163]')
('correct:', '[0 ]')
error: 0.01328650
('All errors:', [0.006373344564625953, 0.01110338071737218, 0.005227359234093431, 0.01328649974219942])
('Average error:', 0.008997646064572746)
('Max error:', 0.01328649974219942, 'Median error:', 0.01110338071737218)

Example using SoftmaxLayer

Below is one example where we have used SoftmaxLayer for building a network −

from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import SoftmaxLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Create a network with two inputs, three hidden, and one output
nn = buildNetwork(2, 3, 1, bias=True, hiddenclass=SoftmaxLayer)

# Create a dataset that matches network input and output sizes:
norgate = SupervisedDataSet(2, 1)

# Create a dataset to be used for testing.
nortrain = SupervisedDataSet(2, 1)

# Add input and target values to the dataset
# Values for the NOR truth table
norgate.addSample((0, 0), (1,))
norgate.addSample((0, 1), (0,))
norgate.addSample((1, 0), (0,))
norgate.addSample((1, 1), (0,))

# Add input and target values to the test dataset
# Values for the NOR truth table
nortrain.addSample((0, 0), (1,))
nortrain.addSample((0, 1), (0,))
nortrain.addSample((1, 0), (0,))
nortrain.addSample((1, 1), (0,))

# Train the network with the norgate dataset.
trainer = BackpropTrainer(nn, norgate)

# Run the loop 1000 times to train it.
for epoch in range(1000):
   trainer.train()

trainer.testOnData(dataset=nortrain, verbose = True)

Output

The output is as follows −

C:\pybrain\pybrain\src>python example16.py
Testing on data:
('out: ', '[0.918 ]')
('correct:', '[1 ]')
error: 0.00333524
('out: ', '[0.082 ]')
('correct:', '[0 ]')
error: 0.00333484
('out: ', '[0.078 ]')
('correct:', '[0 ]')
error: 0.00303433
('out: ', '[-0.082]')
('correct:', '[0 ]')
error: 0.00340005
('All errors:', [0.0033352368788838365, 0.003334842961037291, 0.003034328685718761, 0.0034000458892589056])
('Average error:', 0.0032761136037246985)
('Max error:', 0.0034000458892589056, 'Median error:', 0.0033352368788838365)

Creating a Layer in PyBrain

In PyBrain, you can create your own layer as follows −

To create a layer, you need to use the NeuronLayer class as the base class for all types of layers.

Example

from pybrain.structure.modules.neuronlayer import NeuronLayer

class LinearLayer(NeuronLayer):
   def _forwardImplementation(self, inbuf, outbuf):
      outbuf[:] = inbuf
   def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
      inerr[:] = outerr

To create a layer, we need to implement two methods: _forwardImplementation() and _backwardImplementation().

_forwardImplementation() takes in 2 arguments, inbuf and outbuf, which are Scipy arrays. Their sizes depend on the layer's input and output dimensions.

_backwardImplementation() is used to calculate the derivative of the output with respect to the given input.

So to implement a layer in PyBrain, this is the skeleton of the layer class −

from pybrain.structure.modules.neuronlayer import NeuronLayer

class NewLayer(NeuronLayer):
   def _forwardImplementation(self, inbuf, outbuf):
      pass
   def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
      pass

In case you want to implement a quadratic polynomial function as a layer, you can do so as follows. Consider the polynomial function −

f(x) = 3x^2

The derivative of the above polynomial function is −

f'(x) = 6x

The final layer class for the above polynomial function will be as follows −

testlayer.py

from pybrain.structure.modules.neuronlayer import NeuronLayer

class PolynomialLayer(NeuronLayer):
   def _forwardImplementation(self, inbuf, outbuf):
      outbuf[:] = 3*inbuf**2
   def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
      inerr[:] = 6*inbuf*outerr

Now let us make use of the layer created, as shown below −

testlayer1.py

from testlayer import PolynomialLayer
from pybrain.tools.shortcuts import buildNetwork
from pybrain.tests.helpers import gradientCheck

n = buildNetwork(2, 3, 1, hiddenclass=PolynomialLayer)
n.randomize()

gradientCheck(n)

gradientCheck() tests whether the layer is working fine or not. We need to pass the network where the layer is used to gradientCheck(n). It gives the output "Perfect gradient" if the layer is working fine.

Output

C:\pybrain\pybrain\src>python testlayer1.py
Perfect gradient
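Since a layer is itself a module, it can also be activated standalone to see the forward function at work. Below is a minimal sketch, assuming testlayer.py from above −

from testlayer import PolynomialLayer

layer = PolynomialLayer(3) # a standalone 3-unit layer
print(layer.activate([1, 2, 3])) # f(x) = 3*x**2 elementwise: [ 3. 12. 27.]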