Hot questions for using neural networks in neurolab

Question:

I am using neurolab in Python to create a neural network. I create a newff network and use the default train_bfgs training function. My problem is that, a lot of the time, training ends long before either the epochs run out or the error goal is reached. I looked around and found a post on neurolab's GitHub page that partly explains why this happens. If I rerun the program a few times, the training eventually catches on and the error falls (presumably some random starting weights are much better than others). What I want is a check in the training loop: if the error is still too high and the number of epochs trained is nowhere near the total, retrain the network (much like rerunning the program, perhaps by resetting the network to fresh default weights).

Here is what I have written, but obviously it doesn't work:

epochs = 50
trainingComplete = False
while not trainingComplete:
    error = net.train(trainingData, TS, epochs=epochs, show=10, goal=0.001)
    if len(error) < 0.8 * epochs:
        if len(error) > 0 and min(error) < 0.01:
            trainingComplete = True
        else:
            net.reset()
            continue
    else:
        trainingComplete = True

What happens is that when the first condition is met (too few training epochs), net.reset() executes before the loop restarts, but from then on no training happens at all and this becomes an infinite loop. Any idea what I am missing?

Thanks


Answer:

Since this went unanswered for a few days, and I think that's bad for SO, I took it upon myself to find a working workaround. I tried restarting the script with os.execv(__file__, sys.argv), but on my Mac that always hits a permission problem, and it is too dirty anyway, so here is how I get it to work now.

# Train network
print('Starting training....')
trainingComplete = False
while not trainingComplete:
    error = net.train(trainingData, TS, epochs=epochs, show=10, goal=0.001)
    if len(error) < 0.8 * epochs:
        if len(error) > 0 and min(error) < 0.01:
            trainingComplete = True
        else:
            print('Restarting....')
            net = createNeuralNetwork(trainingData, [hidden], 1)
            net.trainf = train_bfgs
    else:
        trainingComplete = True

It's pretty hacky, but it works:

Starting training....
Restarting....
Restarting....
Restarting....
Restarting....
Restarting....
Restarting....
Restarting....
Restarting....
Epoch: 10; Error: 1.46314116045;
Epoch: 20; Error: 0.759613243435;
Epoch: 30; Error: 0.529574731856;
.
.

Hope that helps someone.

Question:

I am pretty new to using Python and neurolab, and I have a problem with the training of my feed-forward neural network. I built the net as follows:

net = nl.net.newff([[-1,1]]*64, [60,1])
net.init()
testerr = net.train(InputT, TargetT, epochs=100, show=1)

and my target output is a vector of values between 0 and 4. When I use nl.train.train_bfgs I see in the console:

testerr = net.train(InputT, TargetT, epochs=10, show=1)
Epoch: 1; Error: 55670.4462766;
Epoch: 2; Error: 55649.5;

As you can see, I set the number of epochs to 100, but training stops at the second epoch, and after testing the net with Netresults=net.sim(InputCross) I get a test output vector full of 1s (totally wrong). If I use the other training functions, I get the same all-1s test output, but in that case the epochs reach the number I set while the displayed error never changes. The same happens if the target output vector is scaled to between -1 and 1. Any suggestions? Thank you very much!


Answer:

Finally, after a few hours with the same problem, I more or less solved it.

Here is what is happening: neurolab uses train_bfgs as its default training algorithm. train_bfgs runs fmin_bfgs from scipy.optimize, passing it a callback function, epochf, as an argument. This function MUST be run after each training iteration for neurolab to exit properly. Sadly, fmin_bfgs fails to call it when the optimization "terminated successfully" (you can set self.kwargs['disp'] = 1 in /neurolab/train/spo.py to see the output from scipy). I have not investigated further why fmin_bfgs returns "optimization terminated successfully", but it has to do with the error converging.

I have tried Python 2.7 and Python 3 with scipy versions 0.12 to 0.15 without this behavior changing (as was suggested elsewhere).

My solution is simply to switch from train_bfgs to ordinary train_gd (gradient descent) training, but I guess any other training algorithm would do.

net = nl.net.newff(inputNodes, [hidden, output])

# change training function
net.trainf = nl.train.train_gd

For completeness, the code I tested on was:

import neurolab as nl

hidden = 10
output = 1
test = [[0], [0], [0], [1], [1]]
net = nl.net.newff([[0, 1]], [hidden, output])
err = net.train(test, test, epochs=500, show=1)

The problem only occurs sometimes, so repeated tests are needed.
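Putting the two pieces together, a self-contained sketch of the same test with the workaround applied (toy data as above):

import neurolab as nl

test = [[0], [0], [0], [1], [1]]
net = nl.net.newff([[0, 1]], [10, 1])
net.trainf = nl.train.train_gd  # sidestep the fmin_bfgs early-exit issue
err = net.train(test, test, epochs=500, show=100, goal=0.01)
print('final error:', err[-1])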

Edit: the problem is also described at https://github.com/zueve/neurolab/issues/25

Good luck!

Question:

I use the Elman recurrent network from neurolab to predict a time series of continuous values. The network is trained on a sequence such that the input is the value at index i and the target is the value at index i+1.

To make predictions beyond the immediate next time step, the output of the net is fed back as input. If, for example, I intend to predict the value at i+5, I proceed as follows.

  1. Input the value from i
  2. Take the output and feed it to the net as the next input value (e.g. i+1)
  3. Repeat steps 1 and 2 four more times
  4. The output is a prediction of the value at i+5

So for predictions beyond the immediate next time step, recurrent networks must be activated with the output from a previous activation.

In most examples, however, the network is fed with an already complete sequence. See, for example, the functions train and sim in the example behind the link above. The first function trains the network with an already complete list of examples and the second function activates the network with a complete list of input values.

After some digging in neurolab, I found the function step, which returns a single output for a single input. Results from using step suggest, however, that it does not retain the activation of the recurrent layer, which is crucial for recurrent networks.

How can I activate a recurrent Elman network in neurolab with a single input such that it maintains its internal state for the next single input activation?


Answer:

It turns out it is quite normal for output that is generated from previous output to converge towards a constant value sooner or later. In effect, the output of a network cannot depend only on its own previous output.
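For illustration, here is a minimal closed-loop sketch of the feedback scheme described in the question, on toy data (the sine series and layer sizes are made up; newelm and step are from neurolab's public API). Given the question's caveat that step may not carry the recurrent state between calls, looped predictions like these tend to flatten towards a constant, as described above:

import neurolab as nl
import numpy as np

# Toy series: a stand-in for the asker's continuous time series
series = 0.5 * np.sin(np.linspace(0, 6 * np.pi, 120))
inp = series[:-1].reshape(-1, 1)  # value at index i
tar = series[1:].reshape(-1, 1)   # target: value at index i+1

# Elman recurrent net: one recurrent hidden layer, linear output
net = nl.net.newelm([[-1, 1]], [10, 1],
                    [nl.trans.TanSig(), nl.trans.PureLin()])
net.train(inp, tar, epochs=200, show=100, goal=0.01)

# Closed-loop prediction: feed each output back in as the next input
value = series[-1]
for _ in range(5):
    value = net.step([value])[0]
print('prediction at i+5:', value)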

Question:

Is there a simple way to show the bias or weight for each property that I feed into an ANN developed using neurolab, after it has already been trained?


Answer:

Yes, you can see every layer's weights and biases using:

net.layers[i].np['w'] for the weights

net.layers[i].np['b'] for the biases

To change them manually, append [:] and assign a NumPy array of the matching shape.

Here's some sample test code that I used on a simple network with 3 layers (1 input layer, 1 hidden layer and 1 output layer).

import neurolab as nl
import numpy as np

net = nl.net.newff([[0, 1]] * 3, [4, 2])

net.save("test.net")

net = nl.load("test.net")

# show layer weights and biases
for i in range(len(net.layers)):
    print("Net layer", i)
    print(net.layers[i].np['w'])
    print("Net bias", i)
    print(net.layers[i].np['b'])

# try setting layer weights (4 hidden neurons x 3 inputs)
net.layers[0].np['w'][:] = np.array([[0, 1, 2],
                                     [3, 4, 5],
                                     [4, 5, 6],
                                     [6, 7, 8]])

# show layer weights and biases again
for i in range(len(net.layers)):
    print("Net layer", i)
    print(net.layers[i].np['w'])
    print("Net bias", i)
    print(net.layers[i].np['b'])
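Biases can be overwritten the same way. Continuing from the sample above, a one-line sketch (the array's length must match the layer's neuron count):

# Zero the 4 hidden-layer biases in place
net.layers[0].np['b'][:] = np.zeros(4)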

Question:

I just installed Neurolab and I tried one of the provided examples (Feed Forward Multilayer Perceptron (newff)):

import neurolab as nl
import numpy as np

# Create train samples
x = np.linspace(-7, 7, 20)
y = np.sin(x) * 0.5

size = len(x)

inp = x.reshape(size,1)
tar = y.reshape(size,1)

# Create network with 2 layers and random initialized
net = nl.net.newff([[-7, 7]],[5, 1])

# Train network
error = net.train(inp, tar, epochs=500, show=100, goal=0.02)

# Simulate network
out = net.sim(inp)

But I encounter this error.

Traceback (most recent call last):
  File "C:/Python27/newff.py", line 17, in <module>
    error = net.train(inp, tar, epochs=500, show=100, goal=0.02)
  File "build\bdist.win32\egg\neurolab\core.py", line 165, in train
    return self.trainf(self, *args, **kwargs)
  File "build\bdist.win32\egg\neurolab\core.py", line 349, in __call__
    train(net, *args)
  File "build\bdist.win32\egg\neurolab\train\spo.py", line 73, in __call__
    from scipy.optimize import fmin_bfgs
ImportError: No module named scipy.optimize

Answer:

You removed the training routine call, which is probably what sets the .ci attribute in the network object, so the error was not in the example but in your modification.

Update (OP changed the question)

Now the problem is extremely simple: you do not have SciPy installed. Installing it (e.g. pip install scipy) should make the example run.

Question:

I am using neurolab to simulate a neural network for binary classification of a dataset.

I have the data in a dataframe. I am creating a neural network with one input value, one output value, and 10 hidden nodes.

df_train = pd.read_csv("training.csv")
target = df_train['outputcol'] # already encoded into 0's and 1's
inp = df_train.INPUT_AMT.values.reshape(-1, 1)
output = target.values.reshape(-1, 1)  # reshaping into a matrix

Then I create the model (min_input and max_input are calculated from the data):

net = nl.net.newff([[min_input, max_input]], [10, 1])
error = net.train(inp,output)
out = net.sim(inp)

Here are the contents of the variable out:

array([[ 0.46434608],
       [ 0.47084458],
       [ 0.46583954],
       ...,
       [ 0.46898838],
       [ 0.22519667],
       [ 0.46541441]])

How is this supposed to be interpreted?


Answer:

The answer on our Piazza page (UC Berkeley, INFO290t) seems correct:

I believe you're supposed to round the output to zero or one and then compare the rounded results with your target to determine the accuracy.

"If the output of your classifier is a continuous value between 0 and 1, such as with a neural network with sigmoid output node, then you can round the output. For example, if a prediction of 0.80 was made, this would round to 1 (Romney)... For Neural Networks, you should map Obama to 0 and Romney to 1."

Question:

I already know how to train a neural net with NeuroLab and get the error every X epochs, but I want to get the final error after training the net.

nn = nl.net.newff([[min_val, max_val]], [40, 26, 1])

# Gradient descent
nn.trainf = nl.train.train_gd

# Train the neural network
error_progress = nn.train(data, labels, epochs=6000, show=100, goal=0.0005)

# CODE TO GET THE ERROR AFTER TRAINING HERE
# final_error = ?

EDIT: By final_error I mean the final value of the Error field that the net.train command prints (ONLY the error value, not the complete string), which appears in the following format:

Epoch: 1700; Error: 0.0005184049;


Answer:

Okay, the best way I have found so far is to save the error progress and then take the last item in the array.

# Train the neural network
error_progress = net.train(data, labels, epochs=10000, show=100, goal=0.01)

# THIS IS THE LAST ERROR VALUE THE NET OUTPUTS
final_error = error_progress[-1]
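If you want it echoed in the same format as the training log, a trivial sketch:

# error_progress has one entry per recorded epoch; the last entry is the
# error at the moment training stopped (goal reached or epochs exhausted)
print('Final error: %s;' % final_error)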