## Hot questions: using neural networks in a for loop

Question:

I have been learning about ANNs, but the book I'm reading has its examples in Python. The problem is that I have never written Python, and these lines of code are too hard for me to understand:

```python
sizes = [3, 2, 4]
self.weights = [np.random.randn(y, x)
                for x, y in zip(sizes[:-1], sizes[1:])]
```

I read up on it and found that `randn(y, x)` returns a `y`-by-`x` array populated with samples from the standard normal distribution, that `zip()` pairs up the elements of two sequences, that `sizes[:-1]` returns the list without its last element, and that `sizes[1:]` returns the list without its first element.

But with all of this I still can't explain to myself what this would generate.

`sizes[:-1]` will return the sublist `[3,2]` (that is, all the elements except the last one).

`sizes[1:]` will return the sublist `[2,4]` (that is, all the elements except the first one).

`zip([a,b], [c,d])` yields `(a,c), (b,d)` (in Python 3, `zip` returns a lazy iterator, but the list comprehension consumes it the same way).

So zipping the two lists above gives you `[(3,2), (2,4)]`
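You can confirm this in a Python shell:

```python
sizes = [3, 2, 4]
pairs = list(zip(sizes[:-1], sizes[1:]))
print(pairs)  # [(3, 2), (2, 4)]
```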

The construction of weights is a list comprehension. Therefore this code is equivalent to

```python
weights = []
for x, y in [(3, 2), (2, 4)]:
    weights.append(np.random.randn(y, x))
```

So the final result would be the same as

```python
[np.random.randn(2, 3),
 np.random.randn(4, 2)]
```
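A quick shape check confirms the layer-to-layer dimensions (a 2×3 and a 4×2 weight matrix):

```python
import numpy as np

sizes = [3, 2, 4]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
print([w.shape for w in weights])  # [(2, 3), (4, 2)]
```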

Question:

I'm new to neural networks and I'm trying to adapt the example in the above tutorial to my problem. I'm using multiple regression to find coefficients for 3 different sets of data, and I then calculate the R-squared value for each set. I'm trying to create a neural network that will adjust the coefficient values to get the R-squared value as close to 100 as possible.

This is how I establish the coefficients and find the R-squared value for them. All 3 sets of coefficients use these same methods:

```csharp
Calculations calc = new Calculations();
Vector<double> lowRiskCoefficient = MultipleRegression.QR(
    Matrix<double>.Build.DenseOfColumnArrays(
        lowRiskShortRatingList.ToArray(),
        lowRiskMediumRatingList.ToArray(),
        lowRiskLongRatingList.ToArray()),
    Vector<double>.Build.Dense(lowRiskWeekReturnList.ToArray()));
decimal lowRiskShortCoefficient = Convert.ToDecimal(lowRiskCoefficient[0]);
decimal lowRiskMediumCoefficient = Convert.ToDecimal(lowRiskCoefficient[1]);
decimal lowRiskLongCoefficient = Convert.ToDecimal(lowRiskCoefficient[2]);
List<decimal> lowRiskWeekReturnDecimalList = new List<decimal>(lowRiskWeekReturnList.Count);
List<decimal> lowRiskPredictedReturnList = new List<decimal>(lowRiskWeekReturnList.Count);
List<decimal> lowRiskResidualValueList = new List<decimal>(lowRiskWeekReturnList.Count);
for (int i = 0; i < lowRiskWeekReturnList.Count; i++)
{
    decimal lowRiskPredictedValue =
        (Convert.ToDecimal(lowRiskShortRatingList.ElementAtOrDefault(i)) * lowRiskShortCoefficient) +
        (Convert.ToDecimal(lowRiskMediumRatingList.ElementAtOrDefault(i)) * lowRiskMediumCoefficient) +
        (Convert.ToDecimal(lowRiskLongRatingList.ElementAtOrDefault(i)) * lowRiskLongCoefficient);

    // Populate the lists used below; without these appends the
    // sums of squares would be computed over empty lists.
    decimal lowRiskActualValue = Convert.ToDecimal(lowRiskWeekReturnList[i]);
    lowRiskWeekReturnDecimalList.Add(lowRiskActualValue);
    lowRiskPredictedReturnList.Add(lowRiskPredictedValue);
    lowRiskResidualValueList.Add(lowRiskActualValue - lowRiskPredictedValue);
}
decimal lowRiskTotalSumofSquares = calc.calculateTotalSumofSquares(lowRiskWeekReturnDecimalList, lowRiskWeekReturnDecimalList.Average());
decimal lowRiskTotalSumofRegression = calc.calculateTotalSumofRegression(lowRiskPredictedReturnList, lowRiskWeekReturnDecimalList.Average());
decimal lowRiskTotalSumofErrors = calc.calculateTotalSumofErrors(lowRiskResidualValueList);
decimal lowRiskRSquared = lowRiskTotalSumofRegression / lowRiskTotalSumofSquares;
```
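For reference, the R-squared computation itself (regression sum of squares over total sum of squares) is easy to sanity-check outside the C# pipeline. A minimal NumPy sketch with made-up numbers, not the asker's helper class:

```python
import numpy as np

actual = np.array([1.0, 2.0, 3.0, 4.0])
predicted = np.array([1.1, 1.9, 3.2, 3.8])

mean = actual.mean()
sst = np.sum((actual - mean) ** 2)      # total sum of squares
ssr = np.sum((predicted - mean) ** 2)   # regression sum of squares
r_squared = ssr / sst
print(r_squared)
```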

This is the example that performs the training, and I'm currently stuck on how to change it to match what I'm trying to do.

```csharp
private void button1_Click(object sender, EventArgs e)
{
    net = new NeuralNet();
    double high, mid, low;
    high = .9;
    low = .1;
    mid = .5;
    // initialize with
    //   2 perception neurons
    //   2 hidden layer neurons
    //   1 output neuron
    net.Initialize(1, 2, 2, 1);
    double[][] input = new double[4][];
    input[0] = new double[] { high, high };
    input[1] = new double[] { low, high };
    input[2] = new double[] { high, low };
    input[3] = new double[] { low, low };
    double[][] output = new double[4][];
    output[0] = new double[] { low };
    output[1] = new double[] { high };
    output[2] = new double[] { high };
    output[3] = new double[] { low };
    double ll, lh, hl, hh;
    int count;
    count = 0;
    do
    {
        count++;
        for (int i = 0; i < 100; i++)
            net.Train(input, output);
        net.ApplyLearning();

        net.PerceptionLayer[0].Output = low;
        net.PerceptionLayer[1].Output = low;
        net.Pulse();
        ll = net.OutputLayer[0].Output;

        net.PerceptionLayer[0].Output = high;
        net.PerceptionLayer[1].Output = low;
        net.Pulse();
        hl = net.OutputLayer[0].Output;

        net.PerceptionLayer[0].Output = low;
        net.PerceptionLayer[1].Output = high;
        net.Pulse();
        lh = net.OutputLayer[0].Output;

        net.PerceptionLayer[0].Output = high;
        net.PerceptionLayer[1].Output = high;
        net.Pulse();
        hh = net.OutputLayer[0].Output;
    }
    while (hh > mid || lh < mid || hl < mid || ll > mid);
    MessageBox.Show((count * 100).ToString() + " iterations required for training");
}
```

How do I use this information to create a neural network that finds the coefficients which will, in turn, give an R-squared value as close to 100 as possible?

Instead of building one yourself, you can use the Neuroph framework on .NET via Neuroph.NET, available here: https://github.com/starhash/Neuroph.NET/releases/tag/v1.0-beta

It is a light conversion of the original Neuroph library, which was written for the Java platform.

Hope this helps.

Question:

I have a working neural network loop, so I can run neural networks with a predetermined number of nodes in the hidden layer (`nodes_list`). I then calculate the area under the ROC curve for each node count and put it in a list (`roc_outcomes`) for plotting purposes. However, I would like to run this loop 5 times to get an average area under the ROC curve for each of three models (model 1: 20 hidden nodes, model 2: 28 hidden nodes, model 3: 38 hidden nodes). This works fine for a single model, but with more than one model, instead of iterating over model 1 five times, then model 2 five times, then model 3 five times, it iterates over model 1, then model 2, then model 3, and does that five times. The purpose of the nested loop is to run each neural network model 5 times, put the AUC of each iteration into a list, take the mean of that list, and put the mean into a new list. Ultimately I would like a list of three numbers (one per model), each being the mean AUC over the 5 iterations of that model. Hopefully I explained this well; please ask for any clarification.

Here is my code:

```python
nodes_list = [20, 28, 38]  # number of nodes in the hidden layer per model
roc_outcomes = []          # list of ROC AUC scores

for i in np.arange(1, 6):
    for nodes in nodes_list:
        # Compile model
        # Fit model
        model.fit(X, y, validation_split=0.33, epochs=epochs,
                  callbacks=early_stopping_monitor, verbose=True)

        # Get predicted probabilities
        pred_prob = model.predict_proba(X)[:, 1]

        # Calculate area under the curve (logit_roc_auc)
        logit_roc_auc = roc_auc_score(y[:, 1], pred_prob)
        # Append ROC scores to the roc_outcomes list
        roc_outcomes.append(logit_roc_auc)

# Get the mean of that list
mean_roc = np.mean(roc_outcomes)
# Append to another list
mean_roc_outcomes = []
mean_roc_outcomes.append(mean_roc)
```

Swap the nesting of the two loops, so the repetitions run inside the per-model loop:

```python
for nodes in nodes_list:
    for i in range(0, 5):
```

example:

```python
myList = ['a', 'b', 'c']
for item in myList:
    for i in range(0, 5):
        print(item, end=", ")
```

output:

```a, a, a, a, a, b, b, b, b, b, c, c, c, c, c,
```
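Applied to the original problem, the corrected nesting looks like this; `get_auc` is a hypothetical stand-in for the compile/fit/`roc_auc_score` steps:

```python
import numpy as np

def get_auc(nodes):
    # Placeholder for: compile model, fit, predict, roc_auc_score
    return nodes / 100.0

nodes_list = [20, 28, 38]
mean_roc_outcomes = []  # one mean per model

for nodes in nodes_list:      # outer loop: models
    roc_outcomes = []
    for i in range(5):        # inner loop: 5 repetitions of the same model
        roc_outcomes.append(get_auc(nodes))
    mean_roc_outcomes.append(np.mean(roc_outcomes))

# mean_roc_outcomes now holds three numbers, one per model
```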

Question:

Attempting to write code to generate layers in a neural network, I'm trying to work out how to assign a value to a variable belonging to the neuron itself. A standard neuron in my code is represented by a class called XORNeuron, and I instantiate this class in a for loop to generate as many neurons as the layer is assigned when it is created.

I'm using an array to do this, but I was wondering how to pass each contained neuron the number of inputs the layer is told it has. The number of neurons and the number of inputs are both arguments of the layer constructor, so new layers can be created easily and tweaked with regard to their size and relative number of inputs.

The weights are all generated automatically for each input in a for loop in the Neuron class itself, based on a variable the Neuron class holds called "numInputs". I'm attempting to write a for loop that generates a new Neuron instance for each neuron the layer is told it holds, and then assigns the layer's input count to each neuron's "numInputs" variable, so the weights can be generated correctly.

My code is as such:

```java
public class XORlayer {

    // Constructor to create a layer of neurons.
    // Means we only have to set it up once and can feed it
    // settings from then on for tweaking.
    XORlayer(int numNeurons, int inpNum)
    {
        XORNeuron[] layerLength = new XORNeuron[numNeurons];

        for (int neuCount = 1; neuCount <= numNeurons; neuCount++)
        {
            layerLength[neuCount - 1] = new XORNeuron();
        }
    }
}
```

You can either call a setter on each created neuron:

```java
XORNeuron[] layerLength = new XORNeuron[numNeurons];

for (int neuCount = 0; neuCount < numNeurons; neuCount++) {
    layerLength[neuCount] = new XORNeuron();
    layerLength[neuCount].setNumInput(inpNum);
}
```

or add the input count to the neuron's constructor, so you can write:

```java
layerLength[neuCount] = new XORNeuron(inpNum);
```

(Note: I changed the array indexing to a 0-based for loop, since that's idiomatic for Java).

Question:

I have 53 xls tables (ch_1, ch_2, ...) which I use as input for a neural network. Afterwards I write the NN results to a new xls and csv file.

```matlab
clc
clear all
files = dir('*.xls');
for i = 1:length(files(:,1))
    fprintf('step: %d\n', i);
    datanameXls = ['channel_' num2str(i) '.xls'];
    datanameCsv = ['channel_' num2str(i) '.csv'];

    % aa, inputs, targets and o are loaded elsewhere (omitted in the question)
    a17 = aa(:,1);
    b17 = aa(:,4);
    p = size(a17);
    c17 = zeros(144,31);

    % Create a fitting network
    hiddenLayerSize = 10;
    net = fitnet(hiddenLayerSize);

    % Set up division of data for training, validation, testing
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;

    % Train the network
    [net, tr] = train(net, inputs, targets);
    % Test the network
    outputs = net(inputs);
    A = [o, outputs'];
    A(A<0) = 0;
    csvwrite(datanameCsv, A);
    fprintf('csv is written\n');
    xlswrite(datanameXls, A);
    fprintf('xls is written\n');
end
```

The problem is: when I run this program with one to nine tables, the results saved via `xlswrite` are correct, but when I run it with 52 tables I get a wrong table, because, for example, ch_1 is overwritten with ch_10. Any idea?

I solved my problem: `dir` reads ch_10 through ch_19 before ch_1. I renamed all my files and it now works correctly. I did the following to rename all the files:

```matlab
clc
clear all
files = dir('*.xls');
for k = 1:length(files(:,1))
    oldFileName = sprintf('ch_%dMonth_1.xls', k);
    newFileName = sprintf('%03d.xls', k);
    movefile(oldFileName, newFileName);
end
```
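The root cause is that `dir` returns names in lexicographic order, in which `ch_10` sorts before `ch_2`; zero-padding the numbers restores the intended order. A quick illustration of the same effect (in Python for brevity):

```python
names = ['ch_1', 'ch_2', 'ch_10', 'ch_11']
print(sorted(names))   # ['ch_1', 'ch_10', 'ch_11', 'ch_2'] -- lexicographic

# Zero-padded names sort numerically and lexicographically the same way
padded = ['%03d' % int(n.split('_')[1]) for n in names]
print(sorted(padded))  # ['001', '002', '010', '011']
```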

Question:

I'm training a neural network and want my program to feed forward the first 10 examples, then backprop, then loop over next 10 examples and backprop and so on.

Right now I have a code that loops over my whole data set for 5 epochs, but it would be better if it looped in small batches (also for 5 epochs for example).

My question is how to modify the loop I have so that it runs over the first 10 values of `i`, then does the `Net.backward(rate, mse)` step and resets the error sum (`sum_error = 0`), then runs over the next 10 values of `i`, and so on for the whole dataset (I have 800 examples). I don't know how to achieve that. Should I insert some kind of counter, like `i = i + 1`?

```python
for j in range(5):
    for i, pattern in enumerate(X):
        sum_error = sum_error + np.square(Net.net_error(y[i], X[i]))
    mse = sum_error / len(X)
    print(f" # {str(j)}{mse}")
    Net.backward(rate, mse)
    sum_error = 0
```

The code that is responsible for `net_error` part:

```python
def feed_forward(self, X):
    self.z1 = np.dot(X, self.input_to_hidden1_w)
    self.z1_a = self.activation(self.z1)

    self.z2 = np.dot(self.z1_a, self.hidden1_to_hidden2_w)
    self.z2_a = self.activation(self.z2)

    self.z3 = np.dot(self.z2_a, self.hidden2_to_output_w)
    self.output = self.activation(self.z3)
    return self.output

def net_error(self, y, X):
    net_error = y - self.feed_forward(X)
    return net_error
```

As to your original question, I think you want to do something like:

```python
num_epochs = 5
batch_size = 10

for epoch in range(num_epochs):
    perm_idx = np.random.permutation(len(X))

    for ix in range(0, len(perm_idx), batch_size):
        batch_indices = perm_idx[ix:ix + batch_size]

        sum_error = 0
        for i in batch_indices:
            sum_error += np.square(Net.net_error(y[i], X[i]))

        Net.backward(rate, sum_error / len(X))
```

Note that I'm using `permutation` to draw a random subset of the data for each batch, which may help with biases.
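The batching pattern on its own, separated from the network code, can be sketched with plain lists (toy data of 8 items and `batch_size = 3` just for illustration):

```python
data = list(range(8))
batch_size = 3

# Step through the data in strides of batch_size; the last batch may be shorter
batches = [data[ix:ix + batch_size] for ix in range(0, len(data), batch_size)]
print(batches)  # [[0, 1, 2], [3, 4, 5], [6, 7]]
```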

Question:

The increase in network size is not the cause of the problem.

Here is my code:

```python
for i in [32, 64, 128, 256, 512]:
    for j in [32, 64, 128, 256, 512]:
        for k in [32, 64, 128, 256, 512]:
            for l in [0.1, 0.2, 0.3, 0.4, 0.5]:

                model = Sequential()

                model.compile(~)

                hist = model.fit(~)

                plt.savefig(str(count) + '.png')
                plt.clf()

                f = open(str(count) + '.csv', 'w')
                text = ~
                f.write(text)
                f.close()
                count += 1
                print()
                print("count :" + str(count))
                print()
```

I started `count` at 0.

When `count` is 460~479, the epoch times are:

```Train on 7228 samples, validate on 433 samples
Epoch 1/10
- 2254s - loss: 0.0045 - acc: 1.3835e-04 - val_loss: 0.0019 - val_acc: 0.0000e+00
Epoch 2/10
- 86s - loss: 0.0020 - acc: 1.3835e-04 - val_loss: 0.0030 - val_acc: 0.0000e+00
Epoch 3/10
- 85s - loss: 0.0017 - acc: 1.3835e-04 - val_loss: 0.0016 - val_acc: 0.0000e+00
Epoch 4/10
- 86s - loss: 0.0015 - acc: 1.3835e-04 - val_loss: 1.6094e-04 - val_acc: 0.0000e+00
Epoch 5/10
- 86s - loss: 0.0014 - acc: 1.3835e-04 - val_loss: 1.4120e-04 - val_acc: 0.0000e+00
Epoch 6/10
- 85s - loss: 0.0013 - acc: 1.3835e-04 - val_loss: 3.8155e-04 - val_acc: 0.0000e+00
Epoch 7/10
- 85s - loss: 0.0012 - acc: 1.3835e-04 - val_loss: 4.1694e-04 - val_acc: 0.0000e+00
Epoch 8/10
- 85s - loss: 0.0012 - acc: 1.3835e-04 - val_loss: 4.8163e-04 - val_acc: 0.0000e+00
Epoch 9/10
- 86s - loss: 0.0011 - acc: 1.3835e-04 - val_loss: 3.8670e-04 - val_acc: 0.0000e+00
Epoch 10/10
- 85s - loss: 9.9018e-04 - acc: 1.3835e-04 - val_loss: 0.0016 - val_acc: 0.0000e+00
```

But when I restart PyCharm and `count` is 480, the epoch times are:

```Train on 7228 samples, validate on 433 samples
Epoch 1/10
- 151s - loss: 0.0071 - acc: 1.3835e-04 - val_loss: 0.0018 - val_acc: 0.0000e+00
Epoch 2/10
- 31s - loss: 0.0038 - acc: 1.3835e-04 - val_loss: 0.0014 - val_acc: 0.0000e+00
Epoch 3/10
- 32s - loss: 0.0031 - acc: 1.3835e-04 - val_loss: 2.0248e-04 - val_acc: 0.0000e+00
Epoch 4/10
- 32s - loss: 0.0026 - acc: 1.3835e-04 - val_loss: 3.7600e-04 - val_acc: 0.0000e+00
Epoch 5/10
- 32s - loss: 0.0021 - acc: 1.3835e-04 - val_loss: 4.3882e-04 - val_acc: 0.0000e+00
Epoch 6/10
- 32s - loss: 0.0020 - acc: 1.3835e-04 - val_loss: 0.0037 - val_acc: 0.0000e+00
Epoch 7/10
- 32s - loss: 0.0021 - acc: 1.3835e-04 - val_loss: 1.2072e-04 - val_acc: 0.0000e+00
Epoch 8/10
- 32s - loss: 0.0019 - acc: 1.3835e-04 - val_loss: 0.0031 - val_acc: 0.0000e+00
Epoch 9/10
- 33s - loss: 0.0018 - acc: 1.3835e-04 - val_loss: 0.0051 - val_acc: 0.0000e+00
Epoch 10/10
- 33s - loss: 0.0018 - acc: 1.3835e-04 - val_loss: 3.2728e-04 - val_acc: 0.0000e+00
```

I just started it again, and the epoch times became faster.

I don't know why this happened.

I use Python 3.6 with tensorflow-gpu 1.13.1 and CUDA 10.0. The OS is Windows 10 Pro 1903 (build 18362.239), and the IDE is PyCharm 2019.1.1 Community edition.

I only used the for loop, changing the number of units on each iteration, and I wonder why this happened.

I also saved the figure with `plt.savefig` and saved the data in .csv format.

I would also like to know how to solve it.

You should use:

```python
from keras import backend as K
K.clear_session()
```

before creating the model (i.e. before `model = Sequential()`). That's because:

Ops are not garbage collected by TF, so each iteration adds more nodes to the graph.

So if we don't call `K.clear_session()`, the graph keeps growing and a memory leak occurs.

Thanks to @dref360 at keras.io in Slack.