Hot questions for Using Neural networks in nolearn

Question:

I'm using Lasagne to create a CNN for the MNIST dataset. I'm closely following this example: Convolutional Neural Networks and Feature Extraction with Python.

The CNN architecture I have at the moment, which doesn't include any dropout layers, is:

import lasagne
from lasagne import layers
from lasagne.updates import momentum
from nolearn.lasagne import NeuralNet

net = NeuralNet(
    layers=[('input', layers.InputLayer),        # Input Layer
            ('conv2d1', layers.Conv2DLayer),     # Convolutional Layer
            ('maxpool1', layers.MaxPool2DLayer), # 2D Max Pooling Layer
            ('conv2d2', layers.Conv2DLayer),     # Convolutional Layer
            ('maxpool2', layers.MaxPool2DLayer), # 2D Max Pooling Layer
            ('dense', layers.DenseLayer),        # Fully connected layer
            ('output', layers.DenseLayer),       # Output Layer
            ],
    # input layer
    input_shape=(None, 1, 28, 28),

    # layer conv2d1
    conv2d1_num_filters=32,
    conv2d1_filter_size=(5, 5),
    conv2d1_nonlinearity=lasagne.nonlinearities.rectify,

    # layer maxpool1
    maxpool1_pool_size=(2, 2),

    # layer conv2d2
    conv2d2_num_filters=32,
    conv2d2_filter_size=(3, 3),
    conv2d2_nonlinearity=lasagne.nonlinearities.rectify,

    # layer maxpool2
    maxpool2_pool_size=(2, 2),


    # Fully Connected Layer
    dense_num_units=256,
    dense_nonlinearity=lasagne.nonlinearities.rectify,

    # output layer
    output_nonlinearity=lasagne.nonlinearities.softmax,
    output_num_units=10,

    # optimization method params
    update=momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
    max_epochs=10,
    verbose=1,
    )

This outputs the following Layer Information:

  #  name      size
---  --------  --------
  0  input     1x28x28
  1  conv2d1   32x24x24
  2  maxpool1  32x12x12
  3  conv2d2   32x10x10
  4  maxpool2  32x5x5
  5  dense     256
  6  output    10

and outputs the number of learnable parameters as 217,706

I'm wondering how this number is calculated. I've read a number of resources, including this StackOverflow question, but none of them clearly generalizes the calculation.

If possible, can the calculation of the learnable parameters per layer be generalised?

For example, convolutional layer: number of filters x filter width x filter height.


Answer:

Let's first look at how the number of learnable parameters is calculated for each individual type of layer you have, and then calculate the number of parameters in your example.

  • Input layer: All the input layer does is read the input image, so there are no parameters you could learn here.
  • Convolutional layers: Consider a convolutional layer which takes l feature maps as input and produces k feature maps as output, with a filter size of n x m. For example:

    Suppose the input has l=32 feature maps, the output has k=64 feature maps, and the filter size is n=3 x m=3. It is important to understand that we don't simply have a 3x3 filter, but actually a 3x3x32 filter, since our input has 32 channels, and we learn 64 different 3x3x32 filters. Thus, the total number of weights is n*m*k*l. Then there is also a bias term for each output feature map, so the total number of parameters is (n*m*l+1)*k.

  • Pooling layers: A pooling layer does something like the following: "replace each 2x2 neighborhood by its maximum value". So there are no parameters you could learn in a pooling layer.
  • Fully-connected layers: In a fully-connected layer, all input units have a separate weight to each output unit. For n inputs and m outputs, the number of weights is n*m. Additionally, you have a bias for each output node, so you are at (n+1)*m parameters.
  • Output layer: The output layer is a normal fully-connected layer, so (n+1)*m parameters, where n is the number of inputs and m is the number of outputs.

The final difficulty is the first fully-connected layer: we do not know the dimensionality of its input, as it comes from a convolutional layer. To calculate it, we have to start with the size of the input image and calculate the size of each convolutional layer. In your case, Lasagne already calculates this for you and reports the sizes - which makes it easy for us. If you have to calculate the size of each layer yourself, it's a bit more complicated:

  • In the simplest case (like your example), the size of the output of a convolutional layer is input_size - (filter_size - 1), in your case: 28 - 4 = 24. This is due to the nature of the convolution: we use e.g. a 5x5 neighborhood to calculate a point - but the two outermost rows and columns don't have a 5x5 neighborhood, so we can't calculate any output for those points. This is why our output is 2*2=4 rows/columns smaller than the input.
  • If one doesn't want the output to be smaller than the input, one can zero-pad the image (with the pad parameter of the convolutional layer in Lasagne). E.g. if you add 2 rows/cols of zeros around the image, the output size will be (28+4)-4=28. So in case of padding, the output size is input_size + 2*padding - (filter_size -1).
  • If you explicitly want to downsample your image during the convolution, you can define a stride, e.g. stride=2, which means that you move the filter in steps of 2 pixels. Then, the expression becomes ((input_size + 2*padding - filter_size)/stride) +1.
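To make those size formulas concrete, here is a small helper (my own, not part of Lasagne) that computes the output size along one spatial dimension; plugging in the numbers from the question reproduces the sizes Lasagne reports:

def conv_output_size(input_size, filter_size, padding=0, stride=1):
    # ((input_size + 2*padding - filter_size) / stride) + 1
    return (input_size + 2 * padding - filter_size) // stride + 1

print(conv_output_size(28, 5))            # conv2d1:  28 - 4        = 24
print(conv_output_size(24, 2, stride=2))  # maxpool1: (24-2)/2 + 1  = 12
print(conv_output_size(12, 3))            # conv2d2:  12 - 2        = 10
print(conv_output_size(10, 2, stride=2))  # maxpool2: (10-2)/2 + 1  =  5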

In your case, the full calculations are:

  #  name                           size                 parameters
---  --------  -------------------------    ------------------------
  0  input                       1x28x28                           0
  1  conv2d1   (28-(5-1))=24 -> 32x24x24    (5*5*1+1)*32   =     832
  2  maxpool1                   32x12x12                           0
  3  conv2d2   (12-(3-1))=10 -> 32x10x10    (3*3*32+1)*32  =   9'248
  4  maxpool2                     32x5x5                           0
  5  dense                           256    (32*5*5+1)*256 = 205'056
  6  output                           10    (256+1)*10     =   2'570

So in your network, you have a total of 832 + 9'248 + 205'056 + 2'570 = 217'706 learnable parameters, which is exactly what Lasagne reports.
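The same arithmetic can be written as a short script - this is just the table above expressed in Python (the helper names are my own):

def conv_params(filter_h, filter_w, input_maps, output_maps):
    # (n*m*l + 1) * k: n*m*l weights per filter, plus one bias per output map
    return (filter_h * filter_w * input_maps + 1) * output_maps

def dense_params(n_inputs, n_outputs):
    # (n + 1) * m: one weight per input for each output, plus one bias per output
    return (n_inputs + 1) * n_outputs

total = (conv_params(5, 5, 1, 32)         # conv2d1:     832
         + conv_params(3, 3, 32, 32)      # conv2d2:   9,248
         + dense_params(32 * 5 * 5, 256)  # dense:   205,056
         + dense_params(256, 10))         # output:    2,570
print(total)                              # 217,706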

Question:

Probably lots of people have already seen this article from Google Research:

http://googleresearch.blogspot.ru/2015/06/inceptionism-going-deeper-into-neural.html

It describes how the Google team made neural networks actually draw pictures, like an artificial artist :)

I wanted to do something similar just to see how it works and maybe use it in the future to better understand what makes my network fail. The question is: how do I achieve this with nolearn/lasagne (or maybe pybrain - that would also work, but I prefer nolearn)?

To be more specific, the folks at Google trained an ANN with some architecture to classify images (for example, to classify which fish is in a photo). Fine, suppose I have an ANN constructed in nolearn with some architecture and I have trained it to some degree. But... what do I do next? I don't get it from their article. It doesn't seem that they just visualize the weights of some specific layers. It seems to me (maybe I am wrong) like they do one of two things:

1) Feed some existing image or pure random noise to the trained network and visualize the activation of one of the neuron layers. But this doesn't look entirely right, since if they used a convolutional neural network, the dimensionality of the layers might be lower than the dimensionality of the original image.

2) Or they feed random noise to the trained ANN, get its intermediate output from one of the middle layers and feed it back into the network - to get some kind of a loop and inspect what the network's layers think might be out there in the random noise. But again, I might be wrong due to the same dimensionality issue as in #1.

So... any thoughts on that? How could we do the same kind of thing Google did in the original article using nolearn or pybrain?


Answer:

From their IPython notebook on GitHub:

Making the "dream" images is very simple. Essentially it is just a gradient ascent process that tries to maximize the L2 norm of activations of a particular DNN layer. Here are a few simple tricks that we found useful for getting good images:

  • offset image by a random jitter
  • normalize the magnitude of gradient ascent steps
  • apply ascent across multiple scales (octaves)
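For a nolearn/Lasagne network, a minimal sketch of that gradient-ascent idea might look like the code below. This is my own illustration rather than the Google notebook's code: it assumes a trained NeuralNet called net with an 'input' layer of shape (None, 1, 28, 28) and maximizes the (squared) L2 norm of the activations of a layer named 'conv2'; the jitter and octave tricks from the notebook would be layered on top of this loop.

import numpy as np
import theano
import theano.tensor as T
from lasagne import layers

# Symbolic objective: squared L2 norm of the chosen layer's activations
input_var = net.layers_['input'].input_var
activations = layers.get_output(net.layers_['conv2'], deterministic=True)
objective = T.sum(activations ** 2)
grad = T.grad(objective, input_var)
step = theano.function([input_var], [objective, grad])

# Start from random noise and repeatedly move the image itself uphill
img = np.random.normal(0.5, 0.1, (1, 1, 28, 28)).astype(np.float32)
for _ in range(100):
    score, g = step(img)
    img += 0.1 * g / (np.abs(g).mean() + 1e-8)  # normalized gradient ascent step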

Question:

I have a neural network created using the nolearn library.

import lasagne
from lasagne import layers
from nolearn.lasagne import NeuralNet

net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('conv1', layers.Conv2DLayer),
        ('pool1', layers.MaxPool2DLayer),
        ('dropout1', layers.DropoutLayer),
        ('conv2', layers.Conv2DLayer),
        ('pool2', layers.MaxPool2DLayer),
        ('dropout2', layers.DropoutLayer),
        ('conv3', layers.Conv2DLayer),
        ('pool3', layers.MaxPool2DLayer),
        ('dropout3', layers.DropoutLayer),
        ('hidden4', layers.DenseLayer),
        ('output', layers.DenseLayer),
        ],
    input_shape=(None, 1, imgSize, imgSize),
    conv1_num_filters=32, conv1_filter_size=(param1, param1), pool1_pool_size=(2, 2),
    dropout1_p=0.4,
    conv2_num_filters=64, conv2_filter_size=(param2, param2), pool2_pool_size=(2, 2),
    dropout2_p=0.4,
    conv3_num_filters=128, conv3_filter_size=(param3, param3), pool3_pool_size=(2, 2),
    dropout3_p=0.4,
    hidden4_num_units=1000,
    output_num_units=classNum, output_nonlinearity=lasagne.nonlinearities.softmax,

    update_learning_rate=0.01,
    update_momentum=0.9,

    regression=False,
    max_epochs=100,
    verbose=1,
    )
net.fit(trainD, trainL)

How can I get the values of the hidden layer neurons for some input x? I want to get those values and use them in some other algorithm to get a better result.


Answer:

So, I found the solution.

import theano
from lasagne import layers

# Symbolic expression for the hidden layer's output (dropout disabled)
hidden_layer = layers.get_output(net.layers_['hidden4'], deterministic=True)
input_var = net.layers_['input'].input_var
f_hidden = theano.function([input_var], hidden_layer)

instance = TestD[i][None, :, :, :]   # add a batch dimension to a single example
pred = f_hidden(instance)            # activations of 'hidden4' for that example
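Since the goal is to feed these activations into another algorithm, one possible continuation (purely illustrative - LinearSVC is just an example, trainD/trainL are the arrays from the question, and a large training set would be processed in batches) would be:

from sklearn.svm import LinearSVC

# Activations of 'hidden4' for the whole training set, shape (n_samples, 1000)
train_features = f_hidden(trainD)

# Fit any other model on top of the learned features
clf = LinearSVC()
clf.fit(train_features, trainL)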

Question:

import theano.tensor as T
import numpy as np
from nolearn.lasagne import NeuralNet

def multilabel_objective(predictions, targets):
    epsilon = np.float32(1.0e-6)
    one = np.float32(1.0)
    pred = T.clip(predictions, epsilon, one - epsilon)
    return -T.sum(targets * T.log(pred) + (one - targets) * T.log(one - pred), axis=1)

net = NeuralNet(
    # your other parameters here (layers, update, max_epochs...)
    # here are the ones you're interested in:
    objective_loss_function=multilabel_objective,
    custom_score=("validation score", lambda x, y: np.mean(np.abs(x - y)))
)

I found this code online and wanted to test it. It did work; the results include training loss, test loss, validation score, duration, and so on.

But how can I get the F1-micro score? Also, when I tried to use scikit-learn to calculate the F1 score by adding the following code:

from sklearn import cross_validation

data = data.astype(np.float32)
classes = classes.astype(np.float32)

net.fit(data, classes)

score = cross_validation.cross_val_score(net, data, classes, scoring='f1', cv=10)

print score

I got this error:

ValueError: Can't handle mix of multilabel-indicator and continuous-multioutput

How can I implement the F1-micro calculation based on the above code?


Answer:

Suppose your true labels on the test set are y_true (shape: (n_samples, n_classes), composed only of 0s and 1s), and your test observations are X_test (shape: (n_samples, n_features)).

Then you get your network's predicted values on the test set with y_pred = net.predict(X_test).

If you are doing multiclass classification:

Since you have set regression to False in your network, this should be composed only of 0s and 1s, too.

You can compute the micro averaged f1 score with:

from sklearn.metrics import f1_score
f1_score(y_true, y_pred, average='micro')

A small code sample to illustrate this (with dummy data; use your actual y_pred and y_true):

from sklearn.metrics import f1_score
import numpy as np


y_true = np.array([[0, 0, 1], [0, 1, 0], [0, 0, 1], [0, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1], [0, 0, 1]])

t = f1_score(y_true, y_pred, average='micro')

If you are doing multilabel classification:

You are not outputting a matrix of 0s and 1s, but a matrix of probabilities: y_pred[i, j] is the probability that observation i belongs to class j.

You need to define a threshold value above which you will say an observation belongs to a given class. Then you can assign labels accordingly and proceed just as in the previous case.

thresh = 0.8  # choose your own value
y_pred_binary = np.where(y_pred > thresh, 1, 0)
# creates an array with 1 where y_pred > thresh, 0 elsewhere

f1_score(y_true, y_pred_binary, average='micro')
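For example, with made-up probabilities (substitute the real output of net.predict):

import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
y_pred = np.array([[0.10, 0.20, 0.90],   # predicted probability per class
                   [0.30, 0.85, 0.10],
                   [0.90, 0.05, 0.10]])

thresh = 0.8
y_pred_binary = np.where(y_pred > thresh, 1, 0)
print(f1_score(y_true, y_pred_binary, average='micro'))  # 1.0 for this toy data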