Hot questions for Using Neural networks in visualization

Question:

I have a CNN model trained using Keras with the TensorFlow backend, and I want to visualize my CNN filters following this tutorial: https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html

from keras import backend as K
from keras.models import load_model
import numpy as np

model = load_model('my_cnn_model.h5')
input_img = np.load('my_picture.npy')

# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers])

layer_name = 'block5_conv3'
filter_index = 0  # can be any integer from 0 to 511, as there are 512 filters in that layer

# build a loss function that maximizes the activation
# of the nth filter of the layer considered
layer_output = layer_dict[layer_name].output
loss = K.mean(layer_output[:, :, :, filter_index])

# compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, input_img)[0]

# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)

# this function returns the loss and grads given the input picture
iterate = K.function([input_img], [loss, grads])

However, when execution reaches the line grads = K.gradients(loss, input_img)[0], it returns nothing but a None object, so the program fails to progress after that.

I searched for a solution. Some people say the input_img should be TensorFlow's Tensor type: https://github.com/keras-team/keras/issues/5455

But when I tried to convert the image to a Tensor, the problem still existed. I tried the solution in the link above, but it still fails.

Someone also says that this problem occurs because the CNN model is not differentiable: https://github.com/keras-team/keras/issues/8478

But my model only uses ReLU and sigmoid (at the output layer) activation functions. Is this problem really caused by non-differentiability?

Can anyone help me? Thank you very much!


Answer:

If you have a Model instance, then to take the gradient of the loss with respect to the input, you should do:

grads = K.gradients(loss, model.input)[0]

model.input contains the symbolic tensor that represents the input to the model. Using a plain numpy array makes no sense because TensorFlow then has no idea how this connects to the computational graph, and returns None as the gradient.

Then you should also rewrite the iterate function as:

iterate = K.function([model.input], [loss, grads])
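
Putting it all together, here is a minimal corrected sketch of the question's snippet (assuming the same my_cnn_model.h5 and my_picture.npy files, and that the stored picture already includes the batch dimension the model expects):

from keras import backend as K
from keras.models import load_model
import numpy as np

model = load_model('my_cnn_model.h5')
input_img_data = np.load('my_picture.npy')  # plain numpy array, used only as data

# get the symbolic outputs of each "key" layer
layer_dict = dict([(layer.name, layer) for layer in model.layers])
layer_output = layer_dict['block5_conv3'].output
loss = K.mean(layer_output[:, :, :, 0])

# differentiate with respect to the symbolic model input, not the numpy array
grads = K.gradients(loss, model.input)[0]
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)

iterate = K.function([model.input], [loss, grads])

# the numpy picture is only supplied when the function is actually called
loss_value, grads_value = iterate([input_img_data])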

Question:

I've been trying to visualize heatmaps for Inception V3. It was my understanding that the penultimate layer should be the last convolutional layer, which would be conv2d_94 (idx 299). However, this gives very coarse maps (big regions). I tried using another layer, mixed10 (idx 310), as suggested in this notebook for the issue described here, and while the regions are smaller, it still doesn't look great. Some others do seem to use conv2d_94, like here.

I understand it might indicate my model is simply not paying attention to the right things, but conceptually I'm also confused about which layer should be used. What is an appropriate penultimate layer?

I'm using Keras 2.2.0 with visualize_cam from keras-vis.

heatmap = visualize_cam(model, layer_idx, filter_indices=classnum, seed_input=preprocess_img, backprop_modifier=None)

Where layer_idx is the idx of dense_2.

I've tried not defining penultimate_layer, which according to the documentation sets the parameter to the nearest penultimate Conv or Pooling layer. This gives the same results as penultimate_layer=299.


Answer:

Cannot say anything about your own data, but the penultimate layer of Inception V3 for Grad-CAM visualization is indeed mixed10 (idx 310), as reported in the notebook you have linked to:

310 is concatenation before global average pooling

Rationale: since the output of conv2d_94 (299) is connected downstream to other convolutional layers (or concatenations of them), like mixed9_1, concatenate_2 etc., by definition it cannot be the penultimate convolutional layer; mixed10, on the other hand, is not connected to further convolutions; it sits just one layer before the final average pooling one. That the penultimate layer should be a convolutional one, and not a pooling one, is suggested by Chollet's exposition, where for VGG he uses block5_conv3, and not block5_pool, which comes immediately afterwards (although, in truth, even using block5_pool seems to give very similar visual results).

Let me elaborate a little, and explain the emphasis on "suggested" above...

Like many other things in current deep learning research and practice, Grad-CAM is a heuristic, not a "hard" scientific method; as such, there are recommendations and expectations on how to use it and what the results might be, but no hard rules (and no uniquely "appropriate" layers). Consider the following excerpt from the original paper (end of section 2, emphasis mine):

We expect the last convolutional layers to have the best compromise between high-level semantics and detailed spatial information, so we use these feature maps to compute Grad-CAM and Guided Grad-CAM.

i.e. there are indeed recommendations and expectations, as I already said, but a certain amount of experimenting and a free-wheeling attitude is also expected...


Now, assuming you are following Chollet's notebook on the subject (i.e. using pure Keras, and not the Keras-vis package), these are the changes in the code you need in order to make it work with Inception V3:

# cell 24
from keras import backend as K
from keras.applications.inception_v3 import InceptionV3
K.clear_session()
K.set_learning_phase(0) # needs to be set BEFORE building the model
model = InceptionV3(weights='imagenet')

# in cell 27
from keras.applications.inception_v3 import preprocess_input, decode_predictions
img = image.load_img(img_path, target_size=(299, 299)) # different size than VGG

# in cell 31:
last_conv_layer = model.get_layer('mixed10')
for i in range(2048):  # was 512 for VGG
    conv_layer_output_value[:, :, i] *= pooled_grads_value[i]
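
For readers without the notebook at hand, here is a sketch of the Grad-CAM core those cell-31 lines slot into, adapted from Chollet's VGG version (x is assumed to be the preprocessed image array of shape (1, 299, 299, 3)):

import numpy as np

african_elephant_output = model.output[:, 386]   # ImageNet class 386 ('African elephant')
last_conv_layer = model.get_layer('mixed10')

grads = K.gradients(african_elephant_output, last_conv_layer.output)[0]
pooled_grads = K.mean(grads, axis=(0, 1, 2))
iterate = K.function([model.input], [pooled_grads, last_conv_layer.output[0]])
pooled_grads_value, conv_layer_output_value = iterate([x])

for i in range(2048):  # mixed10 has 2048 channels
    conv_layer_output_value[:, :, i] *= pooled_grads_value[i]

heatmap = np.mean(conv_layer_output_value, axis=-1)
heatmap = np.maximum(heatmap, 0)
heatmap /= np.max(heatmap)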

And the resulting superimposed heatmap on the original creative_commons_elephant.jpg image should look like this:

which, arguably, is not that different from the respective VGG image produced in Chollet's notebook (although admittedly the heatmap is more spread out, and it does not seem to conform to Chollet's narrative about 'focusing on the ears')...

Question:

I have programmed a multilayer perceptron for binary classification. As I understand it, one hidden layer can be represented using just lines as decision boundaries (one line per hidden neuron). This works well and can easily be plotted using just the resulting weights after training.

However, as more layers are added, I'm not sure what approach to use, and the visualization part is rarely handled in textbooks. I am wondering: is there a straightforward way of transforming the weight matrices from the different layers into this non-linear decision boundary (assuming 2D inputs)?

Many thanks,


Answer:

One approach to plotting decision boundaries (for either a linear or a non-linear classifier) is to sample points on a uniform grid and feed them to the classifier. Assuming X is your data, you can create a uniform grid of points as follows:

import numpy as np

h = .02  # step size in the mesh
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))

Then, you feed those coordinates to your perceptron to capture their prediction:

Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

Assuming clf is your perceptron, np.c_ stacks the uniformly sampled grid coordinates into feature vectors, which are then fed to the classifier, and Z captures its predictions.

Finally, plot the decision boundaries as a contour plot (using matplotlib):

Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)

And optionally, also plot your data points:

plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)

Fully working example, and credit for the example goes to scikit-learn (which, by the way, is a great machine learning library with a fully working Perceptron implementation).
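
For convenience, here is a self-contained sketch of the whole procedure. It uses scikit-learn's MLPClassifier on toy two-moons data as a stand-in for your own multilayer perceptron (both the classifier and the data are assumptions; substitute your trained model and your X, y):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# toy 2D data and a small MLP (stand-ins for your own data and perceptron)
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=2000, random_state=0).fit(X, y)

# uniform grid over the input space
h = .02  # step size in the mesh
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))

# predict on the grid and draw the decision regions plus the data points
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.show()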

Question:

I'm currently trying to visualize the learned filters of my CNN with TensorFlow in Python. I found many versions that work with the MNIST dataset while training a new network, but I wasn't able to apply them to my application. I trained an Estimator object, which is stored on my disk, on a custom dataset. The model contains layers such as

conv1 = tf.layers.conv2d(inputs=input_layer, filters=32, kernel_size=[5, 5], padding="same", activation=tf.nn.sigmoid)

and I only want to visualize a prediction on a single picture with dimensions (28, 28, 3). In TensorBoard, this layer is simply called "conv2d", while the others are called "conv2d_2" and so on; it basically has the same structure as the default MNIST network, except that it uses the sigmoid function.

I don't know how to implement this - I thought about getting the weights and biases and recomputing every single layer with respect to stride and filter size, but I'm already failing at getting the weights, and I think there is a simpler solution.


Answer:

I'm currently trying to visualize the learned filters of my CNN with tensorflow in python.

I think what you mean is visualizing the activations of a specific layer? If so, you just need to run the tensor of this layer for the image you want, like so:

import matplotlib.pyplot as plt

# Model definition
...
conv1 = tf.layers.conv2d(inputs=input_layer, filters=32, kernel_size=[5, 5], padding="same", activation=tf.nn.sigmoid)
...

# Getting activations
acts = sess.run(conv1, {input_layer: your_image})

# Visualize every filter's activation map
for i in range(acts.shape[3]):
  plt.imshow(acts[:,:,:,i].squeeze())
  plt.show()

If you are using Estimator, you can directly visualize the evolution of your activations using tf.summary.image() in your model_fn:

# In model_fn
...
conv1 = tf.layers.conv2d(inputs=input_layer, filters=32, kernel_size=[5, 5], padding="same", activation=tf.nn.sigmoid)
acts_filters = tf.unstack(conv1, axis=3)
for i, filter in enumerate(acts_filters):
    tf.summary.image('filter' + str(i), tf.expand_dims(filter, axis=3))
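
Since the question mentions an Estimator that is already trained and stored on disk (so there is no live session to run conv1 in), a third option, sketched under the assumption that you can still edit your model_fn, is to expose the activations through the predictions dict and read them back with estimator.predict (single_image_input_fn below is a hypothetical input_fn yielding your (28, 28, 3) picture):

# In model_fn, PREDICT branch (sketch; assumes conv1 and logits are defined above)
if mode == tf.estimator.ModeKeys.PREDICT:
    predictions = {
        'logits': logits,
        'conv1_activations': conv1,   # expose the feature maps of the first conv layer
    }
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

# Later, after re-instantiating the Estimator with the same model_dir:
for pred in estimator.predict(input_fn=single_image_input_fn):
    acts = pred['conv1_activations']  # e.g. shape (28, 28, 32) for this layer
    for i in range(acts.shape[-1]):
        plt.imshow(acts[:, :, i])
        plt.show()
    break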

Question:

model = Sequential()

model.add(Embedding(10000, 300, input_length=200))
model.add(LSTM(256, return_sequences=True, dropout=0.5, recurrent_dropout=0.5))
model.add(LSTM(256, dropout=0.5, recurrent_dropout=0.5))
model.add(Dense(4, activation='softmax'))
...

plot_model(model, to_file='rnn.png' ,show_shapes=True, show_layer_names=True)

Why is the shape (None, 200) and not (200)?


Answer:

That is due to the batch size. When you train a model, you can pass in different batch sizes (e.g. 32, 64, ...).

This means that, for instance, if you train the model with a batch size of 32, the input to the first layer will have a shape of (32, 200), and so on.

When you build the model, the input batch size is not yet defined. That is why TensorFlow prints None.
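
As a quick check (a sketch assuming the Sequential model defined in the question), you can see that None is only filled in once real data is passed:

import numpy as np

print(model.input_shape)             # (None, 200): batch size unknown at build time
print(model.layers[0].output_shape)  # (None, 200, 300)

batch = np.random.randint(0, 10000, size=(32, 200))  # a batch of 32 sequences of length 200
print(model.predict(batch).shape)    # (32, 4): None replaced by the actual batch size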

Question:

I create variables as follows:

x = tf.placeholder(tf.float32, shape=[None, D], name='x-input') # M x D
# Variables Layer1
#std = 1.5*np.pi
std = 0.1
W1 = tf.Variable( tf.truncated_normal([D,D1], mean=0.0, stddev=std, name='W1') ) # (D x D1)
S1 = tf.Variable(tf.constant(100.0, shape=[1], name='S1')) # (1 x 1)
C1 = tf.Variable( tf.truncated_normal([D1,1], mean=0.0, stddev=0.1, name='C1') ) # (D1 x 1)

but for some reason TensorFlow adds extra variable blocks to my graph visualization:

Why is it doing this and how do I stop it?


Answer:

You are using names incorrectly in TF:

W1 = tf.Variable( tf.truncated_normal([D,D1], mean=0.0, stddev=std, name='W1') )
                  \----------------------------------------------------------/
                                           initializer 
     \-------------------------------------------------------------------------/
                                 actual variable

Thus your code creates an unnamed variable and names the initializer op W1. This is why what you see in the graph named W1 is not your W1 but rather the renamed initializer, and what should be your W1 is called Variable (as this is the default name TF assigns to unnamed ops). It should be

W1 = tf.Variable( tf.truncated_normal([D,D1], mean=0.0, stddev=std), name='W1' )

This will create a node named W1 for the actual variable, and it will have a small initialization node attached (which is used to seed it with random values).
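
For completeness, the same fix applied to all three variables from the question (a sketch; the names are now attached to the variables themselves rather than to their initializers):

W1 = tf.Variable(tf.truncated_normal([D, D1], mean=0.0, stddev=std), name='W1')  # (D x D1)
S1 = tf.Variable(tf.constant(100.0, shape=[1]), name='S1')                        # (1,)
C1 = tf.Variable(tf.truncated_normal([D1, 1], mean=0.0, stddev=0.1), name='C1')   # (D1 x 1)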

Question:

I'm implementing an algorithm to classify messages into topics using a neural network. I'm wondering if there are libraries to help me visualize the training process, like the one you can find here:

I'm going to classify three-dimensional data instead of two-dimensional data. How can we do that?

Also, in the link above, I don't understand the first-layer input, where we have a fixed horizontal line and a vertical line perceptron. Can we set up the first layer of the neural network to be like that using scikit-learn?


Answer:

Yes, but any good one won't be nearly as simple as the linked playground, and will require some familiarization. Below is a list of such libraries, with links to my answers containing visualization functions you could adapt to your application:

  • iNNvestigate, classifier introspection (first image below)
  • Saliency maps, extracted features introspection
  • TensorBoard, full model introspection, highly configurable, but complex
  • LSTM/CNN Visualization, simple function (second image below)
  • Weights visualization, simple function

(Answer assumes you're using TensorFlow)