Hot questions for Using Neural networks in tf slim


Would the code below represent one or two layers? I'm confused because isn't there also supposed to be an input layer in a neural net?

input_layer = slim.fully_connected(input, 6000, activation_fn=tf.nn.relu)
output = slim.fully_connected(input_layer, num_output)

Does that contain a hidden layer? I'm just trying to be able to visualize the net. Thanks in advance!


You have a neural network with one hidden layer. In your code, input is the input layer, input_layer is the hidden layer, and output is the output layer.

Remember that the "input layer" of a neural network isn't a traditional fully-connected layer, since it's just the raw data with no weights or activation applied; the name is a bit of a misnomer. The "neurons" of the input layer are not the same kind of unit as the neurons in the hidden layer or output layer.
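To make the structure concrete, here is a minimal NumPy sketch of the same one-hidden-layer network. The input size of 10 and the random weights are stand-ins for illustration (not trained values), and the output layer is kept linear here for simplicity, even though slim.fully_connected applies ReLU by default unless you pass activation_fn=None:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, num_output = 10, 6000, 2  # input_dim and num_output are assumed values

# The "input layer": just raw data, no weights or activation.
x = rng.standard_normal(input_dim)

# Hidden layer, as in slim.fully_connected(input, 6000, activation_fn=tf.nn.relu):
W1 = rng.standard_normal((hidden_dim, input_dim))
b1 = np.zeros(hidden_dim)
hidden = np.maximum(0.0, W1 @ x + b1)  # ReLU activation

# Output layer, as in slim.fully_connected(input_layer, num_output):
W2 = rng.standard_normal((num_output, hidden_dim))
b2 = np.zeros(num_output)
output = W2 @ hidden + b2

print(hidden.shape, output.shape)  # (6000,) (2,)
```

Counting only the layers that have weights, there are two: the hidden layer (W1, b1) and the output layer (W2, b2).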


I have trained alexnet_v2 on my own dataset, and now would like to use it within another application. This should be very simple and I've tried to implement it in a number of ways but either I get errors I can't work around or (in the case of the code below) it hangs indefinitely.

Ideally, I'd like it in C++ (but the C++ API seems unreliable, or at least has outdated documentation in many places, so python is acceptable), and I'd like to be classifying large groups of images (for example: providing the program with 80 images of animals and returning whether any of them show a cat).

Am I going about this the right way with the code below? If so, how can I fix it?

If not, are there any working examples of a better way?

Many thanks.

import tensorflow as tf

#Using preprocessing and alexnet_v2 net from the slim examples

from nets import nets_factory
from preprocessing import preprocessing_factory

#Checkpoint file from training on binary dataset

checkpoint_path = '/home/ubuntu/tensorflow/models/slim/data/checkpoint.ckpt'

slim = tf.contrib.slim

number_of_classes = 2

image_filename = '/home/ubuntu/tensorflow/models/slim/data/images/neg_sample_123459.jpg'

image_filename_placeholder = tf.placeholder(tf.string)

image_tensor = tf.read_file(image_filename_placeholder)

image_tensor = tf.image.decode_jpeg(image_tensor, channels=3)

image_batch_tensor = tf.expand_dims(image_tensor, axis=0)

#Use slim's alexnet_v2 implementation

network_fn = nets_factory.get_network_fn('alexnet_v2', num_classes=number_of_classes, is_training=False)

#Use inception preprocessing

preprocessing_name = 'inception'
image_preprocessing_fn = preprocessing_factory.get_preprocessing(preprocessing_name, is_training=False)




initializer = tf.local_variables_initializer()


with tf.Session() as sess:
    image_np, pred_np = sess.run([image_tensor, pred],
                                 feed_dict={image_filename_placeholder: image_filename})

EDIT: After adding tf.train.start_queue_runners(sess), the program no longer hangs. However, I'm now getting a placeholder error:

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype string
[[Node: Placeholder = Placeholder[dtype=DT_STRING, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]]]

I've double checked the spelling, and as far as I can see I'm feeding it correctly. What's wrong?


The tf.train.batch() function uses background threads to prefetch examples, but you need to add an explicit command (tf.train.start_queue_runners(sess)) to start these threads. Rewriting the last part of your code as follows should stop it hanging:

with tf.Session() as sess:

  # Starts background threads for input preprocessing.
  tf.train.start_queue_runners(sess)

  image_np, pred_np = sess.run(
      [image_tensor, pred],
      feed_dict={image_filename_placeholder: image_filename})
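Why the session blocks without that call can be illustrated with a plain-Python analogy (nothing TF-specific here, just stdlib queue and threading): a consumer blocks forever on a queue whose producer thread was never started, just as sess.run() blocks on the batch queue before the queue-runner threads are launched.

```python
import queue
import threading

# A tiny analogy for tf.train.batch: a consumer blocks on a queue
# that background producer threads are supposed to fill.
q = queue.Queue()

def producer():
    for i in range(3):
        q.put(i)  # analogous to prefetch threads filling the batch queue

# Without starting the thread, q.get() below would block forever --
# just like sess.run() hangs before tf.train.start_queue_runners().
t = threading.Thread(target=producer)
t.start()  # analogous to tf.train.start_queue_runners(sess)

items = [q.get() for _ in range(3)]
t.join()
print(items)  # [0, 1, 2]
```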