Hot questions on using neural networks with TensorFlow Serving


I'm trying to construct a neural network in TensorFlow with tf.contrib.estimator, but this code

    logits = tf.reduce_mean(conv2, axis=[1, 2])

    y = tf.argmax(logits, axis=1),
    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=y)

    loss_op = tf.losses.softmax_cross_entropy(onehot_labels=y_onehot, logits=logits)
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step())

    # Add evaluation metrics (for EVAL mode)
    acc_op = tf.contrib.metrics.accuracy(labels=y_, predictions=tf.cast(y, tf.uint8))

returns an error:

    raise TypeError('{} must be Tensor, given: {}'.format(tensor_name, x))
    TypeError: predictions must be Tensor, given: (<tf.Tensor 'ArgMax:0' shape=(10,) dtype=int64>,)


The problem was the trailing comma at the end of this line:

y = tf.argmax(logits, axis=1),
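The underlying cause is plain Python, not TensorFlow: a trailing comma after an expression creates a one-element tuple, which is exactly why the error message shows `(<tf.Tensor ...>,)` in parentheses. A minimal illustration without TensorFlow:

```python
# A trailing comma wraps the value in a one-element tuple.
x = 42,      # equivalent to (42,)
y = 42       # plain int

print(type(x).__name__)  # tuple
print(type(y).__name__)  # int
```

`EstimatorSpec` received such a tuple as `predictions` instead of the tensor itself; dropping the comma fixes it.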


I'm currently experimenting with superresolution using CNNs. To serve my model I'll need to freeze it first into a .pb file, right? Being a newbie, I don't really know how to do that. My model basically goes like this:

low res input image -> bicubic scaling (2x) -> fed to CNN -> CNN output image with the same (2x) resolution.
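That data flow can be sketched in a few lines of NumPy. This is only an illustration of the shapes involved: nearest-neighbour repetition stands in for bicubic scaling, and an identity function stands in for the CNN (which preserves the already-upscaled resolution). All names here are illustrative:

```python
import numpy as np

def upscale_2x(img):
    # Stand-in for bicubic scaling: nearest-neighbour 2x upsampling.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def cnn(img):
    # Stand-in for the 3-layer CNN: identity, since the network's
    # output has the same resolution as its (upscaled) input.
    return img

low_res = np.zeros((32, 32, 3))   # low-res input image
upscaled = upscale_2x(low_res)    # bicubic scaling (2x)
output = cnn(upscaled)            # CNN output, same 2x resolution

print(upscaled.shape)  # (64, 64, 3)
print(output.shape)    # (64, 64, 3)
```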

My model has 3 simple layers. The output layer is called "output". You can find the model here:

It saves its progress like so:

  • checkpoint
  • network_params.index
  • network_params.meta

I see two ways of doing this.

First: Follow this tutorial:

This seems to be made for multiple output nodes (for identification) and not for superresolution, which has only one output. I don't know how to modify that script for my use case.

Second: Use

Again, I'm totally lost on how to use this with my model. All examples seem to be based on the MNIST tutorial.



I don't understand what you mean, but in the metaflow article he's also using one output node. You can add several depending on how you name your tensors.

In your case, you need to look at the output layer:

    self.output = self.conv_layer("reconstruction_layer", self.layer_params[-1],
                                  non_linear_mapping_layer, linear=True)

As you can see, it's already named via conv_layer, so in the metaflow code you need to do something like this:

def freeze_graph(model_folder):
    # We retrieve the full path of our checkpoint
    checkpoint = tf.train.get_checkpoint_state(model_folder)
    input_checkpoint = checkpoint.model_checkpoint_path

    # We specify the full filename of our frozen graph
    absolute_model_folder = "/".join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_folder + "/frozen_model.pb"

    # Before exporting our graph, we need to specify our output node.
    # This is how TF decides which part of the graph it has to keep
    # and which part it can discard.
    # NOTE: this variable is plural, because you can have multiple output nodes
    output_node_names = "reconstruction_layer"

    # We import the meta graph and retrieve a Saver
    saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=True)

    with tf.Session() as sess:
        # We restore the weights, then replace variables with constants
        saver.restore(sess, input_checkpoint)
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, tf.get_default_graph().as_graph_def(),
            output_node_names.split(","))
        # We serialize the frozen graph to disk
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())

Note: sometimes the name has a prefix, as Accuracy does in the metaflow article (Accuracy/predictions). Therefore, it makes sense to print out all the variable names stored in the checkpoint.

By the way, since TF 1.0 you can save your model with the SavedModelBuilder. This is the preferred way, as it offers compatibility across multiple languages. The only caveat is that it still isn't a single file, but it works well with TensorFlow Serving.