Why does the accuracy of the model not change in this TensorFlow training code?


I am a novice with TensorFlow and Python. I modified a sample TensorFlow script by adding one hidden layer with 50 units, but the accuracy turned out to be wrong, and it does not change no matter how many epochs the model trains for. I cannot find any problem with the code. The dataset is MNIST:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

batch_size = 100
n_batch = mnist.train.num_examples // batch_size  # batches per epoch

x = tf.placeholder(tf.float32, [None, 784])  # flattened 28x28 input images
y = tf.placeholder(tf.float32, [None, 10])   # one-hot labels


# Hidden layer: 784 -> 50, everything initialized to zero
W = tf.Variable(tf.zeros([784, 50]))
b = tf.Variable(tf.zeros([50]))

Wx_plus_b_L1 = tf.matmul(x, W) + b
L1 = tf.nn.relu(Wx_plus_b_L1)

# Output layer: 50 -> 10, also initialized to zero
W_2 = tf.Variable(tf.zeros([50, 10]))
b_2 = tf.Variable(tf.zeros([10]))

prediction = tf.nn.softmax(tf.matmul(L1, W_2) + b_2)


# Mean squared error loss, trained with plain gradient descent
loss = tf.reduce_mean(tf.square(y - prediction))
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)


# Fraction of test images whose predicted class matches the label
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

init = tf.global_variables_initializer()


with tf.Session() as sess:
    sess.run(init)
    for epoch in range(21):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter:" + str(epoch) + ", Testing Accuracy:" + str(acc))

The output is always the same accuracy:

Iter:0, Testing Accuracy:0.1135
2018-05-31 18:05:21.039188: W tensorflow/core/framework/allocator.cc:101] Allocation of 31360000 exceeds 10% of system memory.
Iter:1, Testing Accuracy:0.1135
2018-05-31 18:05:22.551525: W tensorflow/core/framework/allocator.cc:101] Allocation of 31360000 exceeds 10% of system memory.
Iter:2, Testing Accuracy:0.1135
2018-05-31 18:05:24.070686: W tensorflow/core/framework/allocator.cc:101] Allocation of 31360000 exceeds 10% of system memory.

What's wrong with this code? Thank you~~


I think it has to do with the graph: accuracy is never evaluated during training, since train_step is the only op you are calling in the loop. Change this code to:

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(21):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run([train_step, accuracy], feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter:" + str(epoch) + ", Testing Accuracy:" + str(acc))



The reason is that I initialized all weights and biases to zero. In that case, every neuron in a layer computes the same output, and the backpropagation behavior of all neurons within the same layer is identical: they receive the same gradient and the same weight update, so they never become different from one another. This is clearly an unacceptable result.
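For reference, a minimal sketch of the fix, assuming the rest of the graph stays exactly as in the question: replacing the zero weight initializers with small random values (tf.truncated_normal here; the stddev of 0.1 is an arbitrary choice) breaks the symmetry between neurons, while the biases can stay at zero:

W = tf.Variable(tf.truncated_normal([784, 50], stddev=0.1))  # random init breaks the symmetry
b = tf.Variable(tf.zeros([50]))                              # zero biases are fine once the weights differ

W_2 = tf.Variable(tf.truncated_normal([50, 10], stddev=0.1))
b_2 = tf.Variable(tf.zeros([10]))

With this change the accuracy should start moving between epochs instead of staying stuck at 0.1135.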



I have had the same problem with the Titanic dataset. What helped was to change the learning rate:

optimize = tf.train.AdamOptimizer(learning_rate=0.000001).minimize(mean_loss)

When I changed it from 0.001, the accuracy finally started changing. Before that, I had tried playing with the number of layers, the batch size, and the hidden layer size, but nothing helped.
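In case it helps, here is a self-contained toy sketch (not the actual Titanic code; the model and data below are made up for illustration) of how the learning rate can be fed through a placeholder, so several values can be tried without rebuilding the graph:

import tensorflow as tf
import numpy as np

# Toy linear model standing in for the real one; mean_loss plays the
# same role as in the snippet above.
x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.truncated_normal([4, 1], stddev=0.1))
mean_loss = tf.reduce_mean(tf.square(y - tf.matmul(x, w)))

lr = tf.placeholder(tf.float32, shape=[])  # learning rate fed at run time
optimize = tf.train.AdamOptimizer(learning_rate=lr).minimize(mean_loss)

data_x = np.random.rand(256, 4).astype(np.float32)
data_y = np.random.rand(256, 1).astype(np.float32)

for rate in [1e-3, 1e-4, 1e-5, 1e-6]:
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(200):
            sess.run(optimize, feed_dict={x: data_x, y: data_y, lr: rate})
        print(rate, sess.run(mean_loss, feed_dict={x: data_x, y: data_y}))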






