Why is the accuracy for my Keras model always 0 when training?

I'm pretty new to Keras. I have built a simple network to try:

import numpy as np

from keras.models import Sequential
from keras.layers import Dense, Activation

data = np.genfromtxt("./kerastests/mydata.csv", delimiter=';')
x_target = data[:, 29]                          # target column
x_training = np.delete(data, 6, axis=1)         # drop the string date column
x_training = np.delete(x_training, 28, axis=1)  # drop the target column (index shifted by the previous delete)

model = Sequential()
model.add(Dense(20, activation='relu', input_dim=x_training.shape[1]))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(x_training, x_target)

From my source data, I have removed two columns, as you can see. One is a column that contained dates in string format (the dataset also has separate columns for the day, the month, and the year, so I don't need it), and the other is the column I use as the target for the model.

When I train this model I get this output:

Epoch 1/10
32/816 [>.............................] - ETA: 23s - loss: 13541942.0000 - acc: 0.0000e+00
800/816 [============================>.] - ETA: 0s - loss: 11575466.0400 - acc: 0.0000e+00 
816/816 [==============================] - 1s - loss: 11536905.2353 - acc: 0.0000e+00     
Epoch 2/10
 32/816 [>.............................] - ETA: 0s - loss: 6794785.0000 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5381360.4314 - acc: 0.0000e+00     
Epoch 3/10
 32/816 [>.............................] - ETA: 0s - loss: 6235184.0000 - acc: 0.0000e+00
800/816 [============================>.] - ETA: 0s - loss: 5199512.8700 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5192977.4216 - acc: 0.0000e+00     
Epoch 4/10
 32/816 [>.............................] - ETA: 0s - loss: 4680165.5000 - acc: 0.0000e+00
736/816 [==========================>...] - ETA: 0s - loss: 5050110.3043 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5168771.5490 - acc: 0.0000e+00     
Epoch 5/10
 32/816 [>.............................] - ETA: 0s - loss: 5932391.0000 - acc: 0.0000e+00
768/816 [===========================>..] - ETA: 0s - loss: 5198882.9167 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5159585.9020 - acc: 0.0000e+00     
Epoch 6/10
 32/816 [>.............................] - ETA: 0s - loss: 4488318.0000 - acc: 0.0000e+00
768/816 [===========================>..] - ETA: 0s - loss: 5144843.8333 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5151492.1765 - acc: 0.0000e+00     
Epoch 7/10
 32/816 [>.............................] - ETA: 0s - loss: 6920405.0000 - acc: 0.0000e+00
800/816 [============================>.] - ETA: 0s - loss: 5139358.5000 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5169839.2941 - acc: 0.0000e+00     
Epoch 8/10
 32/816 [>.............................] - ETA: 0s - loss: 3973038.7500 - acc: 0.0000e+00
672/816 [=======================>......] - ETA: 0s - loss: 5183285.3690 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5141417.0000 - acc: 0.0000e+00     
Epoch 9/10
 32/816 [>.............................] - ETA: 0s - loss: 4969548.5000 - acc: 0.0000e+00
768/816 [===========================>..] - ETA: 0s - loss: 5126550.1667 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5136524.5098 - acc: 0.0000e+00     
Epoch 10/10
 32/816 [>.............................] - ETA: 0s - loss: 6334703.5000 - acc: 0.0000e+00
768/816 [===========================>..] - ETA: 0s - loss: 5197778.8229 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5141391.2059 - acc: 0.0000e+00    

Why is this happening? My data is a time series. I know that people do not usually use Dense layers for time series, but this is just a test. What really puzzles me is that the accuracy is always 0. And, in other tests, the loss even goes to a "nan" value.

Could anybody help here?

You're trying to compute accuracy on a regression problem, i.e. the fraction of examples on which the floating-point output is exactly equal to the target, which will essentially always be zero. 'accuracy' only makes sense for classification; for a regression model you should track a regression metric instead.

Add the following to get meaningful metrics:

    model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error'])
    # OR
    model.compile(optimizer='adam', loss='mean_absolute_error', metrics=['mean_absolute_error'])

    history = model.fit(x_training, x_target)  # fit() returns the History object, not compile()
    history.history.keys()
    history.history
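
For example, a minimal sketch (assuming the x_training and x_target arrays from the question) of how the per-epoch values can be read back from the History object:

    history = model.fit(x_training, x_target, validation_split=0.2)

    # history.history is a dict mapping each metric name to a list with one value per epoch,
    # e.g. {'loss': [...], 'mean_squared_error': [...], 'val_loss': [...], ...}
    for name, values in history.history.items():
        print(name, values[-1])  # value after the last epoch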

Try this one.

While trying to solve the Titanic problem from Kaggle, I forgot to fill in the missing data in the DataFrame, so the missing values ended up as "nan".

The model produced a similar output:

#------------------------------------------------------

Epoch 1/50
891/891 [==============================] - 3s 3ms/step - loss: 9.8239 - acc: 0.0000e+00
Epoch 2/50
891/891 [==============================] - 1s 2ms/step - loss: 9.8231 - acc: 0.0000e+00
Epoch 3/50
891/891 [==============================] - 1s 1ms/step - loss: 9.8231 - acc: 0.0000e+00
Epoch 4/50
891/891 [==============================] - 1s 1ms/step - loss: 9.8231 - acc: 0.0000e+00
Epoch 5/50
891/891 [==============================] - 1s 1ms/step - loss: 9.8231 - acc: 0.0000e+00

#------------------------------------------------------

Make sure you prepare your data before feeding it to the model.

In my case I had to make the following changes:

+++++++++++++++++++++++++++++++++++

dataset[['Age']] = dataset[['Age']].fillna(value=dataset[['Age']].mean())
dataset[['Fare']] = dataset[['Fare']].fillna(value=dataset[['Fare']].mean())
dataset[['Embarked']] = dataset[['Embarked']].fillna(value=dataset['Embarked'].value_counts().idxmax())
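
A quick way to catch this before training is to check for NaNs explicitly. A minimal sketch (assuming the x_training and x_target arrays from the original question, not the author's code):

    import numpy as np

    # Any NaN left in the features or targets propagates through the loss,
    # typically driving it (and the gradients) to nan.
    assert not np.isnan(x_training).any(), "x_training contains NaN values"
    assert not np.isnan(x_target).any(), "x_target contains NaN values"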

I would like to point out something that is very important and has unfortunately been neglected: mean_squared_error is not an invalid loss function for classification.

The mathematical properties of cross-entropy, together with the assumptions behind mean_squared_error (neither of which I will expand on in this comment), make the latter inappropriate, or at least worse than cross-entropy, for training on classification problems.
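
To make the regression/classification distinction concrete, here is a minimal sketch; the layer sizes and the input_dim value are placeholders, not taken from the question:

    from keras.models import Sequential
    from keras.layers import Dense

    # Regression: linear output, MSE loss, error metrics ('accuracy' is meaningless here)
    regressor = Sequential()
    regressor.add(Dense(20, activation='relu', input_dim=28))  # input_dim is a placeholder
    regressor.add(Dense(1))  # linear activation by default
    regressor.compile(optimizer='adam', loss='mean_squared_error',
                      metrics=['mean_absolute_error'])

    # Binary classification: sigmoid output, cross-entropy loss, 'accuracy' is meaningful
    classifier = Sequential()
    classifier.add(Dense(20, activation='relu', input_dim=28))
    classifier.add(Dense(1, activation='sigmoid'))
    classifier.compile(optimizer='adam', loss='binary_crossentropy',
                       metrics=['accuracy'])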

Comments
  • Why do you say my output layer is using relu? From the source code it seems that if you don't specify anything there is no activation function. Am I missing anything? On the other hand, can I only use 'accuracy' with a classification model? I'm a bit confused at this point :)
  • @Notbad The default activation function does indeed appear to be linear. I updated my answer. However, that doesn't change the fact that, as currently written, the model corresponds to a regression problem. Regarding your second question: 'accuracy' can indeed be used only for classification (as it measures the percentage of correct labels).
  • Here are a couple of links that I found helpful when trying to understand regression models while working with Keras. The explanations in this GitHub issue are extremely useful to someone who is just getting started. Also, here is an interesting SO question.
  • What is the point of adding the loss function as a metric a second time?