Keras giving same loss on every epoch

I am a newbie to Keras.

I ran it on a dataset where my objective was to reduce the log loss. For every epoch it gives me the same loss value. I am confused about whether I am on the right track or not.

For example:

Epoch 1/5
91456/91456 [==============================] - 142s - loss: 3.8019 - val_loss: 3.8278
Epoch 2/5
91456/91456 [==============================] - 139s - loss: 3.8019 - val_loss: 3.8278
Epoch 3/5
91456/91456 [==============================] - 143s - loss: 3.8019 - val_loss: 3.8278
Epoch 4/5
91456/91456 [==============================] - 142s - loss: 3.8019 - val_loss: 3.8278
Epoch 5/5
91456/91456 [==============================] - 142s - loss: 3.8019 - val_loss: 3.8278

Here 3.8019 is the same in every epoch. It is supposed to decrease.



Solution 1:[1]

Try decreasing your learning rate to 0.0001 and use Adam. What is your current learning rate?

It's actually not clear whether the problem is the learning rate or the model's complexity. Could you explain a bit further by answering these questions:

  1. What is your data size, and what is the data about?
  2. What is your model's complexity? We can judge whether the complexity matches the data: if your dataset is large, you need a more complex model.
  3. Did you normalize your outputs? For inputs it is less critical, since unnormalized inputs can still produce results, but if your target values are numbers larger than 1, you need to normalize them. The activation function of a model's last layer is usually sigmoid, softmax, or tanh, which squeezes the output into the range 0 to 1 or -1 to 1. Normalize your targets to match your final activation function, then invert the scaling on the predictions to recover real-world values.

Since you're new to deep learning, this advice should be enough to check whether there's a problem with your model. Can you please check these points and reply?
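As a minimal sketch of the first suggestion, here is how you could compile a model with Adam and an explicit learning rate of 0.0001 instead of the default (the model architecture and loss here are placeholders, not the asker's actual setup):

```python
from tensorflow import keras

# Hypothetical model for illustration; adjust the shapes to your data.
model = keras.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=(20,)),
    keras.layers.Dense(3, activation='softmax'),
])

# Set the learning rate explicitly rather than relying on the default (0.001).
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    loss='categorical_crossentropy',
)
```

If the loss starts moving at the lower rate, the original rate was likely too high and the optimizer was overshooting.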

Solution 2:[2]

Usually this problem occurs when the model you are training does not have enough capacity, or when the cost function is not appropriate for the task. It can also happen that, by mistake, the data fed into the model was not prepared correctly, so the labels do not match their samples; in that case the model has nothing consistent to learn from and cannot decrease the loss.
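A quick sanity check along these lines is to confirm that features and labels still line up and that the labels actually vary (the arrays below are hypothetical stand-ins for your real data):

```python
import numpy as np

# Hypothetical arrays standing in for your real features and labels.
X = np.random.rand(100, 20)
y = np.random.randint(0, 3, size=100)

# Every sample must have exactly one label...
assert len(X) == len(y), "feature/label row counts differ"

# ...and the labels should vary; a (nearly) constant label column
# gives the model nothing to learn and the loss plateaus.
print("label distribution:", np.bincount(y))
```

Shuffling features and labels with different permutations is a common way this mismatch is introduced, so it is worth checking any shuffling or merging steps in your pipeline.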

Solution 3:[3]

I was having the same issue and was using the following model:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(10, activation='relu', input_shape=(n_cols,)),
    Dense(3, activation='softmax')
])

I realized that my problem was actually a regression problem, yet I was using 'softmax' as the final-layer activation (which is applicable to classification problems) instead of something else. When I modified the code as below, the issue of getting the same loss value in every epoch was resolved:

model = Sequential([
    Dense(10, activation='relu', input_shape=(n_cols,)),
    Dense(3, activation='relu'),
    Dense(1)
])

So the problem was caused by using a classification activation function for a regression problem, or vice versa. You may want to check whether you are making the same mistake.
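The general rule behind this fix is to pair the final activation with a matching loss. As a sketch (with `n_cols` assumed here for illustration): a regression head uses a linear output with a regression loss such as MSE, while a classification head uses softmax with a cross-entropy loss.

```python
from tensorflow import keras

n_cols = 20  # assumed input width, for illustration only

# Regression: linear (no activation) output, mean squared error loss.
reg_model = keras.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=(n_cols,)),
    keras.layers.Dense(1),  # unbounded output for continuous targets
])
reg_model.compile(optimizer='adam', loss='mse')

# Classification: softmax output, cross-entropy loss.
clf_model = keras.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=(n_cols,)),
    keras.layers.Dense(3, activation='softmax'),
])
clf_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```

Mixing the pairs, such as softmax on a continuous target, can leave the loss stuck exactly as described in the question.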

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Mehmet Burak Sayıcı
Solution 2 Pranjal Sahu
Solution 3