Accuracy decreases with increasing batches in a neural network
I am training a Keras neural network on a large dataset. I observed that the accuracy is high in the initial batches of an epoch but decreases over the later batches.
Here is my code:
```python
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2

model_nn = keras.Sequential()
model_nn.add(Dense(352, input_dim=28, activation='relu', kernel_regularizer=l2(0.001)))
model_nn.add(Dense(384, activation='relu', kernel_regularizer=l2(0.001)))
model_nn.add(Dense(288, activation='relu', kernel_regularizer=l2(0.001)))
model_nn.add(Dense(448, activation='relu', kernel_regularizer=l2(0.001)))
model_nn.add(Dense(320, activation='relu', kernel_regularizer=l2(0.001)))
model_nn.add(Dense(1, activation='sigmoid'))
model_nn.compile(loss='binary_crossentropy',
                 optimizer=keras.optimizers.Adam(learning_rate=0.0001),
                 metrics=['accuracy'])
history = model_nn.fit(X_train1, y_train1,
                       validation_data=(X_test, y_test),
                       epochs=300, batch_size=100, verbose=1)
```
Below is the reported accuracy for the first and last batches of an epoch:
```
    1/80410 [..............................] - ETA: 24:12 - loss: 0.4017 - accuracy: 0.8500
    2/80410 [..............................] - ETA: 1:14:13 - loss: 0.4294 - accuracy: 0.8300
    3/80410 [..............................] - ETA: 1:24:09 - loss: 0.4797 - accuracy: 0.7933
...
80408/80410 [============================>.] - ETA: 0s - loss: 0.5229 - accuracy: 0.7531
80409/80410 [============================>.] - ETA: 0s - loss: 0.5229 - accuracy: 0.7531
```
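One thing I noticed while reading these logs: as far as I understand, the Keras progress bar shows the *running mean* of each metric over all batches seen so far in the epoch, not the accuracy of the current batch alone. A minimal sketch of how such a running mean behaves (the per-batch accuracies below are hypothetical, not from my run):

```python
# Hypothetical per-batch accuracies early in an epoch.
batch_accuracies = [0.85, 0.81, 0.76, 0.74, 0.73]

# Running mean after each batch, mimicking a cumulative metric readout.
running_means = []
total = 0.0
for i, acc in enumerate(batch_accuracies, start=1):
    total += acc
    running_means.append(total / i)

print(running_means)
```

With only a few batches averaged, the first readings are noisy; as more batches accumulate, the displayed value converges toward the epoch-wide accuracy, so an early high reading can drift down even without the model getting worse.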
I observe a similar trend across many epochs. What could be the reason behind this?
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
