Accuracy decreases in concatenated LSTM when the number of epochs is increased

First, I'm new to these concepts. I built a model with two LSTM branches and concatenated them. When I train the model for 2 epochs it works fine (I believe so because it reaches around 90% accuracy; val_accuracy=0.9924, val_loss=0.0392, and both values keep changing from epoch to epoch). But when I increase the number of epochs (e.g. epochs=10), the val_accuracy shown during training gets stuck at the same value (0.5601) and val_loss is also higher and stuck (6.7802).

The dataset was preprocessed using one-hot encoding, and the input length of the embedding layer is 50.
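For context, the preprocessing step looks roughly like this. This is a minimal sketch, assuming Keras's one_hot and pad_sequences utilities and a hypothetical corpus list of sentences; voc_size and sent_length are assumed values reused in the model below:

from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences

voc_size = 5000      # assumed vocabulary size
sent_length = 50     # padded sequence length, matching the Input shape below

# one-hot encode each sentence into integer indices, then pad to a fixed length
onehot_repr = [one_hot(sentence, voc_size) for sentence in corpus]
x = pad_sequences(onehot_repr, padding='pre', maxlen=sent_length)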

This is the code for the concatenated LSTM:

from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

# voc_size, embedding_vector_features and sent_length come from the preprocessing step
inputs = Input(shape=(sent_length,))

# first LSTM branch
lstm1 = Embedding(voc_size, embedding_vector_features, input_length=sent_length)(inputs)
lstm1 = LSTM(50)(lstm1)
lstm1 = Dense(1, activation='sigmoid')(lstm1)

# second LSTM branch
lstm2 = Embedding(voc_size, embedding_vector_features, input_length=sent_length)(inputs)
lstm2 = LSTM(50)(lstm2)
lstm2 = Dense(1, activation='sigmoid')(lstm2)

# concatenate the two branch outputs; note that the final Dense layer has no
# activation even though the model is compiled with binary_crossentropy
concatenated = Concatenate(name='concatenate_1')([lstm1, lstm2])
output1 = Dense(1, name='dense_1')(concatenated)

model = Model(inputs=inputs, outputs=output1)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=20, batch_size=32)

Is there any way to resolve this issue? Thanks.



Solution 1:[1]

  • Try increasing the embedding dimensions.
  • Add some regularization, such as Dropout or BatchNormalization.
  • Change the learning rate to a smaller value and reduce the batch_size (see the sketch after this list).
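One way these suggestions could be applied is sketched below. This is a minimal illustration, not the original poster's exact model: the embedding dimension of 100, the dropout rates, the learning rate of 1e-4 and the batch size of 16 are made-up values, and voc_size, sent_length, x_train, y_train, x_test and y_test are assumed to exist as in the question.

from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Dropout, BatchNormalization, Concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

inputs = Input(shape=(sent_length,))

def branch(inputs):
    # larger embedding dimension plus dropout inside and after the LSTM
    x = Embedding(voc_size, 100, input_length=sent_length)(inputs)
    x = LSTM(50, dropout=0.2, recurrent_dropout=0.2)(x)
    x = BatchNormalization()(x)
    x = Dropout(0.3)(x)
    return Dense(1, activation='sigmoid')(x)

# two independent branches, merged as in the question
concatenated = Concatenate()([branch(inputs), branch(inputs)])
# sigmoid on the final layer so binary_crossentropy receives probabilities
output = Dense(1, activation='sigmoid')(concatenated)

model = Model(inputs=inputs, outputs=output)
# smaller learning rate and smaller batch size than in the question
model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=1e-4), metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, batch_size=16)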

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 k.avinash