Loss stuck at 0.69 in Word2Vec-LSTM binary classification
I have a balanced two-class dataset of 4,800 samples. When I train, every epoch shows loss: 0.6932 and accuracy around 49-50%. Here is my Word2Vec embedding-matrix code:
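As far as I understand, 0.6932 is exactly the chance-level loss for balanced binary cross-entropy, which makes me think the model is just predicting ~0.5 for everything and learning nothing. A quick check of that number:

```python
import math

# Binary cross-entropy when the model always outputs p = 0.5 on balanced labels:
# -(y*log(p) + (1-y)*log(1-p)) = -log(0.5) = log(2)
chance_loss = math.log(2)
print(round(chance_loss, 4))  # 0.6931
```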
import numpy as np

vector_size = 100
w2v_weight_matrix = np.zeros((vocab_size, vector_size))
for word, index in tokenizer.word_index.items():
    if index < vocab_size:  # word_index starts at 1, so this guards against out-of-range indices
        if word in id_w2v.wv.key_to_index:
            w2v_weight_matrix[index] = id_w2v.wv[word]
        else:
            w2v_weight_matrix[index] = np.zeros(vector_size)
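One diagnostic I tried (my own sanity check, with toy sizes standing in for my real `vocab_size` and matrix): counting how many rows of the matrix stayed all-zero, i.e. how many vocabulary words were missing from the Word2Vec model. If coverage is very low, the frozen embedding layer has almost no signal to pass to the LSTM.

```python
import numpy as np

# Toy illustration: 5-word vocab, 3-dim vectors; pretend only 2 words were found in Word2Vec
vocab_size, vector_size = 5, 3
w2v_weight_matrix = np.zeros((vocab_size, vector_size))
w2v_weight_matrix[1] = [0.1, 0.2, 0.3]
w2v_weight_matrix[2] = [0.4, 0.5, 0.6]

# Rows with no nonzero entry correspond to words without a pretrained vector
zero_rows = int((~w2v_weight_matrix.any(axis=1)).sum())
coverage = 1 - zero_rows / vocab_size
print(zero_rows, coverage)  # 3 0.4
```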
And here is my Word2Vec-LSTM model code:
model = Sequential()
# embed_dim must equal vector_size (100) so the Constant initializer matches the matrix shape
model.add(Embedding(vocab_size, embed_dim, input_length=max_len,
                    embeddings_initializer=Constant(w2v_weight_matrix),
                    trainable=False))
model.add(Dropout(0.2))
model.add(LSTM(units=100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
Does anyone know what my mistake is and how to fix it? :( Pardon my English.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
