Why is my deep learning model not changing its loss or its accuracy?

I am trying to train a CNN model on 2030 preprocessed eye images; the shape of my input data is (2030, 200, 200, 1). The dataset originally contained 1527 images, and I used imblearn.over_sampling.RandomOverSampler to bring it up to 2030. I constructed the model with Keras; the oversampling step is sketched below, followed by the model definition.
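Roughly, the oversampling looked like this (a sketch with assumed variable names; RandomOverSampler expects a 2D array, so the images are flattened first and reshaped back afterwards):

from imblearn.over_sampling import RandomOverSampler

# Original data: (1527, 200, 200, 1); flatten each image into one row
h, w = 200, 200
x_flat = x_all.reshape(x_all.shape[0], h * w)

# Duplicate minority-class samples until both classes are balanced
ros = RandomOverSampler(random_state=42)
x_res, y_res = ros.fit_resample(x_flat, y_all)

# Back to image shape, (2030, 200, 200, 1) in my case
x_all = x_res.reshape(-1, h, w, 1)
y_all = y_res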

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator

img_rows, img_cols = 200, 200   # 200x200 grayscale images

# Model definition
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', activation='relu',
                 input_shape=(img_cols, img_rows, 1)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (5, 5), padding='same', activation='relu'))
model.add(Conv2D(64, (5, 5), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))   # single sigmoid output for binary classification


optimizer = tf.keras.optimizers.SGD(learning_rate=0.000001)
model.compile(optimizer=optimizer,
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.SpecificityAtSensitivity(0.5),
                       tf.keras.metrics.SensitivityAtSpecificity(0.5),
                       'accuracy'])

# Augmentation

train_datagen = ImageDataGenerator(
    rescale=1./255,
    horizontal_flip=True,
    vertical_flip=True,
    width_shift_range=0.3,
    height_shift_range=0.5,
    rotation_range=10,
    zoom_range=0.2
    )

test_datagen = ImageDataGenerator(rescale=1./255)

train_data = train_datagen.flow(x_train, y_train)
test_data = test_datagen.flow(x_test, y_test)

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
                                                 factor=0.9,
                                                 patience=2,
                                                 min_lr=0.0000000000000000001)

history = model.fit(train_data, epochs=10, batch_size=32,
                    validation_data=test_data, callbacks=[reduce_lr])

I trained the model with different parameters (batch sizes of 32, 64, 128, 256, 512, and 1024; extra convolution layers with 128 and 256 filters; smaller and larger learning rates; different callbacks; dense layer sizes of 32, 64, ..., 1024), but I always get the learning process shown below (one batch-size variation is sketched after the log):

Epoch 1/10
51/51 [==============================] - 14s 238ms/step - loss: 0.6962 - specificity_at_sensitivity_15: 0.4548 - sensitivity_at_specificity_15: 0.4777 - accuracy: 0.4969 - val_loss: 0.6957 - val_specificity_at_sensitivity_15: 0.4112 - val_sensitivity_at_specificity_15: 0.3636 - val_accuracy: 0.4852 - lr: 1.0000e-04
Epoch 2/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6945 - specificity_at_sensitivity_15: 0.4829 - sensitivity_at_specificity_15: 0.4615 - accuracy: 0.5018 - val_loss: 0.6949 - val_specificity_at_sensitivity_15: 0.4467 - val_sensitivity_at_specificity_15: 0.3206 - val_accuracy: 0.4877 - lr: 1.0000e-04
Epoch 3/10
51/51 [==============================] - 12s 227ms/step - loss: 0.6955 - specificity_at_sensitivity_15: 0.4328 - sensitivity_at_specificity_15: 0.4082 - accuracy: 0.5043 - val_loss: 0.6945 - val_specificity_at_sensitivity_15: 0.5584 - val_sensitivity_at_specificity_15: 0.5167 - val_accuracy: 0.4852 - lr: 1.0000e-04
Epoch 4/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6971 - specificity_at_sensitivity_15: 0.4034 - sensitivity_at_specificity_15: 0.4256 - accuracy: 0.5049 - val_loss: 0.6941 - val_specificity_at_sensitivity_15: 0.4010 - val_sensitivity_at_specificity_15: 0.3923 - val_accuracy: 0.4852 - lr: 1.0000e-04
Epoch 5/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6954 - specificity_at_sensitivity_15: 0.4670 - sensitivity_at_specificity_15: 0.4640 - accuracy: 0.4969 - val_loss: 0.6938 - val_specificity_at_sensitivity_15: 0.5584 - val_sensitivity_at_specificity_15: 0.5407 - val_accuracy: 0.4729 - lr: 1.0000e-04
Epoch 6/10
51/51 [==============================] - 12s 227ms/step - loss: 0.6972 - specificity_at_sensitivity_15: 0.4352 - sensitivity_at_specificity_15: 0.3883 - accuracy: 0.4791 - val_loss: 0.6935 - val_specificity_at_sensitivity_15: 0.4772 - val_sensitivity_at_specificity_15: 0.3206 - val_accuracy: 0.4729 - lr: 1.0000e-04
Epoch 7/10
51/51 [==============================] - 12s 227ms/step - loss: 0.6943 - specificity_at_sensitivity_15: 0.4474 - sensitivity_at_specificity_15: 0.4814 - accuracy: 0.5031 - val_loss: 0.6933 - val_specificity_at_sensitivity_15: 0.3604 - val_sensitivity_at_specificity_15: 0.4880 - val_accuracy: 0.4729 - lr: 1.0000e-04
Epoch 8/10
51/51 [==============================] - 12s 225ms/step - loss: 0.6974 - specificity_at_sensitivity_15: 0.4609 - sensitivity_at_specificity_15: 0.4355 - accuracy: 0.4926 - val_loss: 0.6930 - val_specificity_at_sensitivity_15: 0.5279 - val_sensitivity_at_specificity_15: 0.5885 - val_accuracy: 0.4655 - lr: 1.0000e-04
Epoch 9/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6945 - specificity_at_sensitivity_15: 0.4425 - sensitivity_at_specificity_15: 0.4777 - accuracy: 0.5031 - val_loss: 0.6929 - val_specificity_at_sensitivity_15: 0.4619 - val_sensitivity_at_specificity_15: 0.3876 - val_accuracy: 0.4655 - lr: 1.0000e-04
Epoch 10/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6977 - specificity_at_sensitivity_15: 0.4389 - sensitivity_at_specificity_15: 0.4367 - accuracy: 0.4766 - val_loss: 0.6927 - val_specificity_at_sensitivity_15: 0.6091 - val_sensitivity_at_specificity_15: 0.5024 - val_accuracy: 0.4951 - lr: 1.0000e-04
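One of the batch-size variations looked roughly like this (a sketch; with generator input the batch size has to be set in flow(), because model.fit ignores its batch_size argument when given a generator):

# Sketch of one batch-size run; 64 stands in for the values I tried
train_data = train_datagen.flow(x_train, y_train, batch_size=64)
test_data = test_datagen.flow(x_test, y_test, batch_size=64)

history = model.fit(train_data, epochs=10,
                    validation_data=test_data, callbacks=[reduce_lr])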

Evaluation on the test data generated from x_test (2 percent of the 2030 images) resulted in:

13/13 [==============================] - 1s 69ms/step - loss: 0.6927 - specificity_at_sensitivity_15: 0.6091 - sensitivity_at_specificity_15: 0.5024 - accuracy: 0.4951
Accuracy score is : 0.4950738847255707
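For completeness, these numbers come from something like the following (a sketch; I am assuming sklearn's accuracy_score, a 0.5 threshold on the sigmoid outputs, and that x_test still holds raw 0-255 pixel values):

from sklearn.metrics import accuracy_score

# Keras evaluation on the test generator (the 13/13 line above)
model.evaluate(test_data)

# The "Accuracy score" line: threshold the sigmoid outputs at 0.5
y_prob = model.predict(x_test / 255.0)   # same rescaling as the generators apply
y_pred = (y_prob > 0.5).astype(int).ravel()
print('Accuracy score is :', accuracy_score(y_test, y_pred))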

How can I improve my accuracy score? I have tried every approach I could think of, but the best I could reach was 53%. Similar code I have seen on the internet reaches 76%. Since this is a medical imaging project, I would like to get better accuracy.



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
