DNN - Find optimal dropout rate

Is there a way to find the optimal dropout rate for my DNN without retraining it?

Some related subquestions:

  • Is it sensible to put a dropout layer after every dense layer?
  • Would it be enough to have a single dropout layer before the output and retrain only the last layer instead of the whole model? (I sketch what I mean after the code below.)
Here is my current setup:

import pandas as pd
import matplotlib.pyplot as plt
from tensorflow import keras

# Load CIFAR-10 and split off a validation set
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()

X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]

# 20 hidden layers of 100 units, each followed by dropout
model_dropout = keras.models.Sequential()
model_dropout.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
    model_dropout.add(keras.layers.Dense(100, activation="relu"))
    model_dropout.add(keras.layers.Dropout(0.5))  # dropout after each dense layer: see subquestion 1

model_dropout.add(keras.layers.Dense(10, activation="softmax"))

# Compile the model
model_dropout.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])

# Train the model
result_dropout = model_dropout.fit(X_train, y_train, epochs=100, validation_data=(X_valid, y_valid))

# Plot the learning curves
pd.DataFrame(result_dropout.history).plot(figsize=(8, 5))
plt.grid(True)
plt.show()

# Evaluate the model
model_dropout.evaluate(X_test, y_test)
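
For reference, the brute-force baseline I'm trying to avoid is a grid search that retrains the whole model once per candidate rate. Roughly like this (build_model is just a hypothetical helper wrapping the architecture above, and the epoch count is cut down for the sketch):

from tensorflow import keras

def build_model(rate):
    # Same architecture as above, parameterised by the dropout rate
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        model.add(keras.layers.Dense(100, activation="relu"))
        model.add(keras.layers.Dropout(rate))
    model.add(keras.layers.Dense(10, activation="softmax"))
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer="adam",
                  metrics=["accuracy"])
    return model

# Full retrain per candidate rate -- exactly the cost I'd like to avoid
results = {}
for rate in [0.1, 0.2, 0.3, 0.4, 0.5]:
    model = build_model(rate)
    model.fit(X_train, y_train, epochs=10,
              validation_data=(X_valid, y_valid), verbose=0)
    results[rate] = model.evaluate(X_valid, y_valid, verbose=0)[1]

best_rate = max(results, key=results.get)
print(results, "best:", best_rate)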
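
To make the second subquestion concrete: what I imagine is freezing the trained hidden layers and retraining only a fresh output layer behind a single dropout layer, so each candidate rate costs one cheap head retrain instead of a full run. A rough sketch (the layer filtering and reuse here are just my assumption of how this could be done in Keras):

from tensorflow import keras

# Reuse the trained hidden layers as a frozen base, dropping the old
# per-layer Dropout layers and the old softmax head
base_layers = [l for l in model_dropout.layers[:-1]
               if not isinstance(l, keras.layers.Dropout)]
for layer in base_layers:
    layer.trainable = False  # freeze the pretrained weights

for rate in [0.1, 0.3, 0.5]:
    # Single dropout layer before a fresh output layer
    head = keras.models.Sequential(
        base_layers + [keras.layers.Dropout(rate),
                       keras.layers.Dense(10, activation="softmax")])
    head.compile(loss="sparse_categorical_crossentropy",
                 optimizer="adam",
                 metrics=["accuracy"])
    # Only the new Dense layer's weights are updated here
    head.fit(X_train, y_train, epochs=5,
             validation_data=(X_valid, y_valid), verbose=0)
    print(rate, head.evaluate(X_valid, y_valid, verbose=0)[1])

My question is whether a comparison like this is a valid proxy for the full per-rate retrain above.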

