Neural Network Hyper-parameter Optimization and Sensitivity Analysis

I am working on a very large dataset in Keras with a single-output neural network. After changing the depth of the network, I observed some improvement in the model's performance. I would therefore now like to perform a systematic, research-grade hyper-parameter optimization (hidden layers, activation functions, number of neurons, epochs, batch size, etc.). However, I was told that GridSearchCV and RandomSearchCV are not suitable options because my dataset is so large. I was wondering whether any of you have experience in this regard or have feedback that could point me in the right direction.
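
For reference, one route that scales better than exhaustive cross-validated grid search on a large dataset is an early-stopping tuner such as Keras Tuner's Hyperband, which discards weak hyper-parameter configurations after only a few epochs instead of training every candidate to completion. The following is a minimal sketch only: the depth range, unit ranges, learning-rate bounds, batch size, and the X_train / y_train arrays are illustrative assumptions, not values taken from the question.

import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    # Depth, width, activation and learning rate are all part of the search space
    model = keras.Sequential()
    for i in range(hp.Int("num_layers", 1, 4)):
        model.add(keras.layers.Dense(
            units=hp.Int(f"units_{i}", min_value=32, max_value=256, step=32),
            activation=hp.Choice("activation", ["relu", "tanh"])))
    model.add(keras.layers.Dense(1))  # single-output network
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="mse")
    return model

tuner = kt.Hyperband(
    build_model,
    objective="val_loss",
    max_epochs=30,              # upper bound on epochs per trial
    factor=3,
    directory="hpo_results",
    project_name="nn_sensitivity")

# X_train / y_train are placeholders for your own (large) dataset
tuner.search(X_train, y_train,
             validation_split=0.2,
             batch_size=1024,
             callbacks=[keras.callbacks.EarlyStopping(patience=3)])

best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hps.values)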



Solution 1:[1]

Use a confusion matrix and a heat map to evaluate the classification performance of your network:

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, classification_report

# Convert predicted probabilities and one-hot labels to class indices
Y_pred = model.predict(X_test)
Y_pred2 = np.argmax(Y_pred, axis=1)
Y_test2 = np.argmax(Y_test, axis=1)

# Plot the confusion matrix as a heat map
cm = confusion_matrix(Y_test2, Y_pred2)
sns.heatmap(cm)
plt.show()

print(classification_report(Y_test2, Y_pred2, target_names=label_names))
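
Note that the argmax calls assume a multi-class, one-hot-encoded target. For a single-output network like the one in the question, np.argmax over axis 1 would always return 0; you would instead threshold the predicted probabilities (for binary classification) or compare Y_pred and Y_test directly with error metrics (for regression).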

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: Golden Lion