Trying to implement Random Search for an LSTM model
I am implementing a random search over LSTM architectures for a forecasting problem. If I train the model only once, it gives me a mean absolute error of 0.023; but if, in the same script, I train a new model again with the same parameters, the result is different: a mean absolute error greater than 0.024. If I keep training more models, the mean absolute error keeps increasing.

After each training I call keras.backend.clear_session() and save each model in a new variable, even though the parameters are the same; I also tried initial_epoch=0. The problem is not limited to retraining the same model: if I train a different model (different parameters such as units or layers) I get a mean absolute error of 0.025, and if I then retrain the model from the previous example, the mean absolute error is again a different value, greater than the expected one.

I found similar posts with no answers: "Looping Different Data Sets and Predicting with LSTM" and "Iterating through different data sets with LSTM without retaining memories of previous training".

Does anyone know how to solve this problem, or a way to implement GridSearch for an LSTM model?
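One common cause of this behaviour is that each new model is initialized from a fresh, unseeded random state, so successive trainings in the same script are not comparable even with identical hyperparameters. A minimal sketch of one possible fix, assuming a TensorFlow-backed Keras: reset every random-number source (Python, NumPy, TensorFlow) before each trial, in addition to clear_session(). The `reset_seeds` name and the commented search loop below are illustrative, not from the original post.

```python
import os
import random

import numpy as np


def reset_seeds(seed=42):
    """Reset every RNG source before each trial so that training the
    same architecture twice starts from the same initial weights."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    try:
        import tensorflow as tf
        tf.keras.backend.clear_session()  # drop graph/state left by the previous model
        tf.random.set_seed(seed)          # seed TF's weight initializers and shuffling
    except ImportError:
        pass  # TensorFlow not installed here; Python/NumPy seeds still apply


# Hypothetical random-search loop; build_model(), X_train and y_train
# stand in for your own model factory and data:
#
# for params in sampled_configs:
#     reset_seeds(42)
#     model = build_model(**params)
#     model.fit(X_train, y_train, verbose=0)
```

Note that even with fixed seeds, GPU kernels can introduce small nondeterminism; on CPU, identical seeds should give identical training runs.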
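As for implementing a grid search without a library wrapper, the enumeration itself is independent of Keras: take the Cartesian product of the candidate values and build one model per configuration. A small sketch, where the `param_grid` values are illustrative placeholders:

```python
from itertools import product

# Candidate hyperparameter values (example numbers, not from the post)
param_grid = {"units": [32, 64], "layers": [1, 2], "lr": [1e-3, 1e-4]}


def grid_configs(grid):
    """Yield every combination of the grid as a dict of hyperparameters."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))


configs = list(grid_configs(param_grid))
# 2 * 2 * 2 = 8 configurations to train, one model each
```

Each dict from `grid_configs` can then be passed to your model-building function inside the same seeded loop used for random search.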
Sources
Source: Stack Overflow, licensed under CC BY-SA 3.0 per its attribution requirements.
