ValueError: Unexpected result of `predict_function` (Empty batch_outputs). Please use `Model.compile(..., run_eagerly=True)`
I tried using GridSearchCV() but got errors there as well, so I wrote a very time-consuming piece of code that returns the metrics I want to use to evaluate my model. Here it is:
import numpy as np
import pandas as pd

def df_to_new(df, window_size):
    # Turn a 1-D series into sliding windows of length `window_size`,
    # with the value that follows each window as its label.
    df_as_np = df.to_numpy()
    X = []
    y = []
    for i in range(len(df_as_np) - window_size):
        row = [[a] for a in df_as_np[i:i + window_size]]
        X.append(row)
        label = df_as_np[i + window_size]
        y.append(label)
    return np.array(X), np.array(y)
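For reference, a minimal sketch of how this helper is called and the shapes it produces (assuming Ac13, used further down, is a one-dimensional pandas Series of the daily values; the placeholder numbers here are only for illustration):

Ac13 = pd.Series(np.arange(100, dtype="float32"))   # placeholder data, illustration only
X, y = df_to_new(Ac13, 5)
print(X.shape)   # (95, 5, 1): samples, window_size, num_features
print(y.shape)   # (95,): one label per window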
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import InputLayer, LSTM, Dropout, Dense
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.metrics import RootMeanSquaredError
from tensorflow.keras.optimizers import Adam
import os

checkpointpath = 'C:\\Users\\USER\\trainingFORLOOP_daily/cp.ckt'
cp = ModelCheckpoint(checkpointpath, save_best_only=True, verbose=1)

# Hyperparameter grid
EPOCH = [30, 150, 300]
learningRates = [0.0001, 0.00001]
batchSize = [15, 20, 40]
win_size = [5, 15, 25]
dropout_rate = 0.2
num_features = 1

for i in learningRates:
    for j in EPOCH:
        for k in batchSize:
            for l in win_size:
                X, y = df_to_new(Ac13, l)
                # Split the data
                perc_train = 0.8
                limit_train = int(np.floor(len(Ac13) * perc_train))
                xtrain, ytrain = X[:limit_train], y[:limit_train]
                xval, yval = X[limit_train:], y[limit_train:]
                # Create the model
                model1 = Sequential()
                model1.add(InputLayer((l, 1)))
                model1.add(LSTM(128))
                model1.add(Dropout(dropout_rate))
                model1.add(Dense(86, 'relu'))
                model1.add(Dropout(dropout_rate))
                model1.add(Dense(1, 'linear'))
                model1.summary()
                model1.compile(loss=MeanSquaredError(),
                               optimizer=Adam(learning_rate=i),
                               metrics=[RootMeanSquaredError()],
                               run_eagerly=True)
                model1.fit(xtrain, ytrain, validation_data=(xval, yval),
                           batch_size=k, epochs=j, callbacks=[cp], shuffle=False)
                model1.save("my_model")
                model1 = load_model("my_model")
                train_predictions = model1.predict(xtrain).flatten()
                train_results = pd.DataFrame(data={'TrainPredictions': train_predictions,
                                                   'Actual values': ytrain})
                train_results
                scale = len(train_predictions)
                val_predictions = model1.predict(xval).flatten()
                val_results = pd.DataFrame(data={'ValidatePredictions': val_predictions,
                                                 'Validation values': yval})
I am getting the following error (full traceback):
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_19052/292377201.py in <module>
51 plt.legend(bbox_to_anchor =(0.75, 1.15), ncol = 2)
52 plt.show()
---> 53 val_predictions = model1.predict(xval).flatten() # flatten() removes the brackets inside the data
54 val_results = pd.DataFrame(data={'Validate Predictions':val_predictions,'Validation values':yval}) #yval are the actual values
55 val_results
~\anaconda3\envs\tf-gpu-cuda8\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
~\anaconda3\envs\tf-gpu-cuda8\lib\site-packages\keras\engine\training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
1995 callbacks.on_predict_batch_end(end_step, {'outputs': batch_outputs})
1996 if batch_outputs is None:
-> 1997 raise ValueError('Unexpected result of `predict_function` '
1998 '(Empty batch_outputs). Please use '
1999 '`Model.compile(..., run_eagerly=True)`, or '
ValueError: Unexpected result of `predict_function` (Empty batch_outputs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`.
Any suggestions? I took the same approach with hourly data and it worked quite well without any errors. The hourly code is the same as this daily code; the only difference is the data itself: the daily series was obtained by summing the hourly data over each day.
Solution 1:[1]
There are a few things wrong with your code.
You are creating your model inside the training loop. That is wrong unless you actually want to build and train a brand-new model on every iteration of the loop. Your model should be defined and compiled before you start training (see the sketch below).
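As a rough illustration of that point (a sketch under assumptions, not the asker's actual setup: build_model is a made-up helper and the shapes/hyperparameters are placeholders), the model construction and compilation can be pulled out into one place that runs before fit() is called:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer, LSTM, Dropout, Dense
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.metrics import RootMeanSquaredError
from tensorflow.keras.optimizers import Adam

def build_model(window_size, learning_rate, dropout_rate=0.2):
    # Same architecture as in the question, defined and compiled in one place.
    model = Sequential([
        InputLayer((window_size, 1)),
        LSTM(128),
        Dropout(dropout_rate),
        Dense(86, 'relu'),
        Dropout(dropout_rate),
        Dense(1, 'linear'),
    ])
    model.compile(loss=MeanSquaredError(),
                  optimizer=Adam(learning_rate=learning_rate),
                  metrics=[RootMeanSquaredError()])
    return model

# Define and compile once (or once per hyperparameter configuration), then train:
model = build_model(window_size=5, learning_rate=1e-4)
# model.fit(xtrain, ytrain, validation_data=(xval, yval), epochs=30, batch_size=15)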
Depending on which version of TensorFlow you are using, you may also need to enable eager execution before you declare the model. That lets you train the model without explicitly creating a session and a scope.
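For example, a minimal sketch of turning eager execution on before building the model (which call you need depends on your TensorFlow version):

import tensorflow as tf

# TensorFlow 2.x: make tf.function-wrapped steps (including Keras train/predict steps) run eagerly.
tf.config.run_functions_eagerly(True)

# TensorFlow 1.x: eager execution must be enabled explicitly, right after import
# and before any model or graph is created.
# tf.compat.v1.enable_eager_execution()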
Hope this helps!
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Amruta Muthal |
