WARNING:tensorflow:6 out of the last 74 calls to <function Model.make_predict_function.<locals>.predict_function> triggered tf.function retracing

I am getting the following warning:

WARNING:tensorflow:6 out of the last 74 calls to <function Model.make_predict_function.<locals>.predict_function at 0x00000174C6C6E430> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing.
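As a side note on the causes the warning lists: when predicting a single small batch many times in a Python loop, repeated calls to model.predict go through Keras's traced predict_function, and that is typically where this warning comes from. A minimal sketch of the cheaper alternative of calling the model directly on a tensor (the layer sizes here are illustrative, not taken from the question):

```python
import numpy as np
import tensorflow as tf

# Toy model; the shapes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, input_shape=(5, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

batch = np.random.rand(1, 5, 1).astype("float32")

# model.predict builds and may retrace a predict_function; for a
# single small batch, calling the model directly reuses the existing
# traced graph and avoids the warning.
pred = model(batch, training=False).numpy()
print(pred.shape)  # (1, 1)
```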

when I run the following code:

#................................define model...........................
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from sklearn.preprocessing import MinMaxScaler

model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()

for k, v in enumerate(nse.get_fno_lot_sizes()):
    if v not in ('^NSEI', 'NIFTYMIDCAP150.NS', 'NIFTY_FIN_SERVICE.NS', '^NSEBANK'):

        df = getData(totalRows=2520, freqDays=freqDays, fileName=v + '.NS')

        #-----------create training set------------------
        train = df[['close']].iloc[:int(len(df) * 0.8)]
        scaler = MinMaxScaler()
        scaler.fit(train)
        scaled_train = scaler.transform(train)

        #------------------------------------------------
        generator = TimeseriesGenerator(scaled_train, scaled_train, length=n_input, batch_size=1)

        #------------------------------------------------
        # fit model
        model.fit(generator, epochs=10)

        # predict the next value from the last n_input closes
        new_pred = []
        first_eval_batch = scaler.transform(df[['close']].iloc[-n_input:])
        current_batch = first_eval_batch.reshape((1, n_input, n_features))
        current_pred = model.predict(current_batch)[0]
        new_pred.append(current_pred)
        current_pred = scaler.inverse_transform(new_pred)[0][0]
        print(current_pred)

Should I define the model inside the for loop for every new set of training data?

Is there a better way to do this? Basically, I am trying to train the model on new data in a loop and then predict.
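One common pattern for "fresh training per symbol without rebuilding the model" is to capture the model's initial weights once and reset them at the top of each iteration; the compiled model (and its traced functions) is reused, so nothing is re-created in the loop. A minimal sketch under that assumption, with a toy model and random data standing in for the real ones:

```python
import numpy as np
import tensorflow as tf

# Small illustrative model; sizes are not the ones from the question.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Capture the freshly initialized weights once, outside the loop.
init_weights = model.get_weights()

for _ in range(2):  # stand-in for looping over symbols
    # Reset to the initial weights so each dataset trains from scratch,
    # while keeping the same compiled model (no rebuild in the loop).
    model.set_weights(init_weights)
    x = np.random.rand(8, 3).astype("float32")
    y = np.random.rand(8, 1).astype("float32")
    model.fit(x, y, epochs=1, verbose=0)
```

Without the reset, each iteration fine-tunes the weights left over from the previous symbol, which may or may not be what you want.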

Also, after iterating for a while, I start getting nan as the loss from model.fit(generator, epochs=10), like this:

Epoch 1/10
480/480 [==============================] - 8s 16ms/step - loss: nan
Epoch 2/10
480/480 [==============================] - 6s 13ms/step - loss: nan
Epoch 3/10
480/480 [==============================] - 6s 13ms/step - loss: nan
Epoch 4/10
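On the NaN loss: when looping over many instruments, a frequent culprit is one series containing NaNs (which poison the loss immediately) or a constant series (MinMaxScaler then divides by zero). A minimal sanity check before fitting, assuming a pandas DataFrame with a 'close' column; the frame below is a hypothetical stand-in for the real df from getData():

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for the real `df` from getData().
df = pd.DataFrame({"close": [100.0, 101.5, np.nan, 103.2, 103.2]})

train = df[["close"]]

# NaNs anywhere in the training window will make the loss nan.
has_nan = bool(train["close"].isna().any())

# A constant series gives MinMaxScaler a zero range to divide by.
is_constant = train["close"].nunique() <= 1

if has_nan or is_constant:
    print("skipping symbol: bad data")
else:
    print("data looks usable")
```

Exploding gradients with activation='relu' in the LSTM can also drive the loss to nan; clipping (e.g. an optimizer with clipnorm) or the default tanh activation are the usual mitigations.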


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
