How to add a new array parameter to a custom loss function in Keras?

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import backend as K
    from tensorflow.keras.layers import Input, LSTM, Dense
    from tensorflow.keras.models import Model

    def custom_loss(y, f, Qsim_t):
        # min_v, max_v, f1, f2 and q are defined globally elsewhere
        # (min-max scaling bounds, rescaling range and the quantile level).

        # Undo the min-max scaling on the true values...
        x_std = (y - min_v) / (max_v - min_v)
        y = x_std * (f1 - f2) + f2

        # ...and on the predictions.
        x_std = (f - min_v) / (max_v - min_v)
        f = x_std * (f1 - f2) + f2

        # Residuals against the simulated series Qsim_t.
        y = Qsim_t - y
        f = Qsim_t - f

        # Pinball (quantile) loss on the difference of the residuals.
        e = y - f
        loss = K.mean(K.maximum(q * e, (q - 1) * e), axis=-1)
        return loss
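For reference, K.maximum(q*e, (q-1)*e) is the standard pinball (quantile) loss applied to e. A quick standalone smoke test, with hypothetical values for the assumed globals, shows the per-sample output shape:

    # Hypothetical globals, only for this smoke test.
    min_v, max_v, f1, f2, q = 0.0, 1.0, 1.0, 0.0, 0.5

    y_true = tf.ones((32, 1))
    y_pred = tf.zeros((32, 1))
    qsim = tf.ones((32, 1))
    print(custom_loss(y_true, y_pred, qsim).shape)  # (32,), one value per sample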

    # LSTM
    input_layer = Input(shape=(X.shape[1], X.shape[2]))
    x = LSTM(50, activation='relu', return_sequences=True)(input_layer)
    x1 = LSTM(50, activation='relu', return_sequences=True)(x)
    out = Dense(1)(x1)

    layer_Qsim = Input(shape=(1,))
    target = Input(shape=(1,))

    model = Model(inputs=[target, input_layer, layer_Qsim], outputs=out)
    model.add_loss(custom_loss(target, out, layer_Qsim))
    model.compile(loss=None, optimizer='adam')
    history = model.fit(x=[Y, X, np.array(Qsim_t)], y=None, shuffle=False, epochs=3)
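Note that with both LSTM layers using return_sequences=True, out keeps the timestep axis while target does not. This can be checked before compiling (the 12 here is an assumption matching the error below):

    print(out.shape)     # (None, 12, 1) -- return_sequences=True keeps each timestep
    print(target.shape)  # (None, 1)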

Hi, my loss function needs to calculate a custom error between (Qsim_t - y_true) and (Qsim_t - y_pred), so I pass the NumPy array Qsim_t into the loss function as an additional model input. However, when I fit the model with this custom loss, the following error occurs:

    InvalidArgumentError: Incompatible shapes: [32,1] vs. [32,12,1]
        [[node gradient_tape/model/tf.math.subtract_3/BroadcastGradientArgs
        (defined at /opt/conda/lib/python3.7/site-packages/keras/optimizer_v2/optimizer_v2.py:464)]]
        [Op:__inference_train_function_593472]

    Errors may have originated from an input operation.
    Input Source operations connected to node
    gradient_tape/model/tf.math.subtract_3/BroadcastGradientArgs:
        In[0] gradient_tape/model/tf.math.subtract_3/Shape:
        In[1] gradient_tape/model/tf.math.subtract_3/Shape_1:

I'm not sure how I could fix this. Thanks.
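A likely cause, judging from the shapes in the message: with return_sequences=True on the second LSTM, out has shape (batch, timesteps, 1) (here [32, 12, 1]) while target and layer_Qsim have shape (batch, 1), so the subtractions inside the loss broadcast to incompatible shapes. A minimal sketch of one possible fix, assuming one prediction per sequence is wanted, is to drop return_sequences on the last LSTM so the loss compares (batch, 1) tensors:

    x = LSTM(50, activation='relu', return_sequences=True)(input_layer)
    x1 = LSTM(50, activation='relu')(x)  # last hidden state only -> (batch, 50)
    out = Dense(1)(x1)                   # (batch, 1), matching target and layer_Qsim

If instead one prediction per timestep is needed, target and layer_Qsim would have to be declared as Input(shape=(X.shape[1], 1)) and fed arrays of shape (samples, timesteps, 1).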



Sources

Source: Stack Overflow, licensed under CC BY-SA 3.0.