Seq2seq trains with LSTM but fails with GRU: not enough values to unpack (expected 3, got 2)

I am trying to run a seq2seq model. It works fine when I use LSTM as the encoder/decoder, but it returns an error when I replace the LSTM with a GRU:

---> 14     encoder_outputs, state_h, state_c = encoder(encoder_inputs)
     15     states = [state_h, state_c]
     16 

ValueError: not enough values to unpack (expected 3, got 2)

I thought GRU and LSTM were very similar (although I do not know much about them), so I am not sure why this error suddenly appears. Any explanation would be appreciated, thank you!
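For context on where the error comes from: with `return_state=True`, an LSTM layer returns three tensors (the output plus its two internal states, hidden `h` and cell `c`), while a GRU has no separate cell state and therefore returns only two. A minimal sketch demonstrating the difference, using the plain `tf.keras` layers (not the CuDNN variants, so it also runs on CPU):

```python
import numpy as np
import tensorflow as tf

# Toy batch: 2 sequences of 30 timesteps with 5 features,
# matching the model's input_shape=(30, 5).
x = np.random.rand(2, 30, 5).astype("float32")

# LSTM keeps two internal states (hidden h and cell c), so with
# return_state=True it returns three tensors: output, h, c.
lstm = tf.keras.layers.LSTM(units=100, return_state=True)
lstm_returns = lstm(x)
print(len(lstm_returns))  # 3

# GRU has a single hidden state, so it returns only two tensors:
# output and that one state.
gru = tf.keras.layers.GRU(units=100, return_state=True)
gru_returns = gru(x)
print(len(gru_returns))  # 2
```

So the three-way unpacking `encoder_outputs, state_h, state_c = encoder(...)` is exactly what raises "expected 3, got 2" once the layer is a GRU.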

This is the full code:

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Reshape
from tensorflow.keras.models import Model

def seq2seq(feature_len=5, after_day=1, input_shape=(30, 5)):

    # Encoder
    encoder_inputs = Input(shape=input_shape) # (timesteps, feature)
    encoder = tf.compat.v1.keras.layers.CuDNNGRU(units=100, return_state=True,  name='encoder', recurrent_initializer='glorot_uniform')
    encoder_outputs, state_h, state_c = encoder(encoder_inputs)
    states = [state_h, state_c]

    # Decoder
    reshapor = Reshape((1, 100), name='reshapor')
    decoder = tf.compat.v1.keras.layers.CuDNNGRU(units=100, return_sequences=True, return_state=True, name='decoder', recurrent_initializer='glorot_uniform')

    # Densor
    densor_output = Dense(units=1, activation='linear', name='output')

    inputs = reshapor(encoder_outputs)
    all_outputs = []

    for _ in range(after_day):
        outputs, h, c = decoder(inputs, initial_state=states)

        #inputs = tdensor(outputs)
        inputs = outputs
        states = [state_h, state_c]

        outputs = densor_output(outputs)
        all_outputs.append(outputs)

    #decoder_outputs = Lambda(lambda x: K.concatenate(x, axis=1))(all_outputs)
    decoder_outputs = all_outputs
    model = Model(inputs=encoder_inputs, outputs=decoder_outputs)

    return model
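For reference, here is a sketch of how the function might be adapted so that the unpacking matches GRU's two return values (unpack two tensors and keep a single-element state list). It uses plain `tf.keras.layers.GRU` rather than the `CuDNNGRU` variant, so that layer choice and the helper name `seq2seq_gru` are assumptions for illustration:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, GRU, Reshape
from tensorflow.keras.models import Model

def seq2seq_gru(feature_len=5, after_day=1, input_shape=(30, 5)):
    # Encoder: GRU returns only (output, state), so unpack two values
    # and keep a one-element state list.
    encoder_inputs = Input(shape=input_shape)  # (timesteps, feature)
    encoder = GRU(units=100, return_state=True, name='encoder',
                  recurrent_initializer='glorot_uniform')
    encoder_outputs, state_h = encoder(encoder_inputs)
    states = [state_h]

    # Decoder
    reshapor = Reshape((1, 100), name='reshapor')
    decoder = GRU(units=100, return_sequences=True, return_state=True,
                  name='decoder', recurrent_initializer='glorot_uniform')
    densor_output = Dense(units=1, activation='linear', name='output')

    inputs = reshapor(encoder_outputs)
    all_outputs = []

    for _ in range(after_day):
        # The decoder likewise yields two values: the output sequence
        # and the single GRU state, which is fed back in the next step.
        outputs, state_h = decoder(inputs, initial_state=states)
        inputs = outputs
        states = [state_h]
        all_outputs.append(densor_output(outputs))

    return Model(inputs=encoder_inputs, outputs=all_outputs)

model = seq2seq_gru()
```

The key change is only the unpacking: everywhere the LSTM version expected `(output, state_h, state_c)`, the GRU version expects `(output, state)`.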


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
