Do we need to define the model every time we train on new data in LSTM?

Suppose I have two datasets, where the first is AAPL stock prices and the second is GOOGL stock prices.

Now, if I define the model as

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()

and then fit it on the first dataset:

    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

    df = pd.read_csv('data\\AAPL.csv', index_col=0)

    # use the first 80% of the closing prices as the training set
    train = df[['close']].iloc[:int(len(df) * 0.8)]
    scaler = MinMaxScaler()
    scaler.fit(train)
    scaled_train = scaler.transform(train)

    # build sliding windows of length n_input over the scaled series
    generator = TimeseriesGenerator(scaled_train, scaled_train,
                                    length=n_input, batch_size=1)

    # fit the model
    model.fit(generator, epochs=10)

If I then want to fit the model on the second dataset, do I need to define it again?

If not, why does the output of model.fit differ when I redefine the model before fitting it on the second dataset?
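(The difference observed above comes down to weights: calling model.fit again continues training from the model's current weights, while redefining the model reinitializes them randomly. A minimal sketch of this, using a small Dense model and random arrays as hypothetical stand-ins for the scaled stock data, shows how snapshotting the initial weights lets you "reset" the model without redefining it:)

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

np.random.seed(0)
# Toy stand-ins for the two scaled datasets (shapes are illustrative only)
x_aapl, y_aapl = np.random.rand(32, 4), np.random.rand(32, 1)
x_googl, y_googl = np.random.rand(32, 4), np.random.rand(32, 1)

model = Sequential([Input(shape=(4,)), Dense(8, activation='relu'), Dense(1)])
model.compile(optimizer='adam', loss='mse')

initial_weights = model.get_weights()   # snapshot the random initialization

model.fit(x_aapl, y_aapl, epochs=2, verbose=0)   # train on the first dataset

# Reusing the same model here would continue from the AAPL-trained weights:
# model.fit(x_googl, y_googl, epochs=2, verbose=0)

# Restoring the snapshot reproduces a fresh model without redefining it,
# which is why a redefined model gives different fit output
model.set_weights(initial_weights)
restored = model.get_weights()
```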



Sources

This content follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
