Category "lstm"

Work around Keras TypeError limitation when calling layer "tf.keras.backend.rnn_1"

I am trying to use Keras to build an attention mechanism for machine translation with an LSTM network. However, I get a TypeError exception in my code. TypeEr

Dimension issues for LSTM sequence model on Keras

I'd like to train a simple LSTM model on sequence data with 128 time steps and 6 features, for a 118-class classification problem. The dimensions of the dataset are shown belo
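
A minimal sketch of the shape setup this question describes, assuming inputs of shape (samples, 128, 6) and integer class labels in 0..117; all array names below are illustrative:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Dummy data: 1000 samples, 128 time steps, 6 features, 118 classes (illustrative).
    X = np.random.rand(1000, 128, 6).astype("float32")
    y = np.random.randint(0, 118, size=(1000,))

    model = keras.Sequential([
        layers.Input(shape=(128, 6)),            # (time_steps, features) -- no batch dimension here
        layers.LSTM(64),                         # last hidden state only: (None, 64)
        layers.Dense(118, activation="softmax")  # one probability per class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # integer labels, no one-hot needed
                  metrics=["accuracy"])
    model.fit(X, y, epochs=2, batch_size=32)

The key point is that the LSTM expects a 3-D input of (batch, time_steps, features), so only the last two dimensions go into the Input shape.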

How can I predict n future values using an RNN that uses multiple features

I have a use case where I need to predict n future values from the given data. E.g.: I have data from Jan 1 2021 - Jan 1 2022. I need to pre
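
One common recipe for getting n future steps out of a one-step-ahead model is recursive forecasting: predict the next step, append it to the input window, and repeat. A rough sketch under the assumption that the trained model maps a (1, window_size, n_features) window to the full feature vector of the next step (trained_model, last_window, and n_steps are hypothetical names):

    import numpy as np

    def forecast_n_steps(trained_model, last_window, n_steps):
        """Recursively predict n_steps into the future.

        last_window: array of shape (window_size, n_features) with the most
        recent observed values. Assumes trained_model predicts the full
        feature vector for the next step, so predictions can be fed back in.
        """
        window = last_window.copy()
        preds = []
        for _ in range(n_steps):
            next_step = trained_model.predict(window[np.newaxis, ...], verbose=0)[0]
            preds.append(next_step)
            window = np.vstack([window[1:], next_step])  # slide the window forward one step
        return np.array(preds)

If the model predicts only one of the features, the remaining features for future dates have to come from somewhere else (known future values, or a separate model), which is the usual catch with multi-feature recursive forecasts.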

What is the benefit of repeating the same layers in an AI architecture

I know that in AI models LSTM is used to extract features from data and dropout is used to focus on the main ones, but I can't understand why people used to rep

Warning when fitting the LSTM model

When I try to fit my model I get an error. Here is the code: model = Sequential() model.add(LSTM(128, activation='relu', input_shape=(trainX.shape[1], trainX.sh

How to handle imbalanced data while using LSTM

I am doing a project on an online signature verification system using RNN LSTMs. In the project, I am facing a problem while using the signatures as LSTM trai
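
For an imbalanced two-class setup such as genuine vs. forged signatures, a common first step is to pass class weights to fit so errors on the minority class cost more. A sketch assuming integer labels y_train in {0, 1} and an already-built Keras model (X_train, y_train, X_val, y_val are illustrative names):

    import numpy as np
    from sklearn.utils.class_weight import compute_class_weight

    # y_train: 1-D array of integer class labels, e.g. 0 = genuine, 1 = forged.
    classes = np.unique(y_train)
    weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
    class_weight = {int(c): w for c, w in zip(classes, weights)}  # e.g. {0: 0.6, 1: 2.8}

    model.fit(X_train, y_train,
              validation_data=(X_val, y_val),
              epochs=20,
              class_weight=class_weight)  # minority-class mistakes are penalised more

Oversampling the minority class (or generating extra forged-signature sequences) is the other common route; class weights are simply the cheapest thing to try first.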

How to perform Recursive Feature Elimination (RFE) with LSTM as estimator in Python?

I am trying to identify the important features in a data frame containing stock data. I plan on using LSTM to predict closing prices later on. I currently have
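
sklearn's RFE needs an estimator exposing coef_ or feature_importances_, which a Keras LSTM does not, so one pragmatic workaround is to rank the features with a tree-based surrogate on a tabular view of the data and then train the LSTM only on the selected columns. A hedged sketch of that surrogate approach; df and the "close" target column are illustrative:

    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import RFE

    # df: DataFrame of per-day stock features; "close" is the value the LSTM will predict.
    X = df.drop(columns=["close"])
    y = df["close"]

    # Rank features with a surrogate model that RFE understands.
    selector = RFE(estimator=RandomForestRegressor(n_estimators=100, random_state=0),
                   n_features_to_select=5, step=1)
    selector.fit(X, y)

    selected = X.columns[selector.support_].tolist()
    print("Columns to keep for the LSTM:", selected)

The ranking is only as good as the surrogate, so permutation importance computed on the trained LSTM itself is the more faithful (but slower) alternative.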

How to set up an LSTM to use n-grams instead of sequence length?

I currently have an LSTM which uses sequence length as input, but this only allows the LSTM to predict when the input length is equal to the used sequence lengt

Training simple CNN-LSTM model

I have a task for my project paper and I don't understand how to train the model. This model is supposed to take an image and segment it into different classes. The h

model.fit in a for loop for K-fold cross-validation

I am trying to code K-fold cross-validation with an LSTM architecture, but I get this error (edit): Traceback (most recent call last): File "/Users/me/Deskt
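
A frequent source of errors (and of misleading scores) in this pattern is reusing one compiled model across folds; rebuilding and recompiling the model inside the loop keeps every fold independent. A rough sketch, where X and y are assumed to be NumPy arrays of shape (samples, timesteps, features) and (samples,):

    import numpy as np
    from sklearn.model_selection import KFold
    from tensorflow import keras
    from tensorflow.keras import layers

    def build_model(input_shape):
        # A fresh, freshly compiled model for each fold.
        model = keras.Sequential([
            layers.Input(shape=input_shape),
            layers.LSTM(64),
            layers.Dense(1)
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    kf = KFold(n_splits=5, shuffle=True, random_state=42)
    scores = []
    for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
        model = build_model(input_shape=X.shape[1:])   # rebuild so folds don't share weights
        model.fit(X[train_idx], y[train_idx], epochs=5, batch_size=32, verbose=0)
        scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0))
        print(f"fold {fold}: val loss = {scores[-1]:.4f}")
    print("mean val loss:", np.mean(scores))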

Tensorflow error: ValueError: Shapes (128, 100) and (128, 100, 139) are incompatible

I am trying to use the Functional API for my model, but I don't understand why I get this error: ValueError: Shapes (128, 100) and (128, 100, 139) are incompatible. My code:
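
This particular pair of shapes usually means the model emits a per-timestep softmax of shape (batch, 100, 139) while the labels are integer class IDs of shape (batch, 100); categorical_crossentropy expects one-hot targets and raises exactly this ValueError. Switching to sparse_categorical_crossentropy (or one-hot encoding the labels) is the usual fix. A hedged sketch with made-up sizes matching the error message:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    vocab_size, seq_len = 139, 100

    inputs = keras.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, 64)(inputs)
    x = layers.LSTM(128, return_sequences=True)(x)               # keep all 100 time steps
    outputs = layers.Dense(vocab_size, activation="softmax")(x)  # (batch, 100, 139)
    model = keras.Model(inputs, outputs)

    # Labels are integer IDs per time step -- shape (batch, 100), not one-hot.
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    X = np.random.randint(0, vocab_size, size=(128, seq_len))
    y = np.random.randint(0, vocab_size, size=(128, seq_len))
    model.fit(X, y, epochs=1, batch_size=128)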

LSTM model fails

model = Sequential() model.add(LSTM(units=32, return_sequences=True, input_shape=(training.shape[1],1))) model.add(Dropout(0.2)) mo

What is the prediction value of this LSTM neural network?

I just implemented an LSTM, but I'm not sure if I interpreted the structure right. In this context, is testPredict = model.predict(Xtest) the last value of the se

PCA for Recurrent Neural Networks (LSTM) - Shall I use PCA for target variables too?

I have a seasonal timeseries dataset containing 3 target variables and n feature variables. I am trying to apply a PCA algorithm before feeding the data to a si
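
The usual practice is to fit PCA on the feature columns only and leave the targets out of it (at most standardising them), because projecting the targets changes what the network is asked to predict and makes it awkward to invert the forecasts. A small sketch of that split; features and targets are illustrative arrays of shape (n_samples, n_features) and (n_samples, 3):

    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    feat_scaler = StandardScaler()
    features_scaled = feat_scaler.fit_transform(features)

    pca = PCA(n_components=0.95)                 # keep components explaining 95% of the variance
    features_pca = pca.fit_transform(features_scaled)

    # Targets are only scaled (optionally), never passed through PCA, so
    # predictions can be inverted with target_scaler.inverse_transform.
    target_scaler = StandardScaler()
    targets_scaled = target_scaler.fit_transform(targets)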

How to predict the stock price for the next 30 days after the LSTM model has predicted the test_set?

I've used a dataset containing the closing price of a particular stock for 5 years. It has closing prices for 1231 days. The train_set consists of 987 days and the
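
Once the test set has been evaluated, forecasting 30 further days usually means feeding the model its own predictions: take the last look-back window of scaled prices, predict one day, append it, repeat 30 times, then invert the scaling. A rough sketch assuming a single-feature model with a 60-day window and a fitted MinMaxScaler (scaled_series, model, and scaler are illustrative names):

    import numpy as np

    look_back, horizon = 60, 30

    # scaled_series: 1-D array of all 1231 scaled closing prices.
    window = scaled_series[-look_back:].reshape(1, look_back, 1)

    future_scaled = []
    for _ in range(horizon):
        next_val = model.predict(window, verbose=0)[0, 0]             # next day's scaled price
        future_scaled.append(next_val)
        window = np.append(window[:, 1:, :], [[[next_val]]], axis=1)  # slide the window forward

    future_prices = scaler.inverse_transform(
        np.array(future_scaled).reshape(-1, 1)).ravel()
    print(future_prices)  # 30 predicted closing prices

Errors compound over the 30 recursive steps, so the tail of such a forecast should be read with appropriate scepticism.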

Trying to copy LSTM code, getting the error "x and y must have same first dimension, but have shapes (1103,) and (275,)"

I'm trying to copy an LSTM model that I found here: Stock Market-Predict volume with LSTM model. I'm getting stuck on the last line of code. Specifically, th
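
That message comes from matplotlib rather than from the LSTM: plt.plot is being given an x axis with 1103 points (the full series index) and a y series with 275 points (the test-split predictions). Plotting the predictions against only the matching slice of the index fixes it. A hedged sketch with hypothetical names (dates, actual_volume, test_pred):

    import matplotlib.pyplot as plt

    # dates: length 1103 (whole series); test_pred: length 275 (test-set predictions).
    plt.plot(dates, actual_volume, label="actual")                   # lengths match: 1103 vs 1103
    plt.plot(dates[-len(test_pred):], test_pred, label="predicted")  # align x with the 275 predictions
    plt.legend()
    plt.show()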

ValueError: Input 0 of layer "lstm" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 1024)

I was following the Transfer learning with YAMNet for environmental sound classification tutorial. Here is the link: https://www.tensorflow.org/tutorials/audio/tran
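
In that tutorial the YAMNet embeddings arrive as (None, 1024), i.e. one 1024-dimensional vector per example with no time axis, while an LSTM layer expects a 3-D (batch, timesteps, features) input. One workaround is simply to add a time axis with a Reshape layer; a more meaningful one is to group several consecutive frame embeddings per clip into (n_frames, 1024) sequences. A sketch of the minimal shape fix (num_classes is illustrative):

    from tensorflow import keras
    from tensorflow.keras import layers

    num_classes = 2  # illustrative

    inputs = keras.Input(shape=(1024,))           # one YAMNet embedding per example
    x = layers.Reshape((1, 1024))(inputs)         # add a time axis -> (None, 1, 1024), ndim=3
    x = layers.LSTM(64)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.summary()

With a single time step the LSTM adds little over a Dense layer, which is why grouping the per-frame embeddings of each clip into a real sequence is usually the better option.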

Missing required positional argument:

I tried to implement federated learning based on the LSTM approach. def create_keras_model(): model = Sequential() model.add(LSTM(32, input_shape=(3,1))

When predicting, should we scale unseen inputs and un-scale a model's outputs?

I am new to machine learning, and I followed this tutorial to implement an LSTM model in Keras/TensorFlow: https://www.tensorflow.org/tutorials/structured_data/tim
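
The usual rule is: fit the scaler on the training data only, apply that same fitted scaler (transform, not fit_transform) to any unseen inputs, and call inverse_transform on the model's outputs to get predictions back in the original units. A small sketch with separate input and target scalers, assuming the scaling happens on the flat 2-D feature table before it is windowed into LSTM sequences; X_train, y_train, X_new, and model are illustrative names:

    from sklearn.preprocessing import StandardScaler

    # Separate scalers for inputs and target keep inverse_transform simple.
    x_scaler = StandardScaler().fit(X_train)             # statistics from training data only
    y_scaler = StandardScaler().fit(y_train.reshape(-1, 1))

    X_new_scaled = x_scaler.transform(X_new)             # scale unseen inputs with the SAME scaler
    pred_scaled = model.predict(X_new_scaled)            # the model works in scaled space

    pred = y_scaler.inverse_transform(pred_scaled)       # back to the original units

If a single scaler was fitted on all columns at once, the prediction has to be placed back into a full-width array before inverse_transform, which is a common source of shape errors.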

How to train an LSTM model with variable-length sequence input

I'm trying to train an LSTM model in Keras using data with a variable number of timesteps. For example, the data looks like: <tf.RaggedTensor [[[0.0, 0.0, 0.0, 0.0, 0.0, 1.0,
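
Two common routes: pad the ragged sequences to a common length and let a Masking layer tell the LSTM to skip the padded steps, or pass the tf.RaggedTensor directly, which recent Keras versions accept for LSTM inputs. A sketch of the padding-plus-masking route; ragged, labels, and n_features are illustrative:

    from tensorflow import keras
    from tensorflow.keras import layers

    n_features = 8  # illustrative feature count

    # ragged: tf.RaggedTensor of shape (batch, None, n_features) with variable timesteps.
    padded = ragged.to_tensor(default_value=0.0)    # pad shorter sequences with zeros

    model = keras.Sequential([
        layers.Input(shape=(None, n_features)),     # None = variable sequence length
        layers.Masking(mask_value=0.0),             # padded steps are ignored downstream
        layers.LSTM(32),
        layers.Dense(1, activation="sigmoid")
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(padded, labels, epochs=2)

One caveat: mask_value=0.0 also masks genuine all-zero timesteps, so a padding value that cannot occur in the data is safer when zeros are meaningful.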