Category "recurrent-neural-network"

Tensorflow-addons seq2seq - start and end tokens in BaseDecoder or BasicDecoder

I am writing code inspired by https://www.tensorflow.org/addons/api_docs/python/tfa/seq2seq/BasicDecoder. In the translation/generation we instantiate a Basic
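
A point that often causes confusion here: in tensorflow-addons the start and end tokens are not constructor arguments of BasicDecoder; they are keyword arguments of the decoder call and are forwarded to the sampler's initialize(). A minimal inference-time sketch under assumed vocabulary sizes and start/end ids:

```python
import tensorflow as tf
import tensorflow_addons as tfa

vocab_size, embedding_dim, units, batch_size = 1000, 64, 128, 4
start_id, end_id = 1, 2   # assumed ids of the <start> / <end> tokens

embedding_matrix = tf.random.uniform([vocab_size, embedding_dim])  # stand-in embeddings
cell = tf.keras.layers.LSTMCell(units)
output_layer = tf.keras.layers.Dense(vocab_size)

# The sampler feeds the decoder's own predictions back in at inference time.
sampler = tfa.seq2seq.GreedyEmbeddingSampler()
decoder = tfa.seq2seq.BasicDecoder(cell, sampler, output_layer=output_layer,
                                   maximum_iterations=20)

start_tokens = tf.fill([batch_size], start_id)
initial_state = cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)

# start_tokens / end_token go into the decoder *call*; they are routed to the
# sampler's initialize(), not to the BasicDecoder constructor.
outputs, _, _ = decoder(embedding_matrix,
                        start_tokens=start_tokens,
                        end_token=end_id,
                        initial_state=initial_state)
print(outputs.sample_id.shape)   # (batch_size, decoded_length)
```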

logits and labels must be broadcastable error in Tensorflow RNN

I am new to TensorFlow and deep learning. I am trying to see how the loss decreases over 10 epochs in my RNN model that I created to read a dataset from kaggle w
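
The excerpt is cut off, but this error usually comes from a shape mismatch between the logits the model produces and the labels the loss expects. A hedged sketch of the usual fix, with made-up dataset shapes: the final Dense layer needs one unit per class, and the loss must match how the labels are encoded (integer ids vs. one-hot vectors).

```python
import numpy as np
import tensorflow as tf

num_classes = 5   # assumed: must equal the number of distinct labels in the dataset
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 64),
    tf.keras.layers.SimpleRNN(32),
    tf.keras.layers.Dense(num_classes),   # logits of shape (batch, num_classes)
])

# Integer labels of shape (batch,) pair with the *sparse* loss...
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])

# ...whereas one-hot labels of shape (batch, num_classes) pair with
# tf.keras.losses.CategoricalCrossentropy. Mixing the two encodings, or a Dense
# layer whose unit count differs from the number of classes, is what typically
# produces "logits and labels must be broadcastable".
x = np.random.randint(0, 10000, size=(32, 20))
y = np.random.randint(0, num_classes, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```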

Which algorithm does Google Keyboard use for automatic suggestions (personal vocab included)?

I am confused since Google cannot train their text generation models with each individual's personal vocabulary. I was trying to develop something similar but

How does the calculation in a GRU layer take place

So I want to understand exactly how the outputs and hidden state of a GRU cell are calculated. I obtained the pre-trained model from here and the GRU layer has
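
For reference, the default tf.keras.layers.GRU (reset_after=True, gate order z, r, h) can be reproduced by hand. A small sketch that computes one step manually and checks it against the layer; the sizes are arbitrary assumptions:

```python
import numpy as np
import tensorflow as tf

units, input_dim = 4, 3
gru = tf.keras.layers.GRU(units, return_state=True)
x = np.random.rand(1, 1, input_dim).astype("float32")   # (batch, time, features)
out, state = gru(x)

kernel, recurrent_kernel, bias = gru.get_weights()       # bias has shape (2, 3*units)
b_i, b_r = bias                                          # input bias, recurrent bias

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

h_prev = np.zeros((1, units), dtype="float32")
xt = x[:, 0, :]

# Split the kernels into the update (z), reset (r) and candidate (h) blocks.
Wz, Wr, Wh = np.split(kernel, 3, axis=1)
Uz, Ur, Uh = np.split(recurrent_kernel, 3, axis=1)
biz, bir, bih = np.split(b_i, 3)
brz, brr, brh = np.split(b_r, 3)

z = sigmoid(xt @ Wz + biz + h_prev @ Uz + brz)            # update gate
r = sigmoid(xt @ Wr + bir + h_prev @ Ur + brr)            # reset gate
# reset_after=True: the reset gate multiplies the recurrent term after the matmul
hh = np.tanh(xt @ Wh + bih + r * (h_prev @ Uh + brh))     # candidate state
h = z * h_prev + (1.0 - z) * hh                           # new hidden state

print(np.allclose(h, state.numpy(), atol=1e-5))           # True if the math matches
```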

AttributeError: module 'tensorflow.python.pywrap_tensorflow' has no attribute 'TFE_Py_RegisterExceptionClass'

I am trying to develop some time-series sequence prediction, using the latest resources available. To that end, I did check the example code from TensorFlow tim
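
This particular AttributeError usually points at a broken or mismatched TensorFlow installation (for example, leftovers from an older version, or companion packages built against a different TF release) rather than at the time-series code itself. A quick check of what is actually being imported, before reinstalling into a clean environment:

```python
# Confirm which TensorFlow build is imported and from where; a clean reinstall
# (or a fresh virtual environment) is the usual remedy for this error.
import tensorflow as tf

print(tf.__version__)   # the installed release
print(tf.__file__)      # the import location - stale files here often cause this error
```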

LSTM/GRU setting states to random noise instead of resetting to zero

I train the following model based on GRU; note that I am passing the argument stateful=True to the GRU builder. class LearningToSurpriseModel(tf.keras.Model):
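
For a stateful layer, reset_states() also accepts explicit values, so the state can be re-initialised with noise instead of zeros. A minimal sketch with assumed shapes, not the original LearningToSurpriseModel:

```python
import numpy as np
import tensorflow as tf

batch_size, timesteps, features, units = 8, 5, 3, 16
gru = tf.keras.layers.GRU(units, stateful=True, return_sequences=True,
                          batch_input_shape=(batch_size, timesteps, features))
model = tf.keras.Sequential([gru, tf.keras.layers.Dense(1)])

# Run once so the layer and its state variables exist.
_ = model(np.random.rand(batch_size, timesteps, features).astype("float32"))

# gru.reset_states() alone resets to zeros; passing explicit values sets noise instead.
noise = np.random.normal(scale=0.1, size=(batch_size, units)).astype("float32")
gru.reset_states(states=noise)
```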

Making GRU/LSTM states trainable in TensorFlow/Keras and adding some random noise

I train the following model based on GRU; note that I am passing the argument stateful=True to the GRU builder. class LearningToSurpriseModel(tf.keras.Model):
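
Stateful layers do not make their states trainable by themselves; one common workaround is to hold the initial state in a trainable tf.Variable inside the model, add noise to it at call time and pass it through initial_state. A generic sketch under those assumptions (not the original LearningToSurpriseModel):

```python
import tensorflow as tf

class TrainableStateGRU(tf.keras.Model):
    def __init__(self, units, vocab_size, embedding_dim, noise_stddev=0.1):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(units, return_sequences=True)
        self.dense = tf.keras.layers.Dense(vocab_size)
        # Trainable initial state, broadcast to the batch at call time.
        self.init_state = tf.Variable(tf.zeros([1, units]), trainable=True)
        self.noise_stddev = noise_stddev

    def call(self, inputs, training=False):
        batch = tf.shape(inputs)[0]
        state = tf.tile(self.init_state, [batch, 1])
        if training:  # perturb the learned state with Gaussian noise during training
            state = state + tf.random.normal(tf.shape(state), stddev=self.noise_stddev)
        x = self.embedding(inputs)
        x = self.gru(x, initial_state=state)
        return self.dense(x)

model = TrainableStateGRU(units=64, vocab_size=1000, embedding_dim=32)
logits = model(tf.random.uniform([8, 10], maxval=1000, dtype=tf.int32), training=True)
print(logits.shape)  # (8, 10, 1000)
```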

TensorFlow LSTM/GRU reset states once per epoch and not for each new batch

I train the following model based on GRU; note that I am passing the argument stateful=True to the GRU builder. class LearningToSurpriseModel(tf.keras.Model):
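
With stateful=True, Keras carries the state across batches and leaves resetting to the user, so a small callback that calls reset_states() only at the start of each epoch gives the once-per-epoch behaviour. A sketch with assumed shapes:

```python
import numpy as np
import tensorflow as tf

class ResetStatesPerEpoch(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # reset_states() on the model clears the state of every stateful layer
        self.model.reset_states()

batch_size, timesteps, features = 16, 20, 8
model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, stateful=True,
                        batch_input_shape=(batch_size, timesteps, features)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(160, timesteps, features).astype("float32")
y = np.random.rand(160, 1).astype("float32")

# shuffle=False keeps batches in order so the carried-over state stays meaningful
model.fit(x, y, batch_size=batch_size, epochs=3, shuffle=False,
          callbacks=[ResetStatesPerEpoch()])
```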