Category "autoencoder"

Why does the VQ-VAE require two-stage training?

According to the paper, VQ-VAE goes through two-stage training: first the encoder and the vector quantizer are trained, and then an auto-regressive model …
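For reference, a minimal sketch of what the two stages look like in code (module names, hyperparameters, and the commented training loops are illustrative assumptions, not taken from the paper or its official implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through estimator."""
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z_e):                                   # z_e: (B, N, dim) continuous encoder output
        # squared distances from every latent vector to every code vector
        d = (z_e.pow(2).sum(-1, keepdim=True)
             - 2 * z_e @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(-1))
        idx = d.argmin(-1)                                    # nearest-code indices, (B, N)
        z_q = self.codebook(idx)                              # quantized latents, (B, N, dim)
        # codebook loss + commitment loss (stop-gradients via .detach())
        vq_loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()                      # straight-through gradient to the encoder
        return z_q, idx, vq_loss

# Stage 1 (sketch): train encoder + quantizer + decoder on reconstruction + VQ losses.
#   z_q, idx, vq_loss = quantizer(encoder(x))
#   loss = F.mse_loss(decoder(z_q), x) + vq_loss
#
# Stage 2 (sketch): freeze the stage-1 weights, then fit an auto-regressive prior
# (e.g. a PixelCNN or LSTM) on the discrete index maps `idx` with cross-entropy,
# so new samples can be drawn by sampling indices from the prior and decoding them.
```

The split exists because the codebook indices only become meaningful once stage 1 has converged; the prior in stage 2 is fit on those fixed discrete codes rather than jointly with them.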

The LSTM autoencoder does not use the full dimensions of the latent space for dimensionality reduction

I am trying to train an LSTM autoencoder to map the input space to a latent space and then visualize it, and I hope to find some interesting patterns in the …
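A minimal sketch of the kind of model this describes (layer names and sizes below are my own assumptions, not the asker's code), with an explicit low-dimensional bottleneck that can be plotted directly:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features, latent_dim=2, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)        # bottleneck used for visualization
        self.from_latent = nn.Linear(latent_dim, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                                     # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)
        z = self.to_latent(h[-1])                             # (batch, latent_dim)
        # repeat the latent vector along the time axis and decode it back to a sequence
        dec_in = self.from_latent(z).unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), z
```

After training on a reconstruction loss, the returned `z` (with `latent_dim=2`) can be scattered directly; with a wider bottleneck, running PCA or t-SNE on `z` is the usual next step for spotting structure.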

Faster way to do multiple embeddings in PyTorch?

I'm working on a torch-based library for building autoencoders with tabular datasets. One big feature is learning embeddings for categorical features. In pra…
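One common speed-up, sketched under the assumption that all columns share the same embedding dimension (the class and argument names below are illustrative, not part of the asker's library): fuse the per-column tables into a single nn.Embedding and shift each column's codes by an offset, so every column is looked up in one call instead of a Python loop.

```python
import torch
import torch.nn as nn

class MultiEmbedding(nn.Module):
    """One shared embedding table for several categorical columns."""
    def __init__(self, cardinalities, emb_dim):
        super().__init__()
        # starting row of each column's block inside the shared table
        offsets = torch.tensor([0] + list(cardinalities[:-1])).cumsum(0)
        self.register_buffer("offsets", offsets)              # (n_columns,)
        self.emb = nn.Embedding(sum(cardinalities), emb_dim)

    def forward(self, x_cat):                                  # x_cat: (batch, n_columns) integer codes
        return self.emb(x_cat + self.offsets)                  # (batch, n_columns, emb_dim), single lookup

# Usage: three categorical columns with 10, 4 and 7 levels.
m = MultiEmbedding([10, 4, 7], emb_dim=8)
x = torch.randint(0, 4, (32, 3))                               # dummy codes, all below the smallest cardinality
out = m(x)                                                     # (32, 3, 8)
```

The trade-off is that every column gets the same `emb_dim`; if per-column dimensions matter, the per-column `nn.ModuleList` of embeddings is still the simpler choice.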