What is the difference between enc_out_dim and latent_dim in variational autoencoders?

I am reading this article and have a question about the dimensions of the latent space and enc_out_dim:

import torch
from torch import nn
import pytorch_lightning as pl
from pl_bolts.models.autoencoders.components import (
    resnet18_decoder,
    resnet18_encoder,
)

class VAE(pl.LightningModule):
    def __init__(self, enc_out_dim=512, latent_dim=256, input_height=32):
        super().__init__()

        self.save_hyperparameters()

        # encoder, decoder
        self.encoder = resnet18_encoder(False, False)
        self.decoder = resnet18_decoder(
            latent_dim=latent_dim,
            input_height=input_height,
            first_conv=False,
            maxpool1=False,
        )

        # distribution parameters
        self.fc_mu = nn.Linear(enc_out_dim, latent_dim)
        self.fc_var = nn.Linear(enc_out_dim, latent_dim)

        # for the gaussian likelihood
        self.log_scale = nn.Parameter(torch.Tensor([0.0]))

Can you explain the difference between enc_out_dim and latent_dim here? I want to implement a variational autoencoder with a tree_lstm as the encoder, and I am not sure what values to give enc_out_dim and latent_dim.
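For context, my current understanding of how the two dimensions relate is sketched below. The encoder here is a stand-in `nn.Linear` (not the resnet18 encoder from the article, and not my tree_lstm), used only to show the shapes: the encoder emits a feature vector of size enc_out_dim, and fc_mu / fc_var project it down to latent_dim, which is the size of the sampled latent z that the decoder consumes.

```python
import torch
from torch import nn

enc_out_dim, latent_dim = 512, 256

# stand-in for the encoder: any module that outputs enc_out_dim features
encoder = nn.Linear(3 * 32 * 32, enc_out_dim)

fc_mu = nn.Linear(enc_out_dim, latent_dim)
fc_var = nn.Linear(enc_out_dim, latent_dim)

x = torch.randn(4, 3 * 32 * 32)       # batch of 4 flattened 32x32 RGB images
h = encoder(x)                        # shape (4, 512): encoder output
mu, log_var = fc_mu(h), fc_var(h)     # each shape (4, 256): latent-space params

# reparameterization trick: sample z in the latent space
std = torch.exp(0.5 * log_var)
z = mu + std * torch.randn_like(std)  # shape (4, 256): this is what the decoder sees
```

Is this correct, i.e. would enc_out_dim simply be whatever hidden size my tree_lstm produces, with latent_dim a free choice?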



Source: Stack Overflow, licensed under CC BY-SA 3.0.