Embedding 3D data in PyTorch

I want to implement a character-level embedding.

This is the usual word embedding.

Word Embedding

Input: [ ['who', 'is', 'this'] ]
-> [ [3, 8, 2] ]       # (batch_size, sentence_len)
-> Embedding(Input)    # (batch_size, sentence_len, embedding_dim)
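For reference, the 2-D case is a single nn.Embedding call. A minimal sketch with made-up vocabulary and embedding sizes (PyTorch >= 0.4 style; older versions need a Variable wrapper):

import torch
import torch.nn as nn

vocab_size, embedding_dim = 20, 4  # made-up sizes for illustration
embedding = nn.Embedding(vocab_size, embedding_dim)

word_ids = torch.LongTensor([[3, 8, 2]])  # (batch_size, sentence_len) = (1, 3)
out = embedding(word_ids)                 # (1, 3, 4) = (batch_size, sentence_len, embedding_dim)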

This is what I want to do.

Character Embedding

Input: [ [ ['w', 'h', 'o', 0], ['i', 's', 0, 0], ['t', 'h', 'i', 's'] ] ]
-> [ [ [2, 3, 9, 0], [11, 4, 0, 0], [21, 10, 8, 9] ] ]   # (batch_size, sentence_len, word_len)
-> Embedding(Input)                                      # (batch_size, sentence_len, word_len, embedding_dim)
-> sum over each word's character embeddings             # (batch_size, sentence_len, embedding_dim)
The final output shape is the same as the word embedding's, because I want to concatenate the two later.

I tried, but I am not sure how to implement an embedding over 3-D input. Do you know how to embed data shaped like this?

def forward(self, x):
    print('x', x.size())  # (N, seq_len, word_len)
    bs = x.size(0)
    seq_len = x.size(1)
    word_len = x.size(2)
    embd_list = []
    for i, elm in enumerate(x):  # iterate over the samples in the batch
        tmp = torch.zeros(1, word_len, self.embd_size)  # plain tensor, not a Variable
        for chars in elm:  # iterate over the words of one sentence
            tmp = torch.add(tmp, 1.0, self.embedding(chars.unsqueeze(0)))  # fails here

The code above fails because the output of self.embedding is a Variable, while tmp is a plain FloatTensor, and torch.add cannot mix the two:

TypeError: torch.add received an invalid combination of arguments - got (torch.FloatTensor, float, Variable), but expected one of:
 * (torch.FloatTensor source, float value)
 * (torch.FloatTensor source, torch.FloatTensor other)
 * (torch.FloatTensor source, torch.SparseFloatTensor other)
 * (torch.FloatTensor source, float value, torch.FloatTensor other)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, float, Variable)
 * (torch.FloatTensor source, float value, torch.SparseFloatTensor other)
      didn't match because some of the arguments have invalid types: (torch.FloatTensor, float, Variable)
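Under that old (pre-0.4) API, one minimal fix is to make tmp a Variable as well, so torch.add sees two operands of the same kind. A sketch of just the inner loop; the accumulation logic itself is unchanged:

tmp = Variable(torch.zeros(1, word_len, self.embd_size))  # Variable instead of a raw tensor
for chars in elm:
    tmp = tmp + self.embedding(chars.unsqueeze(0))  # Variable + Variable works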

Update

I got it working with the code below, but the nested for loops are not efficient over a batch. Does anyone know a more efficient way?

def forward(self, x):
    print('x', x.size())  # (N, seq_len, word_len)
    bs = x.size(0)
    seq_len = x.size(1)
    word_len = x.size(2)
    embd = Variable(torch.zeros(bs, seq_len, self.embd_size))
    for i, elm in enumerate(x):  # every sample in the batch
        for j, chars in enumerate(elm):  # every word of one sentence, e.g. ['w', 'h', 'o', 0]
            chars_embd = self.embedding(chars.unsqueeze(0))  # (1, word_len, embd_size)
            chars_embd = torch.sum(chars_embd, 1)  # (1, embd_size): sum each char's embedding
            embd[i, j] = chars_embd[0]  # use the sum as a word-like embedding

    x = embd  # (N, seq_len, embd_size)

Update2

This is my final code. Thank you, Wasi Ahmad!

def forward(self, x):
    # x: (N, seq_len, word_len)
    input_shape = x.size()
    bs = x.size(0)
    seq_len = x.size(1)
    word_len = x.size(2)
    x = x.view(-1, word_len)      # (N*seq_len, word_len): flatten so Embedding sees a 2-D batch
    x = self.embedding(x)         # (N*seq_len, word_len, embd_size)
    x = x.view(*input_shape, -1)  # (N, seq_len, word_len, embd_size): restore the batch structure
    x = x.sum(2)                  # (N, seq_len, embd_size): sum over the characters of each word

    return x
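As a quick sanity check of the reshape trick, here is the same computation outside the module, using the toy indices from the question and a hypothetical nn.Embedding(30, 8) layer:

import torch
import torch.nn as nn

embedding = nn.Embedding(30, 8, padding_idx=0)  # hypothetical sizes; index 0 is padding

x = torch.LongTensor([[[2, 3, 9, 0], [11, 4, 0, 0], [21, 10, 8, 9]]])  # (1, 3, 4)
out = embedding(x.view(-1, x.size(2)))  # (1*3, 4, 8)
out = out.view(*x.size(), -1).sum(2)    # (1, 3, 8) = (N, seq_len, embd_size)
print(out.size())                       # torch.Size([1, 3, 8])

Note that recent PyTorch versions accept index tensors of any shape in nn.Embedding, so embedding(x).sum(2) gives the same result without the view round-trip.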


Solution 1:[1]

I am assuming you have a 3d tensor of shape BxSxW where:

B = Batch size
S = Sentence length
W = Word length

And that you have declared the embedding layer as follows.

self.embedding = nn.Embedding(dict_size, emsize)

Where:

dict_size = No. of unique characters in the training corpus
emsize = Expected size of embeddings

So, now you need to convert the 3d tensor of shape BxSxW to a 2d tensor of shape BSxW and give it to the embedding layer.

emb = self.embedding(input_rep.view(-1, input_rep.size(2)))

The shape of emb will be BSxWxE where E is the embedding size. You can convert the resulting 3d tensor to a 4d tensor as follows.

emb = emb.view(*input_rep.size(), -1)

The final shape of emb will be BxSxWxE which is what you are expecting.
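To get back to the BxSxE shape the question asked for, reduce over the word-length dimension afterwards, e.g. by summing the character embeddings of each word:

word_emb = emb.sum(2)  # (B, S, E)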

Solution 2:[2]

What you are looking for is implemented in allennlp's TimeDistributed layer.

Here is a demonstration:

import torch
from allennlp.modules.time_distributed import TimeDistributed

batch_size = 16
sent_len = 30
word_len = 5
char_vocab_size = 100  # placeholder: no. of unique characters
char_emd_dim = 8       # placeholder: embedding size
char_pad_idx = 0       # placeholder: padding index

Consider a batch of input sentences:

sentence = torch.randint(0, char_vocab_size, (batch_size, sent_len, word_len))  # suppose this is your data; nn.Embedding needs integer indices, not torch.randn floats

Define a char embedding layer (assuming the input is already padded):

char_embedding = torch.nn.Embedding(char_vocab_size, char_emd_dim, padding_idx=char_pad_idx)

Wrap it!

embedding_sentence = TimeDistributed(char_embedding)(sentence)  # (batch_size, sent_len, word_len, char_emd_dim)

embedding_sentence has shape (batch_size, sent_len, word_len, char_emd_dim).

Actually, you can easily write such a wrapper yourself in PyTorch, as sketched below.
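As an illustration, here is a minimal TimeDistributed-style wrapper (a sketch of the idea, not allennlp's actual implementation): it folds the sentence dimension into the batch, applies the wrapped module, and restores the original shape.

import torch
import torch.nn as nn

class TimeDistributedLike(nn.Module):
    """Apply `module` to a (B, S, ...) input by folding S into the batch dimension."""
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, x):
        b, s = x.size(0), x.size(1)
        out = self.module(x.reshape(b * s, *x.size()[2:]))  # (B*S, ...)
        return out.reshape(b, s, *out.size()[1:])           # (B, S, ...)

For example, TimeDistributedLike(char_embedding)(sentence) produces the same (batch_size, sent_len, word_len, char_emd_dim) output as the allennlp version above.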

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: Wasi Ahmad
[2] Solution 2: Alessandra