Adding New Vocabulary Tokens to the Model and Saving It for a Downstream Model

Is the mean initialization of the new tokens correct? Also, how should I save the new tokenizer (after adding new tokens to it) so that it can be used in a downstream model?

I train an MLM model after adding new tokens and initializing their embeddings with the mean of their sub-word embeddings. How should I use the fine-tuned MLM model for a new classification task?

import torch
import transformers as tr

# original tokenizer, kept to get the sub-word ids of each new word
tokenizer_org = tr.BertTokenizer.from_pretrained("/home/pc/bert_base_multilingual_uncased")

# extended tokenizer with the new vocabulary added
tokenizer = tr.BertTokenizer.from_pretrained("/home/pc/bert_base_multilingual_uncased")
tokenizer.add_tokens(joined_keywords)  # joined_keywords: list of new token strings

model = tr.BertForMaskedLM.from_pretrained("/home/pc/bert_base_multilingual_uncased", return_dict=True)

# prepare input
text = ["Replace me by any text you'd like"]
encoded_input = tokenizer(text, truncation=True, padding=True, max_length=512, return_tensors="pt")
print(encoded_input)


# add embedding rows for the new vocab words
model.resize_token_embeddings(len(tokenizer))
weights = model.bert.embeddings.word_embeddings.weight

# initialize each new token's embedding as the mean of the embeddings of the
# sub-words the original tokenizer splits it into
with torch.no_grad():
    emb = []
    for word in joined_keywords:
        # drop the first & last ids ([CLS] and [SEP]) added by the tokenizer
        tok_ids = tokenizer_org(word)["input_ids"][1:-1]
        tok_weights = weights[tok_ids]

        # average over the sub-tokens of the original tokenization
        weight_mean = torch.mean(tok_weights, dim=0)
        emb.append(weight_mean)
    weights[-len(joined_keywords):, :] = torch.vstack(emb)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# ... Trainer setup and MLM training omitted ...
trainer.save_model("/home/pc/Bert_multilingual_exp_TCM/model_mlm_exp1")

This saves the model, config, and training args. How do I save the new tokenizer as well?



Solution 1:[1]

What you want to do is a convenient way of adding new markers and information to raw text. Hugging Face provides several methods for this; I used the simplest one, IMO.

from transformers import AutoTokenizer

BASE_MODEL = "distilbert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
print('Vocab size before manipulation: ', len(tokenizer))

special_tokens_dict = {'additional_special_tokens': ['[C1]', '[C2]', '[C3]', '[C4]']}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print('Vocab size after manipulation: ', len(tokenizer))

tokenizer.save_pretrained("./models/tokenizer/")
tokenizer2 = AutoTokenizer.from_pretrained("./models/tokenizer/")
print('Vocab size after saving and loading: ', len(tokenizer2))

output:

Vocab size before manipulation:  119547
Vocab size after manipulation:  119551
Vocab size after saving and loading:  119551

The big caveat: when you manipulate the tokenizer, you need to update the embedding layer of the model accordingly, with something like model.resize_token_embeddings(len(tokenizer)).
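For completeness, here is a minimal sketch that puts the pieces together with the distilbert setup above: add the tokens, resize the embeddings, and save the model and tokenizer into the same directory so a downstream classification model can load both. The output path and num_labels are illustrative, not taken from the question.

from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification)

BASE_MODEL = "distilbert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.add_special_tokens({'additional_special_tokens': ['[C1]', '[C2]', '[C3]', '[C4]']})

mlm_model = AutoModelForMaskedLM.from_pretrained(BASE_MODEL)
# grow the embedding matrix so the new token ids have rows to look up
mlm_model.resize_token_embeddings(len(tokenizer))

# ... fine-tune the MLM here ...

# save model and tokenizer to the same directory
output_dir = "./models/mlm_finetuned/"  # illustrative path
mlm_model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)

# downstream: load the fine-tuned weights into a classification model;
# the embedding matrix already has the right size, and the tokenizer
# comes from the same directory
clf_model = AutoModelForSequenceClassification.from_pretrained(output_dir, num_labels=2)
clf_tokenizer = AutoTokenizer.from_pretrained(output_dir)

If you train with a Trainer as in the question, calling tokenizer.save_pretrained on the same output directory alongside trainer.save_model gives the downstream task everything it needs.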

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 meti