Why doesn't the spaCy morphologizer work when we use a custom tokenizer?

I don't understand why, when I do this:

import spacy
from copy import deepcopy
nlp = spacy.load("fr_core_news_lg")

class MyTokenizer:
    def __init__(self, tokenizer):
        self.tokenizer = deepcopy(tokenizer)
    def __call__(self, text):
        return self.tokenizer(text)

nlp.tokenizer = MyTokenizer(nlp.tokenizer)
doc = nlp("Un texte en français.")

the tokens don't have any morph assigned:

print([tok.morph for tok in doc])
> ['','','','','']

Is this behavior expected? If yes, why? (spaCy v3.0.7)
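(A note on what `deepcopy` does here, as one possible lead: spaCy's tokenizer holds a reference to the pipeline's shared `Vocab`, and `deepcopy(tokenizer)` also deep-copies every object the tokenizer references. If the morphologizer depends on that shared vocab, the wrapped tokenizer would be producing tokens against a *different* vocab than the rest of the pipeline. That dependency is an assumption about spaCy internals; the copying behavior itself is plain Python and can be shown with stand-in classes:)

```python
from copy import deepcopy

class Vocab:
    """Stand-in for spacy.vocab.Vocab: shared storage the pipeline uses."""
    pass

class Tokenizer:
    """Stand-in for spaCy's tokenizer, which keeps a reference to the vocab."""
    def __init__(self, vocab):
        self.vocab = vocab

shared_vocab = Vocab()
tok = Tokenizer(shared_vocab)

# deepcopy copies the tokenizer AND everything it references,
# including the vocab -- the copy no longer shares state with the pipeline.
tok_copy = deepcopy(tok)
print(tok.vocab is shared_vocab)       # True
print(tok_copy.vocab is shared_vocab)  # False
```

If this is the cause, keeping the original reference (`self.tokenizer = tokenizer`, no `deepcopy`) would leave the vocab shared; again, that is a sketch under the stated assumption, not a confirmed diagnosis.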



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow