Save and load nlp results in spaCy

I want to use spaCy to analyze many small texts, and I want to store the nlp results for further use to save processing time. I found code at Storing and Loading spaCy Documents Containing Word Vectors, but I get an error and cannot find how to fix it. I am fairly new to Python.

In the following code, I store the nlp results to a file and try to read them again. The first file is written, but I cannot find the second file (the vocab). I also get two errors: that Doc and Vocab are not defined.

Any idea how to fix this, or another method to achieve the same result, is more than welcome.

Thanks!

import spacy
nlp = spacy.load('en_core_web_md')
doc = nlp("He eats a green apple")
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
            token.shape_, token.is_alpha, token.is_stop)

NLP_FName = "E:\\SaveTest.nlp"
doc.to_disk(NLP_FName)
Vocab_FName = "E:\\SaveTest.voc"
doc.vocab.to_disk(Vocab_FName)

#To read the data again:
idoc = Doc(Vocab()).from_disk(NLP_FName)
idoc.vocab.from_disk(Vocab_FName)

for token in idoc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
            token.shape_, token.is_alpha, token.is_stop)


Solution 1:[1]

I tried your code and hit a few minor issues, which I fixed in the code below.

Note that SaveTest.nlp is a binary file with your doc info, and
SaveTest.voc is a folder with all the spaCy model vocab information (vectors and strings, among others).

Changes I made:

  1. Import Doc class from spacy.tokens
  2. Import Vocab class from spacy.vocab
  3. Download en_core_web_md model using the following command:
python -m spacy download en_core_web_md

Please note that spaCy has multiple models for each language, and you usually have to download one first (typically the sm, md and lg models). Read more about them in the spaCy models documentation.
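If you are unsure which models are already installed, a quick check is possible from Python (a minimal sketch; `spacy.util.get_installed_models` is available in spaCy v3):

```python
import spacy

# Lists the names of spaCy pipeline packages installed in this environment,
# e.g. ["en_core_web_md"] after running the download command above.
print(spacy.util.get_installed_models())
```

If the model you need is missing from that list, download it with the command shown above before calling spacy.load.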

Code:

import spacy
from spacy.tokens import Doc
from spacy.vocab import Vocab

nlp = spacy.load('en_core_web_md')
doc = nlp("He eats a green apple")
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
          token.shape_, token.is_alpha, token.is_stop)

NLP_FName = "E:\\SaveTest.nlp"
doc.to_disk(NLP_FName)
Vocab_FName = "E:\\SaveTest.voc"
doc.vocab.to_disk(Vocab_FName)

#To read the data again:
idoc = Doc(Vocab()).from_disk(NLP_FName)
idoc.vocab.from_disk(Vocab_FName)

for token in idoc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
          token.shape_, token.is_alpha, token.is_stop)

Let me know if this is helpful to you, and if not, please add your error message to your original question so I can help.

Solution 2:[2]

The efficient way to do this is to use a DocBin instead: https://spacy.io/usage/saving-loading#docs

Example adapted from the docs (you can use doc_bin.to_disk/from_disk instead of to_bytes/from_bytes):

import spacy
from spacy.tokens import DocBin

doc_bin = DocBin()
texts = ["Some text", "Lots of texts...", "..."]
nlp = spacy.load("en_core_web_sm")
for doc in nlp.pipe(texts):
    doc_bin.add(doc)

bytes_data = doc_bin.to_bytes()

# Deserialize later, e.g. in a new process
nlp = spacy.blank("en")
doc_bin = DocBin().from_bytes(bytes_data)
docs = list(doc_bin.get_docs(nlp.vocab))
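Since the question is about caching results on disk, here is a minimal sketch of the to_disk/from_disk variant mentioned above (the file name is a hypothetical choice; a blank pipeline is used so the round trip itself needs no downloaded model):

```python
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")  # enough for a round-trip demo; use a full model for real annotations
texts = ["Some text", "Lots of texts...", "..."]

doc_bin = DocBin()
for doc in nlp.pipe(texts):
    doc_bin.add(doc)
doc_bin.to_disk("cached_docs.spacy")  # .spacy is the conventional extension

# Later, e.g. in a new process:
loaded = DocBin().from_disk("cached_docs.spacy")
docs = list(loaded.get_docs(nlp.vocab))
print([d.text for d in docs])  # → ['Some text', 'Lots of texts...', '...']
```

One file then holds all your docs, which is far more compact than one file per doc.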

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution     Source
Solution 1
Solution 2   aab