How to handle LemmatizerTrainer 'UTFDataFormatException: encoded string too long'?
I am using OpenNLP to train a lemmatization model for German words. For this I use the OpenNLP CLI and the UD_German-HDT training set, which can be downloaded here.
The training itself works fine (it just needs a fair amount of RAM), but the CLI fails to write the model with a `UTFDataFormatException: encoded string too long`.
The CLI command I am using:

```
opennlp LemmatizerTrainerME.conllu -params params.txt -lang de -model de-lemmatizer.bin -data UD_German-HDT/de_hdt-ud-train.conllu -encoding UTF-8
```
Stack trace:

```
Writing lemmatizer model ... failed
Error during writing model file 'de-lemmatizer.bin'
encoded string too long: 383769 bytes
java.io.UTFDataFormatException: encoded string too long: 383769 bytes
    at java.base/java.io.DataOutputStream.writeUTF(DataOutputStream.java:364)
    at java.base/java.io.DataOutputStream.writeUTF(DataOutputStream.java:323)
    at opennlp.tools.ml.maxent.io.BinaryGISModelWriter.writeUTF(BinaryGISModelWriter.java:71)
    at opennlp.tools.ml.maxent.io.GISModelWriter.persist(GISModelWriter.java:97)
    at opennlp.tools.ml.model.GenericModelWriter.persist(GenericModelWriter.java:75)
    at opennlp.tools.util.model.ModelUtil.writeModel(ModelUtil.java:71)
    at opennlp.tools.util.model.GenericModelSerializer.serialize(GenericModelSerializer.java:36)
    at opennlp.tools.util.model.GenericModelSerializer.serialize(GenericModelSerializer.java:29)
    at opennlp.tools.util.model.BaseModel.serialize(BaseModel.java:597)
    at opennlp.tools.cmdline.CmdLineUtil.writeModel(CmdLineUtil.java:182)
    at opennlp.tools.cmdline.lemmatizer.LemmatizerTrainerTool.run(LemmatizerTrainerTool.java:77)
    at opennlp.tools.cmdline.CLI.main(CLI.java:256)
```
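For context on the exception itself: `java.io.DataOutputStream.writeUTF` prefixes the string data with an unsigned 16-bit length field, so it rejects any string whose modified-UTF-8 encoding exceeds 65535 bytes, which is exactly what the 383769-byte string in the trace trips over. A minimal sketch reproducing the limit (the class and method names here are my own, for illustration only):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrates the 64 KB limit of DataOutputStream.writeUTF, the method
// BinaryGISModelWriter uses under the hood when persisting the model.
public class WriteUtfLimit {
    // Returns true if writeUTF accepts the string, false if it throws
    // UTFDataFormatException (encoded form longer than 65535 bytes).
    static boolean canWriteUtf(String s) {
        try (DataOutputStream out =
                 new DataOutputStream(new ByteArrayOutputStream())) {
            out.writeUTF(s);
            return true;
        } catch (IOException e) {
            // UTFDataFormatException extends IOException
            return false;
        }
    }

    public static void main(String[] args) {
        // 65535 one-byte characters fit exactly; one more byte overflows
        // the unsigned 16-bit length field written before the string data.
        System.out.println(canWriteUtf("a".repeat(65535)));
        System.out.println(canWriteUtf("a".repeat(65536)));
    }
}
```

So the failure is not specific to the German data; any model whose serialized string entries exceed 64 KB would hit the same wall in this writer.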
Has anyone encountered this problem and found a solution?
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
