Using an ONNX-converted model
Guys, I'm working on a sentiment analysis project and I converted a BERT model to ONNX because the original model took far too long to run when I fed it a large amount of data to predict on. But now I don't know how to use this ONNX model. Below is the original code I was using to run the model in the normal way.
By the way, if anyone has a suggestion for optimizing this piece of code or the model without needing ONNX or OpenVINO, I would appreciate it.
Link to the model on the Hugging Face website
import hazm
import numpy as np
import pandas as pd
from scipy import spatial

cosining = spatial.distance.cosine
X_pos_test = model.encode(pos_test)  # embedding of the positive reference text
X_neg_test = model.encode(neg_test)  # embedding of the negative reference text
sentence_tokenize = hazm.sent_tokenize  # hazm tokenizer that splits Farsi text into sentences

def predicting(string, api_value, model):
    cs = {'Sentence': [], 'Negative Score': [], 'Positive Score': []}
    for i in range(len(api_value)):
        api_value[i] = cleaning_text(api_value[i])  # normalize the raw text
        sentence_tokenized = np.array(sentence_tokenize(api_value[i]))
        for sentence in range(len(sentence_tokenized)):
            encoded_sentence_tokenized = model.encode(sentence_tokenized[sentence])
            # similarity = 1 - cosine distance to each reference embedding
            neg_result = 1 - cosining(X_neg_test, encoded_sentence_tokenized)  # negative score
            pos_result = 1 - cosining(X_pos_test, encoded_sentence_tokenized)  # positive score
            cs['Sentence'].append(sentence_tokenized[sentence])
            cs['Negative Score'].append(neg_result)
            cs['Positive Score'].append(pos_result)
    cs_finall = pd.DataFrame(cs)
    cs_finall.to_excel(string + " score.xlsx", index=False)
    return cs_finall
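For context, a hypothetical call to this function would look like the sketch below; the model name and the input list are placeholders, and it assumes model is a sentence-transformers model and cleaning_text is the normalization helper used above.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("path/to/model")      # placeholder, not the real model id
comments = ["...", "..."]                         # raw Farsi texts fetched from some API
scores = predicting("comments", comments, model)  # also writes "comments score.xlsx"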
Solution 1:[1]
I found the answer in this article from the Hugging Face website. I hope it helps someone else.
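For reference, here is a minimal sketch of what running the exported model with onnxruntime can look like. The file name model.onnx, the tokenizer path, and the assumption that the first graph output is the token embeddings (which are then mean-pooled, as sentence-transformers does by default) are all assumptions you should adjust to your own export.

import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# Placeholders: point these at your own tokenizer and ONNX export
tokenizer = AutoTokenizer.from_pretrained("path/to/original/model")
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

def encode_onnx(sentences):
    # Tokenize to NumPy arrays and keep only the inputs the graph declares
    tokens = tokenizer(sentences, padding=True, truncation=True, return_tensors="np")
    input_names = {inp.name for inp in session.get_inputs()}
    feed = {name: arr for name, arr in tokens.items() if name in input_names}
    last_hidden_state = session.run(None, feed)[0]  # assumed: per-token embeddings
    # Mean-pool over non-padding tokens to get one vector per sentence
    mask = tokens["attention_mask"][..., None].astype(np.float32)
    return (last_hidden_state * mask).sum(axis=1) / mask.sum(axis=1)

With something like this in place, the model.encode(...) calls in predicting can be replaced by encode_onnx([sentence])[0], as long as X_pos_test and X_neg_test are re-encoded with the same function so the vectors stay comparable.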
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Ali Bahadorani |
