I was doing sentence-pair classification using BERT. First, I encode the sentence pairs as train_encode = tokenizer(train1, train2, padding="max_length", truncation=True) …
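A minimal sketch of that pair encoding, assuming a standard BERT tokenizer and that train1/train2 are parallel lists of first and second sentences (the model name and max_length below are assumptions):

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
train1 = ["The cat sat.", "It is sunny."]
train2 = ["A cat was sitting.", "The weather is nice."]

# Pair encoding: each pair becomes one sequence with [SEP] between the
# sentences; token_type_ids mark segment A (0) vs. segment B (1).
train_encode = tokenizer(
    train1, train2,
    padding="max_length", truncation=True, max_length=32,
    return_tensors="pt",
)
print(train_encode["input_ids"].shape)    # (2, 32)
print(train_encode["token_type_ids"][0])  # 0s for sentence A, 1s for B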
It worked fine for months; then I interrupted a "bert-large-cased" download, and now the following code returns the error in the title: from transformers import BertModel …
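A minimal recovery sketch, assuming the interrupted download left a corrupt file in the Hugging Face cache; force_download=True re-fetches the weights and overwrites the cached copy:

from transformers import BertModel

# Re-download the weights, ignoring the (possibly corrupt) cached files.
model = BertModel.from_pretrained("bert-large-cased", force_download=True)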
My code: model = SentenceTransformer('hiiamsid/sentence_similarity_spanish_es'). I apply the model to the text column of the data frame: prueba['encoder'] = prueba…
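A minimal sketch of applying the encoder to a DataFrame column; the column name "texto" is an assumption, and one batched encode call is usually much faster than a row-by-row .apply:

import pandas as pd
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("hiiamsid/sentence_similarity_spanish_es")
prueba = pd.DataFrame({"texto": ["hola mundo", "buenos días"]})  # assumed column

# One batched call instead of prueba["texto"].apply(model.encode):
embeddings = model.encode(prueba["texto"].tolist())
prueba["encoder"] = list(embeddings)  # one embedding vector per row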
I am new to deep learning and have come across BERT. I tried small_bert/bert_en_uncased_L-4_H-512_A-8 as a TensorFlow tutorial did, and the result was quite amazing …
I am trying to use nlpaug to swap some words out, but I am having an issue with it permanently replacing tokens with the [UNK] token. I am using the docs here: https:…
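A minimal usage sketch, assuming the contextual word-substitution augmenter from nlpaug; whether a word survives the round trip depends on the underlying model's vocabulary, so out-of-vocabulary words can come back as [UNK]:

import nlpaug.augmenter.word as naw

aug = naw.ContextualWordEmbsAug(
    model_path="bert-base-uncased",  # assumption: a BERT-based augmenter
    action="substitute",
)
print(aug.augment("The quick brown fox jumps over the lazy dog"))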
When I try the Hugging Face models, it gives the following error message: from transformers import AutoTokenizer, AutoModel; tokenizer = AutoTokenizer.from_pretrained(…)
I'm trying to fine-tune a BERT model for sentiment analysis (classifying text as positive/negative) with the Hugging Face Trainer API. My dataset has two columns, Text …
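A minimal fine-tuning sketch with the Trainer API, assuming a "Text" column plus a "Label" column (the second name is a guess) and a two-label classification head:

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy stand-in for the real dataset.
data = Dataset.from_dict({"Text": ["great!", "awful."], "Label": [1, 0]})
data = data.map(lambda x: tokenizer(x["Text"], truncation=True,
                                    padding="max_length", max_length=64))
data = data.rename_column("Label", "labels")  # Trainer expects "labels"

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=data,
)
trainer.train()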
I have been using BERT and trying to compile the model using the line of code below: model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased') …
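A minimal compile sketch; the sequence-classification head returns logits, so from_logits=True is the usual pairing (the optimizer and learning rate are assumptions):

import tensorflow as tf
from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)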
I was trying to create a custom NER model. I used the spaCy library to create the model, and this line of code creates the config file from the base.config file: …
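For reference, the standard spaCy 3 command for this step (assuming the usual quickstart workflow; spaCy's docs name the files base_config.cfg and config.cfg, while base.config follows this question) is: python -m spacy init fill-config base.config config.cfg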
I am building an address-matching algorithm. The main problem is that previous models, like Conditional Random Fields (CRF) from Parserator and Averaged Perceptron …
I am using the sentiment-analysis pipeline as described here: from transformers import pipeline; classifier = pipeline('sentiment-analysis'). It's failing with a connection error …
I am trying to replicate the code from this page. At my workplace we have access to the transformers and PyTorch libraries but cannot connect to the internet from our Python environment …
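A minimal offline sketch: download once on a machine with internet access, copy the folder across, then load from the local path (the model name and paths are assumptions):

from transformers import AutoModel, AutoTokenizer

# On a machine with internet access:
AutoTokenizer.from_pretrained("bert-base-uncased").save_pretrained("./bert-base-uncased")
AutoModel.from_pretrained("bert-base-uncased").save_pretrained("./bert-base-uncased")

# On the offline machine, point at the copied folder:
tokenizer = AutoTokenizer.from_pretrained("./bert-base-uncased", local_files_only=True)
model = AutoModel.from_pretrained("./bert-base-uncased", local_files_only=True)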
I found an answer about training a model from scratch in this question: How to train BERT from scratch on a new domain for both MLM and NSP? One answer uses Trainer and …
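A minimal from-scratch sketch, assuming a fresh config rather than pre-trained weights; BertForPreTraining carries both heads, so each batch must supply MLM labels and a next_sentence_label alongside the usual inputs:

from transformers import BertConfig, BertForPreTraining

config = BertConfig()                # fresh random weights, not pre-trained
model = BertForPreTraining(config)
# outputs = model(input_ids=..., token_type_ids=..., attention_mask=...,
#                 labels=mlm_labels, next_sentence_label=nsp_labels)
# outputs.loss is the sum of the MLM and NSP losses.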
I am fine-tuning a BERT model for a multiclass classification task. My problem is that I don't know how to add "early stopping" to those Trainer instances. Any ideas?
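A minimal early-stopping sketch with the Trainer API: EarlyStoppingCallback needs per-epoch evaluation plus load_best_model_at_end so there is a tracked metric (the model, toy dataset, and patience below are assumptions):

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer, EarlyStoppingCallback)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "labels": [0, 1, 2, 0]})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True,
                                padding="max_length", max_length=16))
split = ds.train_test_split(test_size=0.5)

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",   # renamed eval_strategy in newer versions
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    num_train_epochs=20,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()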
I have some custom data I want to use to further pre-train the BERT model. I've tried the following two approaches so far: starting with a pre-trained BERT …
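A minimal continued-pretraining sketch (MLM only), assuming the custom data is a list of strings; DataCollatorForLanguageModeling applies the dynamic masking:

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          TrainingArguments, Trainer)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

texts = ["domain sentence one.", "domain sentence two."]  # assumed data
ds = Dataset.from_dict({"text": texts}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=128),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()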
I am using biobert-embeddings==0.1.2 and torch==1.2.0 to embed some documents, but I get the following error when I try to load the model via from biob…
I'm having problems integrating a BERT embedding layer into a BiLSTM model for a text classification task. My dataset is in a form where each row has two columns: text and …
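A minimal wiring sketch, assuming bert-base-uncased, a fixed sequence length of 128, and a binary label; the BiLSTM consumes BERT's per-token hidden states:

import tensorflow as tf
from transformers import TFBertModel

bert = TFBertModel.from_pretrained("bert-base-uncased")
input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32,
                                name="attention_mask")

# Per-token contextual embeddings from BERT feed the BiLSTM.
hidden = bert(input_ids, attention_mask=attention_mask).last_hidden_state
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(hidden)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model([input_ids, attention_mask], output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])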
I have a problem training a deep learning model with BERT in TensorFlow on a text dataset. I want to fit() the model but get an error during training.
I am confused by these two structures. In theory, both of their outputs are connected to their inputs. What magic makes the 'self-attention mechanism' more powerful …
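A toy NumPy sketch of the difference (dimensions are assumptions): a fully connected layer mixes features with one fixed weight matrix, while self-attention computes its mixing weights from the input itself, so how positions combine depends on content:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 tokens, 8-dim embeddings

# Fully connected: the same static weights for every input.
W = rng.normal(size=(8, 8))
fc_out = X @ W                # mixing is independent of the content of X

# Self-attention: the mixing weights are a function of X itself.
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(8)                                       # token-token similarity
attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
attn_out = attn @ V           # content-dependent mixing across positions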
I was curious whether it is possible to use transfer learning in text generation and re-train/pre-train a model on a specific kind of text. For example, having a pre-trained …
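A minimal transfer-learning sketch for generation, assuming GPT-2 and a small list of in-domain strings; mlm=False makes the collator produce causal-LM labels:

from datasets import Dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling,
                          TrainingArguments, Trainer)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

texts = ["Once upon a time...", "In a distant galaxy..."]  # assumed corpus
ds = Dataset.from_dict({"text": texts}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=64),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()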