Category "huggingface-transformers"

Deploying Huggingface model for inference - pytorch-scatter issues

It's my first time with SageMaker, and I'm having issues when trying to execute this script I took from this Huggingface model (deploy tab): from sagemaker.huggingface…
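
For context, the deploy-tab snippet such questions start from typically looks like the sketch below; the model id, task, instance type, and version pins are placeholders to adjust, not values from the question:

    from sagemaker.huggingface import HuggingFaceModel
    import sagemaker

    role = sagemaker.get_execution_role()

    # Hub configuration; model id and task are placeholders
    hub = {
        "HF_MODEL_ID": "distilbert-base-uncased",
        "HF_TASK": "text-classification",
    }

    # Version pins are assumptions; match them to your SageMaker SDK
    huggingface_model = HuggingFaceModel(
        env=hub,
        role=role,
        transformers_version="4.6.1",
        pytorch_version="1.7.1",
        py_version="py36",
    )

    predictor = huggingface_model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.xlarge",
    )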

How to specify a proxy in transformers pipeline

I am using the sentiment-analysis pipeline as described here. from transformers import pipeline classifier = pipeline('sentiment-analysis') It's failing with a connection…
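
pipeline() has no proxy argument of its own, but from_pretrained accepts a proxies dict, so one workable sketch is to load the model and tokenizer explicitly and hand them to the pipeline; the proxy URL and model id below are placeholders:

    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              pipeline)

    # Placeholder proxy; replace with your own
    proxies = {"http": "http://proxy.example.com:3128",
               "https": "http://proxy.example.com:3128"}

    name = "distilbert-base-uncased-finetuned-sst-2-english"
    model = AutoModelForSequenceClassification.from_pretrained(name, proxies=proxies)
    tokenizer = AutoTokenizer.from_pretrained(name, proxies=proxies)

    classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
    print(classifier("I love this."))

Setting the HTTP_PROXY/HTTPS_PROXY environment variables before the download is another common route.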

How to load a custom dataset from CSV in HuggingFace

I would like to load a custom dataset from CSV using HuggingFace transformers.
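
This is handled by the datasets library (a separate package from transformers); a minimal sketch with placeholder file paths:

    from datasets import load_dataset

    # File paths are placeholders
    dataset = load_dataset(
        "csv",
        data_files={"train": "train.csv", "test": "test.csv"},
    )
    print(dataset["train"][0])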

How to train a BERT model from scratch with HuggingFace?

I found an answer about training a model from scratch in this question: How to train BERT from scratch on a new domain for both MLM and NSP? One answer uses Trainer and…
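
A minimal from-scratch MLM sketch with Trainer; the tokenizer path and the tokenized_ds dataset are hypothetical stand-ins for your own artifacts:

    from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = BertTokenizerFast.from_pretrained("./my-tokenizer")  # hypothetical path
    config = BertConfig(vocab_size=tokenizer.vocab_size)
    model = BertForMaskedLM(config)  # random init, not from_pretrained

    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
    args = TrainingArguments(output_dir="./bert-scratch", num_train_epochs=3)

    trainer = Trainer(model=model, args=args, data_collator=collator,
                      train_dataset=tokenized_ds)  # tokenized_ds: your tokenized corpus
    trainer.train()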

Early stopping in Bert Trainer instances

I am fine-tuning a BERT model for a multiclass classification task. My problem is that I don't know how to add "early stopping" to those Trainer instances. Any…
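
transformers ships an EarlyStoppingCallback for exactly this; a sketch where the patience value, model, and datasets are placeholders:

    from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

    args = TrainingArguments(
        output_dir="./out",
        evaluation_strategy="epoch",       # early stopping needs periodic eval
        save_strategy="epoch",
        load_best_model_at_end=True,       # required by the callback
        metric_for_best_model="eval_loss",
    )

    trainer = Trainer(
        model=model,                       # your model (placeholder)
        args=args,
        train_dataset=train_ds,            # placeholder datasets
        eval_dataset=eval_ds,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )
    trainer.train()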

Import of transformers package throwing ValueError

I have successfully installed the transformers package in my Jupyter Notebook from the Anaconda administrator console using the command 'conda install -c conda-forge transformers'…

Continual pre-training vs. Fine-tuning a language model with MLM

I have some custom data I want to use to further pre-train the BERT model. I’ve tried the two following approaches so far: starting with a pre-trained BERT…
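
The continue-pre-training variant differs from a from-scratch setup only in how the model is created: load the existing weights instead of instantiating a fresh config. A sketch, where tokenized_ds is a hypothetical tokenized version of the custom corpus:

    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    # Load pre-trained weights to continue MLM pre-training on new data
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="./bert-continued"),
        data_collator=collator,
        train_dataset=tokenized_ds,  # hypothetical tokenized custom corpus
    )
    trainer.train()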

Huggingface transformers module not recognized by Anaconda

I am using Anaconda, Python 3.7, Windows 10. I tried to install transformers from https://huggingface.co/transformers/ in my env. I am aware that I must have either…

Hugging Face transformers not downloading based on requirements list / pip freeze

A pip freeze yields the following for Hugging Face transformers: git+https://github.com/huggingface/transformers.git@8ddbfe975264a94f124684a138a2a5ca89a2bd0d

T5Tokenizer requires the SentencePiece library but it was not found in your environment

I am trying to explore T5. This is the code: !pip install transformers from transformers import T5Tokenizer, T5ForConditionalGeneration qa_input = """question: Wh…
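
The usual fix is installing sentencepiece alongside transformers and restarting the runtime, after which the import from the question works (t5-small below is just an example checkpoint):

    # pip install transformers sentencepiece   (then restart the kernel)
    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")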

How to output the list of probabilities on each token via model.generate?

Right now I have: model = GPTNeoForCausalLM.from_pretrained(model_name) tokenizer = GPT2Tokenizer.from_pretrained(model_name) input_ids = tokenizer(prompt, return_tensors…
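
generate() can return per-step scores when asked; a sketch that converts them to probabilities, with the model name and prompt as placeholders:

    import torch
    from transformers import GPT2Tokenizer, GPTNeoForCausalLM

    model_name = "EleutherAI/gpt-neo-125M"  # placeholder checkpoint
    model = GPTNeoForCausalLM.from_pretrained(model_name)
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)

    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    out = model.generate(input_ids, max_new_tokens=5,
                         output_scores=True, return_dict_in_generate=True)

    # out.scores holds one logits tensor per generated token; softmax -> probabilities
    for step_logits in out.scores:
        probs = torch.softmax(step_logits, dim=-1)
        print(probs.topk(3))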

Use of PyTorch Dataset for model inference on GPU

I am running T5-base-grammar-correction for grammar correction on my dataframe with a text column: from happytransformer import HappyTextToText from happytransformer…
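
A common pattern is to wrap the dataframe column in a torch Dataset and batch it through the model on the GPU. A sketch using the raw transformers classes instead of happytransformer; the hub id, the dataframe df, and the batch size are assumptions:

    import torch
    from torch.utils.data import DataLoader, Dataset
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    name = "vennify/t5-base-grammar-correction"  # assumed hub id for this model
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSeq2SeqLM.from_pretrained(name).to("cuda").eval()

    class TextDataset(Dataset):
        def __init__(self, texts):
            self.texts = texts
        def __len__(self):
            return len(self.texts)
        def __getitem__(self, idx):
            return "grammar: " + self.texts[idx]  # task prefix this model expects

    loader = DataLoader(TextDataset(df["text"].tolist()), batch_size=32)  # df: your dataframe

    results = []
    with torch.no_grad():
        for batch in loader:
            enc = tokenizer(list(batch), return_tensors="pt",
                            padding=True, truncation=True).to("cuda")
            out = model.generate(**enc, max_length=128)
            results.extend(tokenizer.batch_decode(out, skip_special_tokens=True))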

Huggingface distilbert-base-uncased-finetuned-sst-2-english runs out of RAM with only a few KB?

My dataset is only 10 thousand sentences. I run it in batches of 100, and clear the memory on each run. I manually slice the sentences to only 50 characters. After…
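
When inference eats memory like this, the usual culprit is that each forward pass builds an autograd graph; wrapping the loop in torch.no_grad() keeps memory flat. A sketch, with sentences standing in for the question's data:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "distilbert-base-uncased-finetuned-sst-2-english"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name).eval()

    all_logits = []
    with torch.no_grad():  # no autograd graph, so batches don't accumulate memory
        for i in range(0, len(sentences), 100):   # sentences: your 10k-sentence list
            batch = sentences[i:i + 100]
            enc = tokenizer(batch, padding=True, truncation=True,
                            return_tensors="pt")
            all_logits.append(model(**enc).logits)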

Pretraining a language model on a small custom corpus

I was curious if it is possible to use transfer learning in text generation, and re-train/pre-train it on a specific kind of text. For example, having a pre-trained…
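
For generation this usually means fine-tuning GPT-2 on the custom corpus rather than training from scratch; a sketch with Trainer, where the corpus file and hyperparameters are placeholders:

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    raw = load_dataset("text", data_files={"train": "corpus.txt"})  # placeholder file

    def tok(batch):
        return tokenizer(batch["text"], truncation=True, max_length=128)

    train_ds = raw["train"].map(tok, batched=True, remove_columns=["text"])

    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM
    trainer = Trainer(model=model,
                      args=TrainingArguments(output_dir="./gpt2-custom"),
                      data_collator=collator, train_dataset=train_ds)
    trainer.train()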

Asking GPT-2 to finish a sentence with HuggingFace transformers

I am currently generating text from left context using the example script run_generation.py of the HuggingFace transformers library with GPT-2: $ python transf…
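
The same thing can be done without the example script via generate(); a minimal sketch where the prompt and sampling settings are placeholders:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The weather today is", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_k=50,
                         pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(out[0], skip_special_tokens=True))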

How to apply max_length to truncate the token sequence from the left in a HuggingFace tokenizer?

In the HuggingFace tokenizer, applying the max_length argument specifies the length of the tokenized text. I believe it truncates the sequence to max_length-2 (…
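
Recent transformers releases expose a truncation_side setting for this; a sketch (availability depends on your installed version):

    from transformers import AutoTokenizer

    # truncation_side exists in recent transformers releases
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased",
                                              truncation_side="left")
    # or set it after loading:
    tokenizer.truncation_side = "left"

    # Tokens are now dropped from the left when max_length is exceeded
    enc = tokenizer("a very long input that will be truncated",
                    truncation=True, max_length=8)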

How to train a model in SageMaker Studio with .train and .test extension dataset files?

I'm trying to implement ML models with Amazon SageMaker Studio; the thing is that the model I want to implement is from Hugging Face, and it uses a Dataset…
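
If the .train/.test files are plain text, the datasets library's generic "text" loader ignores the extension; a plausible sketch with placeholder paths:

    from datasets import load_dataset

    # The loader cares about the file format, not the extension
    dataset = load_dataset("text", data_files={"train": "data/corpus.train",
                                               "test": "data/corpus.test"})
    print(dataset["train"][0])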

AttributeError: 'GPT2TokenizerFast' object has no attribute 'max_len'

I am just using the huggingface transformers library and get the following message when running run_lm_finetuning.py: AttributeError: 'GPT2TokenizerFast' object has no attribute 'max_len'…
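
The max_len attribute was removed in newer transformers in favor of model_max_length, so the fix is either pinning an older transformers version or updating the script to the new name:

    from transformers import GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    # old scripts read tokenizer.max_len; the current attribute is:
    print(tokenizer.model_max_length)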

Huggingface transformers: convert logit scores to probabilities

I'm a beginner in this field and am stuck. I am following this tutorial (https://towardsdatascience.com/multi-label-multi-class-text-classification-with-bert-tr…
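
Logits become probabilities through softmax (single-label classification) or sigmoid (multi-label, as in that tutorial); a sketch with made-up example logits:

    import torch

    logits = torch.tensor([[2.0, -1.0, 0.5]])  # example model output

    probs_single = torch.softmax(logits, dim=-1)  # classes compete; rows sum to 1
    probs_multi = torch.sigmoid(logits)           # independent per-label probabilities
    print(probs_single, probs_multi)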

Fine-tuning DistilBertForSequenceClassification: not learning; why is the loss not changing? Are the weights not updated?

I am relatively new to PyTorch and Huggingface transformers and experimented with DistilBertForSequenceClassification on this Kaggle dataset: from transformers…
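
A frequent cause in manual PyTorch loops is a missing optimizer.step()/zero_grad(), labels never being passed to the model, or a learning rate far too high for fine-tuning. A minimal correct loop for reference; train_loader and the learning rate are placeholders:

    import torch
    from transformers import DistilBertForSequenceClassification

    model = DistilBertForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # small lr for fine-tuning

    model.train()
    for batch in train_loader:  # train_loader: your DataLoader of tokenized batches
        optimizer.zero_grad()
        out = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=batch["labels"])  # passing labels makes the model return loss
        out.loss.backward()
        optimizer.step()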