Category "loss-function"

Training loss for Faster-RCNN either becoming NaN or infinity

I want to apply the PyTorch Faster-RCNN module to a custom dataset that I curated and labelled. The implementation details look straightforward; there was a dem
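The usual suspects for NaN or infinite detection losses are degenerate ground-truth boxes and a learning rate that is too high. Below is a minimal sketch (dummy tensors, not the asker's dataset) of a single torchvision Faster R-CNN training step with a finite-loss check and gradient clipping:

```python
# Minimal sketch: one training step for torchvision's Faster R-CNN with the
# common NaN safeguards (modest LR, gradient clipping, finite-loss check).
# Requires torchvision >= 0.13 for the `weights=` argument.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# Dummy batch standing in for the custom dataset: a list of image tensors and
# a list of target dicts with non-degenerate boxes (x2 > x1, y2 > y1).
images = [torch.rand(3, 224, 224)]
targets = [{
    "boxes": torch.tensor([[10.0, 10.0, 100.0, 120.0]]),
    "labels": torch.tensor([1]),
}]

loss_dict = model(images, targets)      # dict of the individual RPN/ROI losses
loss = sum(loss_dict.values())

if not torch.isfinite(loss):
    raise RuntimeError(f"Non-finite loss: {loss_dict}")

optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)  # tame exploding gradients
optimizer.step()
```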

RuntimeError: 1D target tensor expected, multi-target not supported

I am working with a CNN and I get the following error on the line loss = criterion(outputs, data_y). Here is the relevant code snippet: def run(model, X_train,
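This error typically means the targets handed to nn.CrossEntropyLoss are one-hot rows of shape (N, C) instead of class indices of shape (N,). A minimal sketch of the mismatch and the usual fix, with made-up tensors standing in for outputs and data_y:

```python
# Minimal sketch: nn.CrossEntropyLoss expects class *indices* of shape (N,),
# not one-hot integer rows of shape (N, C).
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
outputs = torch.randn(4, 3)                    # logits for 4 samples, 3 classes

one_hot_targets = torch.tensor([[1, 0, 0],
                                [0, 1, 0],
                                [0, 0, 1],
                                [1, 0, 0]])     # shape (4, 3) -> triggers the error

data_y = one_hot_targets.argmax(dim=1)          # shape (4,), dtype long
loss = criterion(outputs, data_y)               # works
print(loss.item())
```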

NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array

I am trying to pass 2 loss functions to a model, since Keras allows that: "loss: String (name of objective function) or objective function or Loss instance. See losses." I
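For reference, the standard way Keras accepts two loss functions is one loss per model output. A minimal sketch, assuming a hypothetical two-output functional model rather than the asker's actual architecture:

```python
# Minimal sketch: a two-output Keras model compiled with one loss per output,
# optionally weighted. Layer names and shapes are placeholders.
import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(8,))
x = keras.layers.Dense(16, activation="relu")(inputs)
class_out = keras.layers.Dense(3, activation="softmax", name="class_out")(x)
reg_out = keras.layers.Dense(1, name="reg_out")(x)

model = keras.Model(inputs, [class_out, reg_out])
model.compile(
    optimizer="adam",
    loss={"class_out": "sparse_categorical_crossentropy", "reg_out": "mse"},
    loss_weights={"class_out": 1.0, "reg_out": 0.5},
)
model.summary()
```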

Calculate loss in Keras without running the model

Is there a way to get the loss of the model, with its current weights, without running evaluate or fit on it? model = keras.Sequential([ keras.layers.In
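One hedged answer: a loss always needs a forward pass, but you can skip evaluate()/fit() by calling the model directly and applying a loss function to its predictions. A minimal sketch with placeholder data and an assumed MSE loss:

```python
# Minimal sketch: compute the loss with the model's current (here untrained)
# weights by running the model as a callable and applying the loss directly.
import numpy as np
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1),
])

x = np.random.rand(16, 4).astype("float32")
y_true = np.random.rand(16, 1).astype("float32")

loss_fn = keras.losses.MeanSquaredError()
y_pred = model(x, training=False)      # forward pass with current weights
loss = loss_fn(y_true, y_pred)
print(float(loss))
```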

Having a very large loss when training a regression model

I want to predict the center of the pupil from an image, so I used a CNN with 3 Dense layers. The input is an image and the output is a coordinate (X, Y). My m
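A very large loss is common when MSE is computed on raw pixel coordinates; a frequent remedy is to normalize the (X, Y) targets to [0, 1]. A minimal sketch under an assumed image size and a placeholder architecture:

```python
# Minimal sketch: scale the (X, Y) targets by the image width/height so MSE
# is not computed on pixel-sized numbers. Sizes and layers are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow import keras

IMG_W, IMG_H = 96, 96                              # assumed image size

model = keras.Sequential([
    keras.layers.Input(shape=(IMG_H, IMG_W, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(2, activation="sigmoid"),   # normalized (x, y)
])
model.compile(optimizer="adam", loss="mse")

images = np.random.rand(8, IMG_H, IMG_W, 1).astype("float32")
centers_px = np.random.rand(8, 2) * [IMG_W, IMG_H]          # pixel coordinates
centers_norm = (centers_px / [IMG_W, IMG_H]).astype("float32")

model.fit(images, centers_norm, epochs=1, verbose=0)
```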

Fine-tuning DistilBertForSequenceClassification: it is not learning; why is the loss not changing? Are the weights not updated?

I am relatively new to PyTorch and Huggingface-transformers and experimented with DistilBertForSequenceClassification on this Kaggle dataset. from transformers
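When the loss never moves, the usual causes are an optimizer that was never built over model.parameters() or a missing backward()/step() call each batch. A minimal sketch of one explicit DistilBERT training step on placeholder data:

```python
# Minimal sketch: one explicit fine-tuning step. The weights only update if the
# optimizer wraps model.parameters() and backward()/step() run every batch.
import torch
from transformers import DistilBertForSequenceClassification, DistilBertTokenizer

model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["great movie", "terrible movie"]        # stand-in for the Kaggle data
labels = torch.tensor([1, 0])

model.train()
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)         # outputs.loss is the CE loss

optimizer.zero_grad()
outputs.loss.backward()
optimizer.step()
print(outputs.loss.item())
```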