How to penalize False Negatives more than False Positives in PyTorch BCEWithLogitsLoss
I have a dataset containing text sequences with corresponding labels (0 or 1). The issue is that the dataset has approximately 20x more sequences with label 0 than with label 1. I came across a method to improve the false negative rate (FNR): weight false negatives (FNs) more heavily than false positives (FPs) in the loss function. However, with the loss function I use (PyTorch BCEWithLogitsLoss) this doesn't seem to be an option as far as I could see. Is there a way to implement this?
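For what it's worth, `BCEWithLogitsLoss` does expose a `pos_weight` argument that scales the loss contribution of positive (label 1) examples, which effectively makes false negatives more expensive than false positives. A minimal sketch, assuming a class-imbalance ratio of about 20:1 (the ratio and the example tensors below are illustrative, not from the question):

```python
import torch
import torch.nn as nn

# pos_weight scales the positive-class term of the loss:
# loss_i = -[ pos_weight * y_i * log(sigmoid(x_i)) + (1 - y_i) * log(1 - sigmoid(x_i)) ]
# With pos_weight ~ 20 (roughly the negative:positive ratio described),
# missing a positive (a false negative) costs about 20x a false positive.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([20.0]))

logits = torch.tensor([2.0, -1.5, 0.3])  # raw model outputs (illustrative values)
labels = torch.tensor([1.0, 0.0, 1.0])   # ground-truth labels (illustrative values)

loss = criterion(logits, labels)
print(loss.item())
```

A common starting point is `pos_weight = num_negatives / num_positives`, then tuning from there based on the precision/recall trade-off you need.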
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
