Problem using torch.nn.CosineEmbeddingLoss()
I'm trying to use torch.nn.CosineEmbeddingLoss to evaluate the cosine distance between two tensors, as described in this blog post: https://medium.com/p/53eefdfdbcc7 .
The author claims that it can be used in the following way:
loss_function = torch.nn.CosineEmbeddingLoss(reduction='none')
# . . . Then during training . . .
loss = loss_function(reconstructed, input_data).sum()
loss.backward()
but when I try to evaluate this in my case,
nn.CosineEmbeddingLoss(reduction='none')(vec1,vec2).sum()
I get
TypeError: forward() missing 1 required positional argument: 'target'
What target should I specify here? I just want to evaluate the distance between two vectors.
Solution 1:[1]
Please read the documentation for the loss function you want to use, nn.CosineEmbeddingLoss, carefully: the function does more than just compute the cosine distance between two vectors. It takes a third argument, a target of 1 or -1 for each pair, and treats the distance differently depending on that value: for target 1 the loss is 1 - cos(x1, x2) (the pair should be similar), while for target -1 the loss pushes the vectors apart.
I think you are confusing nn.CosineEmbeddingLoss with nn.CosineSimilarity.
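To make the distinction concrete, here is a small sketch (tensor shapes chosen for illustration) showing how to call nn.CosineEmbeddingLoss with the required target argument, and how nn.CosineSimilarity gives plain cosine distance directly. With a target of all ones, CosineEmbeddingLoss reduces to 1 - cos(x1, x2), so the two results coincide:

```python
import torch
import torch.nn as nn

vec1 = torch.randn(4, 8)  # batch of 4 vectors of dimension 8
vec2 = torch.randn(4, 8)

# CosineEmbeddingLoss needs a target of 1 or -1 per pair.
# target = 1 means "this pair should be similar".
target = torch.ones(vec1.size(0))
loss = nn.CosineEmbeddingLoss(reduction='none')(vec1, vec2, target).sum()

# For a plain cosine distance, use CosineSimilarity instead:
cos = nn.CosineSimilarity(dim=1)
distance = (1 - cos(vec1, vec2)).sum()

# With target == 1 everywhere, the two quantities agree.
print(torch.allclose(loss, distance, atol=1e-6))
```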
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Shai |
