PyTorch - Inferring linear layer in_features
I am building a toy model to take in some images and give me a classification. My model looks like:
conv2d -> pool -> conv2d -> linear -> linear.
My issue is that when we create the model, we have to calculate the first linear layer's in_features based on the size of the input image. If we get new images of a different size, we have to recalculate in_features for that linear layer. Why do we have to do this? Can't it just be inferred?
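It can, in recent PyTorch versions: `torch.nn.LazyLinear` defers creating the weight matrix until the first forward pass and infers in_features from the incoming tensor. A minimal sketch of the toy model described above (the channel counts, kernel sizes, and input resolution here are assumptions, not taken from the question):

```python
import torch
import torch.nn as nn

# Toy model from the question: conv2d -> pool -> conv2d -> linear -> linear.
# Concrete channel counts and kernel sizes below are illustrative assumptions.
class ToyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3)
        self.pool = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3)
        # LazyLinear infers in_features on the first forward pass,
        # so there is no manual size calculation to maintain.
        self.fc1 = nn.LazyLinear(64)
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = torch.relu(self.conv2(x))
        x = torch.flatten(x, start_dim=1)  # keep the batch dimension
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = ToyNet()
out = model(torch.randn(1, 3, 32, 32))  # first call materializes fc1's weights
print(out.shape)               # torch.Size([1, 10])
print(model.fc1.in_features)   # 32 * 13 * 13 = 5408 for a 32x32 input
```

One caveat: the lazy layer's parameters do not exist until the first forward pass, so run a dummy batch through the model before inspecting or optimizing `fc1`'s weights.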
Solution 1:[1]
I also ran into this situation, and here is my solution:

output3 = torch.flatten(output1)      # flatten the conv output to 1-D
output3_1 = output3.clone().detach()  # create a new tensor with requires_grad=False
output4 = output3_1.numpy()           # convert the tensor to a NumPy ndarray
self.size = np.size(output4)          # np.size returns a plain Python int

Now self.size holds the value for in_features.
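The detach/NumPy round-trip above can be skipped entirely: PyTorch tensors already know their element count. A shorter sketch, assuming `output1` is the feature map from the last conv layer with a hypothetical shape of (batch, 32, 13, 13):

```python
import torch

# Hypothetical conv output: (batch, channels, height, width).
# The concrete numbers are assumptions for illustration.
output1 = torch.randn(1, 32, 13, 13)

# Flatten everything except the batch dimension; the size of the
# remaining dimension is exactly the in_features the linear layer needs.
in_features = torch.flatten(output1, start_dim=1).size(1)
print(in_features)  # 5408
```

Equivalently, `output1[0].numel()` gives the same number without any NumPy conversion or gradient bookkeeping.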
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Stack Overflow |
