Define flatten layer in neural network using PyTorch

I'm trying to define a flatten layer before initiating the fully connected layers. My input is a tensor of shape (512, 2, 2), so I want to flatten this tensor before the FC layers.

I get this error:

        empty(): argument 'size' must be tuple of ints, but found element of type Flatten at pos 2
import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(self.flatten, 512)  # <-- this line raises the error
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 3)

    def forward(self, x):
        x = self.flatten(x)  # Flatten layer
        x = torch.ReLU(self.fc1(x))
        x = torch.ReLU(self.fc2(x))
        x = torch.softmax(self.fc3(x))
        return x


Solution 1:[1]

This line is not correct:

        self.fc1 = nn.Linear(self.flatten, 512)

The first argument of nn.Linear, in_features, must be an int, not an nn.Module.

In your case you defined the flatten attribute as an nn.Flatten module:

        self.flatten = nn.Flatten()

To fix this issue, pass in_features equal to the number of features after flattening:

        self.fc1 = nn.Linear(n_features_after_flatten, 512)
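For an input of shape (512, 2, 2), the number of features after flattening is 512 * 2 * 2 = 2048. A minimal sketch of the corrected model (the incidental torch.ReLU/torch.softmax calls from the question are also replaced with their working counterparts torch.relu and torch.softmax(..., dim=1)):

```python
import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        # Input shape (512, 2, 2) flattens to 512 * 2 * 2 = 2048 features
        self.fc1 = nn.Linear(512 * 2 * 2, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 3)

    def forward(self, x):
        x = self.flatten(x)            # (N, 512, 2, 2) -> (N, 2048)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return torch.softmax(self.fc3(x), dim=1)

net = Network()
out = net(torch.randn(4, 512, 2, 2))   # batch of 4
print(out.shape)                       # torch.Size([4, 3])
```

Note that nn.Flatten by default keeps the batch dimension (it flattens from dim 1 onward), so a batched input (N, 512, 2, 2) becomes (N, 2048), which matches the in_features of fc1.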

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 trsvchn