How do I model a CNN that splits into four different GMP/GAP layers?

I am trying to recreate these four CNN models.

I don't understand what is happening at the layer where the network splits off into four parallel layers, which the author refers to as the third part. I am also not sure what is happening when the branches are concatenated back together. I am trying to model this using PyTorch.

The author describes the model in the following two ways.

In CNNs, one of the most important concepts is the hierarchical convolutional data processing, which was introduced by Kunihiko Fukushima [61] and then developed by Lecun et al. in the LeNet-5 architecture [62]. For the development of the CNN, the architectures are getting deeper. Because the information is abstracted layer by layer, researchers tend to use smaller convolutional filters, such as 1 × 1 and 3 × 3, and more stacked layers rather than bigger filters taking into account the computational costs and learning capacity of the architecture. Thus, in the first and third parts of each architecture, the size of convolutional kernels was set to 3 × 3.

Also,

The third parts of the four architectures are the same. This part consists of two global-max-pooling (GMP) layers and two global-average-pooling (GAP) layers following three convolution layers and a max-pooling layer.

Here is the link to the original paper. I suspect it is a hierarchical model, but I am not sure how I would implement this. Would I use a PyTorch `nn` parallel container? I can't find documentation on this here, but this seems relevant.
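As far as I can tell, PyTorch has no built-in parallel container; branching is usually written directly in `forward` by calling each branch on the same tensor and joining the results with `torch.cat`. Here is a minimal sketch of how I think the third part could look. The channel counts are placeholders, GMP/GAP are implemented with `nn.AdaptiveMaxPool2d(1)`/`nn.AdaptiveAvgPool2d(1)`, and applying all four pooling branches to the same tensor is my guess at the paper's structure:

```python
import torch
import torch.nn as nn

class ThirdPart(nn.Module):
    """Sketch of the 'third part': three 3x3 convs and a max-pool,
    then four parallel global pooling branches concatenated together.
    Channel counts (128) are placeholders, not from the paper."""
    def __init__(self, in_channels=128):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=2),
        )
        # Global pooling: reduce each feature map to a single value.
        self.gmp1 = nn.AdaptiveMaxPool2d(1)
        self.gmp2 = nn.AdaptiveMaxPool2d(1)
        self.gap1 = nn.AdaptiveAvgPool2d(1)
        self.gap2 = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.convs(x)
        # Run the four pooling branches on the same tensor and
        # concatenate along the channel dimension.
        branches = [self.gmp1(x), self.gmp2(x), self.gap1(x), self.gap2(x)]
        return torch.cat([b.flatten(1) for b in branches], dim=1)

model = ThirdPart()
out = model(torch.randn(2, 128, 16, 16))
print(out.shape)  # torch.Size([2, 512])
```

Note that two identical GMP (or GAP) branches fed the same tensor produce identical outputs, so in the actual paper the duplicated branches may be attached at different points in the network; that detail is unclear to me.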

I have this so far.

class first_model(nn.Module):
    def __init__(self):
        super(first_model, self).__init__()
        # The kernel sizes are 2-D tuples, so these should be Conv2d/MaxPool2d
        # (Conv3d expects 3-element kernel sizes).
        self.stack = nn.Sequential(
            nn.Conv2d(in_channels=4, out_channels=16, kernel_size=(3, 3)),
            nn.MaxPool2d(kernel_size=(3, 3)),
            nn.Dropout(),
            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3)),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(1, 1)),
            nn.Dropout(),
            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3)),
        )

    def forward(self, x):
        return self.stack(x)


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
