Why are my input and output shapes from Keras Conv2D the same dimensions?
I'm trying to rebuild someone else's network.
My (image) data going into the network has this shape:
print(X_train[0].shape)  # (150, 150, 3)
print(len(X_train))      # 2160
print(len(y_train))      # 2160
I can write and get a neural network to run no problem:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D

model = Sequential()
model.add(Input(shape=(150, 150, 3)))
model.add(Conv2D(32, kernel_size=3, strides=(1, 1), activation='relu', padding='same', dilation_rate=1))
model.add(MaxPooling2D(pool_size=(2, 2)))
But when I view the model plot, the Conv2D layer's output has the same height and width as its input.
Can someone explain why the output of the Conv2D layer does not decrease from 150 to 148, as I expected? (Presumably the 'wrong' numbers in the MaxPooling2D layer follow from this, so I only need to understand the discrepancy in the Conv2D layer.)
Solution 1:[1]
You use padding='same', so you don't "lose" any values at the edges.
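The expected 148 comes from the standard convolution output-size arithmetic: with padding='valid' the output is floor((n - k) / s) + 1, while padding='same' pads the input so the output is ceil(n / s). A minimal sketch (the helper function is mine, not a Keras API):

```python
import math

def conv2d_out_size(n, k, s=1, padding="valid"):
    """Output size along one spatial dimension of a 2D convolution."""
    if padding == "valid":
        # No padding: the kernel must fit entirely inside the input.
        return math.floor((n - k) / s) + 1
    # 'same': input is zero-padded so every stride position is covered.
    return math.ceil(n / s)

print(conv2d_out_size(150, 3, padding="valid"))  # 148
print(conv2d_out_size(150, 3, padding="same"))   # 150
```

With stride 1, 'same' padding always preserves the spatial size, which is exactly what the model plot shows.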
This has a good GIF on the different padding strategies.
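You can also compare the two padding modes directly in Keras; a minimal sketch, assuming TensorFlow is installed:

```python
from tensorflow.keras import Input
from tensorflow.keras.layers import Conv2D

inp = Input(shape=(150, 150, 3))

# 'same' pads the edges, so height/width are preserved.
same_out = Conv2D(32, kernel_size=3, padding="same")(inp)

# 'valid' applies no padding, so a 3x3 kernel trims 2 from each dimension.
valid_out = Conv2D(32, kernel_size=3, padding="valid")(inp)

print(same_out.shape)   # (None, 150, 150, 32)
print(valid_out.shape)  # (None, 148, 148, 32)
```

Switching the original model to padding='valid' would give the 148x148 output the question expects.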
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow


