Understanding output shape for 1D convolution

I'm trying to get my head around 1D convolution - specifically, how the padding comes into it.

Suppose I have an input sequence of shape (batch,128,1) and run it through the following Keras layer:

tf.keras.layers.Conv1D(32, 5, strides=2, padding="same")

I get an output of shape (batch,64,32), but I don't understand why the sequence length has been reduced from 128 to 64... I thought the padding="same" parameter kept the output length the same as the input? I suppose that's only true when strides=1, so in this case I'm confused about what padding="same" actually means.



Solution 1:[1]

According to the TensorFlow documentation, in your case we have:

  • filters (the number of filters, i.e. the output channel dimension) = 32
  • kernel_size (the length of the convolution window) = 5
  • strides (how many positions the window moves along the input after each convolution step) = 2

So applying an input of shape (batch, 128, 1) means sliding 32 kernels (each of length 5) along the sequence, jumping two positions after each convolution step. With padding="same", the output length is ceil(128 / 2) = 64 values per filter, so the final output has shape (batch, 64, 32).
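The shape arithmetic above can be sketched in plain Python. This is a minimal sketch, assuming the output-length formulas stated in the TensorFlow docs: ceil(L / s) for "same" and ceil((L - k + 1) / s) for "valid" (the helper name conv1d_output_length is my own, not a Keras API):

```python
import math

def conv1d_output_length(length, kernel_size, strides, padding):
    """Output sequence length of a 1D convolution, per the TF/Keras rules."""
    if padding == "same":
        # "same" pads the borders so every stride position yields an output.
        return math.ceil(length / strides)
    elif padding == "valid":
        # "valid" keeps only positions where the kernel fits entirely.
        return math.ceil((length - kernel_size + 1) / strides)
    raise ValueError(f"unknown padding: {padding}")

# The layer in the question: Conv1D(32, 5, strides=2, padding="same")
print(conv1d_output_length(128, 5, 2, "same"))   # 64
print(conv1d_output_length(128, 5, 2, "valid"))  # 62
```

Note that with strides=1 the "same" formula gives ceil(128 / 1) = 128, which is why padding="same" preserves the length only in the unstrided case.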

padding="same" only determines how the convolution behaves at the borders: the input is zero-padded so that the output length is ceil(input_length / strides), rather than dropping border positions where the kernel does not fully fit (as padding="valid" does). For more details you can check the TensorFlow documentation.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Sadra Sabouri