input_shape in Keras

I have a question about the input_shape used in a Sequential model (Keras).

The shape of my training set is: x_train.shape = (97, 167)

from tensorflow.keras import layers, models  # import assumed; adjust if you use standalone Keras

def build_model():
    model = models.Sequential()
    # 65 units on the 167 input features, followed by a single regression output
    model.add(layers.Dense(65, activation='relu', input_shape=(x_train.shape[1],)))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model
  1. Why don't I get the same result when I use input_shape=(x_train.shape[1],) and input_shape=(x_train.shape[1],1)?
  2. Note that input_shape=(x_train.shape[1],1) is equal to (167,1).
  3. How can I work out the right input_shape for a model?


Solution 1:[1]

In Keras, each type of layer requires a certain number of dimensions in its input:

Dense layers require inputs of shape (batch_size, input_size), or more generally (batch_size, optional, ..., optional, input_size). Here input_size is the number of features in your data, 167 in your case. The full input shape is therefore (None, 167), where None stands for the batch size; the layer itself only needs the feature part, input_shape=(167,).
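As a minimal sketch of this case (assuming the tensorflow.keras API; layer sizes are just illustrative), only the feature axis goes into input_shape and Keras reports the batch axis as None:

from tensorflow.keras import layers, models

# Only the feature dimension is passed; the batch dimension is implicit.
model = models.Sequential()
model.add(layers.Dense(65, activation='relu', input_shape=(167,)))
model.add(layers.Dense(1))

model.summary()
# dense   (Dense) -> output shape (None, 65)
# dense_1 (Dense) -> output shape (None, 1)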

The 2D convolutional layers need inputs as:

  • if using channels_last: (batch_size, imageside1, imageside2, channels)

  • if using channels_first: (batch_size, channels, imageside1, imageside2)

The 1D convolutions and recurrent layers use (batch_size, sequence_length, features).
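The sketch below illustrates those per-layer expectations (assuming tensorflow.keras with the default channels_last image format; the concrete sizes are made up for the example):

from tensorflow.keras import layers, models

# 2D convolution: (batch_size, image_side1, image_side2, channels)
cnn = models.Sequential()
cnn.add(layers.Conv2D(8, (3, 3), activation='relu', input_shape=(28, 28, 1)))
print(cnn.output_shape)   # (None, 26, 26, 8)

# Recurrent layer: (batch_size, sequence_length, features)
rnn = models.Sequential()
rnn.add(layers.LSTM(16, input_shape=(167, 1)))
print(rnn.output_shape)   # (None, 16)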

Using input_shape=(x_train.shape[1], 1), i.e. (167, 1), adds one more dimension to the expected input: the model then wants data of shape (batch_size, 167, 1) and applies the Dense layers along the last axis, which is why the two settings give different results.
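To make the difference concrete, here is a small comparison of the two input_shape values from the question (a sketch, again assuming tensorflow.keras): with (167, 1) the input is 3D, the Dense layers act on the last axis, the output keeps the extra dimension, and x_train would also have to be reshaped to (97, 167, 1) before fitting.

from tensorflow.keras import layers, models

flat = models.Sequential([
    layers.Dense(65, activation='relu', input_shape=(167,)),
    layers.Dense(1),
])
print(flat.output_shape)      # (None, 1)      -> one prediction per sample

expanded = models.Sequential([
    layers.Dense(65, activation='relu', input_shape=(167, 1)),
    layers.Dense(1),
])
print(expanded.output_shape)  # (None, 167, 1) -> one prediction per feature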

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Tfer3