ValueError: Dimensions must be equal, but are 252 and 256
I am trying to implement a standard autoencoder that simply reconstructs the images fed into it.
The images have been resized to (256, 256), and the data has been split into train and test sets so that the train set has shape (35, 256, 256, 3) and the test set has shape (12, 256, 256, 3).
My model is as follows:
from tensorflow.keras.layers import Input, Conv2D, MaxPool2D, UpSampling2D
from tensorflow.keras.models import Model

input_layer = Input(shape=(256, 256, 3))
# encoding architecture
encoded_layer1 = Conv2D(64, (3, 3), activation='relu', padding='same')(input_layer)
encoded_layer1 = MaxPool2D( (2, 2), padding='same')(encoded_layer1)
encoded_layer2 = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded_layer1)
encoded_layer2 = MaxPool2D( (2, 2), padding='same')(encoded_layer2)
encoded_layer3 = Conv2D(16, (3, 3), activation='relu', padding='same')(encoded_layer2)
latent_view = MaxPool2D( (2, 2), padding='same')(encoded_layer3)
# decoding architecture
decoded_layer1 = Conv2D(16, (3, 3), activation='relu', padding='same')(latent_view)
decoded_layer1 = UpSampling2D((2, 2))(decoded_layer1)
decoded_layer2 = Conv2D(32, (3, 3), activation='relu', padding='same')(decoded_layer1)
decoded_layer2 = UpSampling2D((2, 2))(decoded_layer2)
decoded_layer3 = Conv2D(64, (3, 3), activation='relu')(decoded_layer2)
decoded_layer3 = UpSampling2D((2, 2))(decoded_layer3)
output_layer = Conv2D(1, (3, 3), padding='same')(decoded_layer3)
Then I build and compile the model:
model = Model(input_layer, output_layer)
model.compile(optimizer='adam', loss='mse')
model.summary()
Model summary:
Layer (type) Output Shape Param #
=================================================================
input_22 (InputLayer) [(None, 256, 256, 3)] 0
_________________________________________________________________
conv2d_141 (Conv2D) (None, 256, 256, 64) 1792
_________________________________________________________________
max_pooling2d_60 (MaxPooling (None, 128, 128, 64) 0
_________________________________________________________________
conv2d_142 (Conv2D) (None, 128, 128, 32) 18464
_________________________________________________________________
max_pooling2d_61 (MaxPooling (None, 64, 64, 32) 0
_________________________________________________________________
conv2d_143 (Conv2D) (None, 64, 64, 16) 4624
_________________________________________________________________
max_pooling2d_62 (MaxPooling (None, 32, 32, 16) 0
_________________________________________________________________
conv2d_144 (Conv2D) (None, 32, 32, 16) 2320
_________________________________________________________________
up_sampling2d_60 (UpSampling (None, 64, 64, 16) 0
_________________________________________________________________
conv2d_145 (Conv2D) (None, 64, 64, 32) 4640
_________________________________________________________________
up_sampling2d_61 (UpSampling (None, 128, 128, 32) 0
_________________________________________________________________
conv2d_146 (Conv2D) (None, 126, 126, 64) 18496
_________________________________________________________________
up_sampling2d_62 (UpSampling (None, 252, 252, 64) 0
_________________________________________________________________
conv2d_147 (Conv2D) (None, 252, 252, 1) 577
=================================================================
Total params: 50,913
Trainable params: 50,913
Non-trainable params: 0
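One way to see where 252 comes from is to trace the decoder's spatial sizes by hand. The snippet below is a minimal sketch (`conv_out` and `up_out` are hypothetical helper names, not Keras functions) of the standard output-size formulas: a stride-1 `Conv2D` keeps the size with `padding='same'` but shrinks it by `kernel - 1` with the default `padding='valid'`, while `UpSampling2D` multiplies it by the upsampling factor:

```python
# Spatial size after a stride-1 Conv2D: unchanged with padding='same',
# shrunk by (kernel - 1) with padding='valid' (the Keras default).
def conv_out(size, kernel, padding):
    return size if padding == "same" else size - kernel + 1

# UpSampling2D simply repeats rows/columns, multiplying the size.
def up_out(size, factor=2):
    return size * factor

# Decoder path, starting from the 32x32 latent view in the summary above:
s = 32
s = conv_out(s, 3, "same")   # conv2d_144      -> 32
s = up_out(s)                # up_sampling2d_60 -> 64
s = conv_out(s, 3, "same")   # conv2d_145      -> 64
s = up_out(s)                # up_sampling2d_61 -> 128
s = conv_out(s, 3, "valid")  # conv2d_146 has no padding arg -> 126
s = up_out(s)                # up_sampling2d_62 -> 252
s = conv_out(s, 3, "same")   # conv2d_147      -> 252
print(s)  # 252, not the 256 the loss expects
```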
And I train with:
from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=5, mode='auto')
history = model.fit(x_train, y_train, epochs=20, batch_size=2048, validation_data=(x_test, y_test), callbacks=[early_stopping])
After doing this I get the error:
ValueError: Dimensions must be equal, but are 252 and 256 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](model_23/conv2d_147/BiasAdd, IteratorGetNext:1)' with input shapes: [?,252,252,1], [?,256,256,3].
I do not understand why my dimensions suddenly go to 252.
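For what it's worth, the summary points at conv2d_146: that `Conv2D` call omits `padding='same'`, so with the default `padding='valid'` a 3x3 kernel shrinks 128 to 126, and the following `UpSampling2D` doubles it to 252. The output layer also emits 1 channel while the targets are (256, 256, 3). A sketch of the corrected model, assuming everything else stays as posted (the only changes are `padding='same'` on the third decoder conv and 3 output channels):

```python
from tensorflow.keras.layers import Input, Conv2D, MaxPool2D, UpSampling2D
from tensorflow.keras.models import Model

input_layer = Input(shape=(256, 256, 3))
# encoder (unchanged)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(input_layer)
x = MaxPool2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = MaxPool2D((2, 2), padding='same')(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
latent_view = MaxPool2D((2, 2), padding='same')(x)
# decoder
x = Conv2D(16, (3, 3), activation='relu', padding='same')(latent_view)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
# padding='same' added here so 128 stays 128 instead of shrinking to 126
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
# 3 output channels to match the (256, 256, 3) targets (was 1)
output_layer = Conv2D(3, (3, 3), padding='same')(x)

model = Model(input_layer, output_layer)
model.compile(optimizer='adam', loss='mse')
print(model.output_shape)  # (None, 256, 256, 3)
```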
Sources
Source: Stack Overflow, licensed under CC BY-SA 3.0.