Convolve a pre-trained network over a bigger image with Keras/TensorFlow
I want to define a TensorFlow convolutional model that includes a pretrained model inside it.
I have a pretrained sequential model defined as:
from tensorflow import keras
from tensorflow.keras import layers

kernel_size = 11

def my_base_model():
    model = keras.Sequential([
        layers.Flatten(input_shape=[kernel_size, kernel_size]),
        layers.Dense(32, activation='relu', name="layer1"),
        layers.Dense(1, name="layer2", activation="sigmoid"),
    ])
    # ...
    return model

model = my_base_model()
model.fit(...)
Now, I want to use this network as a sliding filter over arbitrary-size images.
To streamline this, I am thinking of building another CNN around it. For example, suppose the input image shape is (640, 640); the goal is a (640 - 11 + 1, 640 - 11 + 1) = (630, 630) output image:
def my_cnn_model():
    model = keras.Sequential([
        layers.Conv2D(1, (kernel_size, kernel_size), activation='linear', input_shape=(640, 640, 1)),
        # <== include the base model here
    ])
    return model
Obviously, I cannot just append my_base_model there, because I want to apply the model to each 11x11 square.
Is this possible? If not, is there another way to implement it instead of directly sliding over the 640x640 image with 2 nested for loops?
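
One possible approach, shown here only as a hedged sketch (it is not part of the original post): because the base model is Flatten → Dense → Dense, its trained weights can be copied into an equivalent fully convolutional network, where Dense(32) on a flattened 11x11 patch becomes Conv2D(32, (11, 11)) and Dense(1) becomes a 1x1 convolution. This assumes the Dense layers are named "layer1" and "layer2" as above and that the image has a single channel; the helper name my_fully_convolutional_model is hypothetical.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def my_fully_convolutional_model(trained_base, kernel_size=11):
    # Hypothetical helper: rebuild the trained patch classifier as convolutions
    # so it scores every 11x11 window of an image in one forward pass.
    fcn = keras.Sequential([
        keras.Input(shape=(None, None, 1)),  # any image size, one channel
        # Dense(32) on a flattened 11x11 patch is equivalent to an 11x11 convolution
        layers.Conv2D(32, (kernel_size, kernel_size), activation='relu'),
        # Dense(1) on the 32 features is equivalent to a 1x1 convolution
        layers.Conv2D(1, (1, 1), activation='sigmoid'),
    ])
    # Copy the trained Dense weights into the conv kernels; a row-major reshape
    # matches Flatten's ordering of the 11x11 patch.
    w1, b1 = trained_base.get_layer("layer1").get_weights()  # (121, 32), (32,)
    w2, b2 = trained_base.get_layer("layer2").get_weights()  # (32, 1), (1,)
    fcn.layers[0].set_weights([w1.reshape(kernel_size, kernel_size, 1, 32), b1])
    fcn.layers[1].set_weights([w2.reshape(1, 1, 32, 1), b2])
    return fcn

# A (1, 640, 640, 1) input yields a (1, 630, 630, 1) response map, one score
# per 11x11 window, without any Python-level loops.
fcn = my_fully_convolutional_model(model)
response = fcn.predict(np.random.rand(1, 640, 640, 1).astype("float32"))

With 'valid' padding and stride 1, each output pixel of the sketch above is exactly the base model's prediction for the corresponding 11x11 window, so no explicit sliding loops are needed.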
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow