ValueError: Layer "model" expects 4 input(s), but it received 1 input tensors

I have been trying to build a multi-input CNN for image feature extraction. It needs to take 4 images as inputs, but something is not working right. Since the dataset is large and doesn't fit into memory, I've written a custom Sequence that loads the images 4 at a time. The code so far is essentially copied from an article that explained a 2-input CNN, modified to accept 4 images instead of 2, just to see whether it gets as far as the training step. I should say that I have no deep learning experience, though I do have some previous ML experience.

The problem is that I consistently get this error:

ValueError: Layer "model" expects 4 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=float32>]

I have tried getting a batch from the custom Sequence to check the shapes, and they look correct to me: the X batch has shape (4, 640, 640, 3) and y has shape (49,), which I think should be fine, so I have no idea what could be wrong. Any help would be much appreciated!
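For reference, this is roughly how I pulled a batch out of the custom Sequence to check it (reconstructed from memory, so the exact variable names may differ slightly):

gen = CustomSeq(X_train, y_train, 1)
X_batch, y_batch = gen[0]
print(X_batch.shape)  # prints (4, 640, 640, 3)
print(y_batch.shape)  # prints (49,)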

EDIT: I have tried running it on Google Colab, if that makes any difference.

CNN + training code:

import tensorflow as tf

input_shape = (640, 640, 3)
num_classes = 49

def create_convolution_layers(input_img):
    model = tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=input_shape)(input_img)
    model = tf.keras.layers.LeakyReLU(alpha=0.1)(model)
    model = tf.keras.layers.MaxPooling2D((2, 2),padding='same')(model)
    model = tf.keras.layers.Dropout(0.25)(model)

    model = tf.keras.layers.Conv2D(64, (3, 3), padding='same')(model)
    model = tf.keras.layers.LeakyReLU(alpha=0.1)(model)
    model = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),padding='same')(model)
    model = tf.keras.layers.Dropout(0.25)(model)
  
    model = tf.keras.layers.Conv2D(128, (3, 3), padding='same')(model)
    model = tf.keras.layers.LeakyReLU(alpha=0.1)(model)
    model = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),padding='same')(model)
    model = tf.keras.layers.Dropout(0.4)(model)
  
    return model

input_0 = tf.keras.layers.Input(shape=input_shape)
model_0 = create_convolution_layers(input_0)

input_1 = tf.keras.layers.Input(shape=input_shape)
model_1 = create_convolution_layers(input_1)

input_2 = tf.keras.layers.Input(shape=input_shape)
model_2 = create_convolution_layers(input_2)

input_3 = tf.keras.layers.Input(shape=input_shape)
model_3 = create_convolution_layers(input_3)

conv = tf.keras.layers.concatenate([model_0, model_1, model_2, model_3])

conv = tf.keras.layers.Flatten()(conv)

dense = tf.keras.layers.Dense(128)(conv)
dense = tf.keras.layers.LeakyReLU(alpha=0.1)(dense)
dense = tf.keras.layers.Dropout(0.5)(dense)

output = tf.keras.layers.Dense(num_classes, activation='softmax')(dense)

model = tf.keras.models.Model(inputs=[input_0, input_1, input_2, input_3], outputs=[output])

opt = tf.keras.optimizers.Adam()

model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])

#model.summary()

training_generator = CustomSeq(X_train, y_train, 1)

model.fit(training_generator,
          steps_per_epoch=len(X_train),
          shuffle=False,
          initial_epoch=0,
          use_multiprocessing=False,
          max_queue_size=10,
          workers=1,
          epochs=1,
          verbose=1)
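
For completeness, the model side can be checked like this to confirm it really is built with four separate inputs (just a sanity check I added, not part of the article's code):

print(len(model.inputs))                       # should print 4
print([tuple(i.shape) for i in model.inputs])  # four entries of (None, 640, 640, 3)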

Custom sequence code:

import tensorflow as tf
import numpy as np
import math
import pandas as pd
import os
from random import shuffle

class CustomSeq(tf.keras.utils.Sequence):
    """One batch per sample: a stack of the images found in the sample's folder, plus its one-hot label."""

    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __get_output(self, label, num_classes):
        # one-hot encode the label
        return tf.keras.utils.to_categorical(label, num_classes=num_classes)

    def __get_input(self, path):
        # load an image and rescale pixel values to [0, 1]
        image = tf.keras.preprocessing.image.load_img(path)
        image_arr = tf.keras.preprocessing.image.img_to_array(image)
        return image_arr / 255.

    def __getitem__(self, idx):
        # batch_size is 1 here, so take the single entry out of the slice
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size][0]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size][0]

        entry_path = "/tmp/data/" + batch_x + "/"

        # load every image in the sample's folder and stack them into a single array
        X_batch = np.asarray([self.__get_input(os.path.join(entry_path, filename)) for filename in os.listdir(entry_path)])
        return (X_batch, self.__get_output(batch_y, 49))  # 49 is hardcoded here for testing purposes
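
For context on the layout that __getitem__ assumes: each entry of X_train is the name of a folder under /tmp/data/ that contains the four images of one sample. Listing one folder is a quick way to confirm that (just a sanity check on the first sample):

import os
sample_dir = os.path.join("/tmp/data", X_train[0])
print(os.listdir(sample_dir))  # I expect exactly four image files per folder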

