Error with image segmentation with a U-Net-like architecture: mask does not work

I've tried to run the code from this website:

https://keras.io/examples/vision/oxford_pets_image_segmentation/

The model does train and I obtain good results, but the last part (actually displaying the generated mask) does not work in Jupyter Notebook: it says the kernel is overloaded, and I don't know why. I tried it on multiple computers, but the result is always the same.

I kept the same code; I just changed num_classes to 256 and reshaped the input images to 256x256.
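For scale, the array returned by model.predict grows linearly with num_classes: one float32 probability per class, per pixel, per image. A rough back-of-the-envelope with those settings (the validation-set size of 1,000 images is an assumption; adjust to your split):

```python
# Rough size of the full prediction array returned by model.predict.
num_val_images = 1000   # ASSUMED validation-set size; adjust to your split
height = width = 256    # input resolution after the reshape
num_classes = 256       # the value num_classes was changed to

bytes_per_float32 = 4
total_bytes = num_val_images * height * width * num_classes * bytes_per_float32
print(f"~{total_bytes / 1024**3:.1f} GiB")  # prints "~62.5 GiB"
```

That is far more than typical RAM, which would be consistent with the kernel dying during the prediction step.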

Here is the part that does not work:

# Imports used in this snippet (same as in the tutorial code above)
import numpy as np
import PIL
from IPython.display import Image, display
from tensorflow import keras
from tensorflow.keras.preprocessing.image import load_img

# Generate predictions for all images in the validation set

val_gen = OxfordPets(batch_size, img_size, val_input_img_paths, val_target_img_paths)
val_preds = model.predict(val_gen)


def display_mask(i):
    """Quick utility to display a model's prediction."""
    mask = np.argmax(val_preds[i], axis=-1)
    mask = np.expand_dims(mask, axis=-1)
    img = PIL.ImageOps.autocontrast(keras.preprocessing.image.array_to_img(mask))
    display(img)


# Display results for validation image #10
i = 10

# Display input image
display(Image(filename=val_input_img_paths[i]))

# Display ground-truth target mask
img = PIL.ImageOps.autocontrast(load_img(val_target_img_paths[i]))
display(img)

# Display mask predicted by our model
display_mask(i)  # Note that the model only sees inputs at 150x150.
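For reference, a batch-at-a-time variant of the prediction step would avoid holding the full float32 probability volume for every validation image at once. This is only a sketch, assuming val_gen yields (inputs, targets) batches the way the OxfordPets Sequence in the tutorial does:

```python
import numpy as np

def predict_mask_for_batch(model, batch_inputs):
    """Predict one batch and keep only the compact argmax mask.

    model.predict returns shape (batch, H, W, num_classes) in float32;
    the argmax collapses the class axis to a (batch, H, W) uint8 mask,
    which is num_classes * 4 times smaller.
    """
    preds = model.predict(batch_inputs)
    return np.argmax(preds, axis=-1).astype("uint8")
```

Calling this per batch (e.g. `predict_mask_for_batch(model, val_gen[0][0])`) instead of `model.predict(val_gen)` keeps only one batch of probabilities in memory at a time.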

I would appreciate any help, and thank you in advance for your time.



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
