CreateML MLModel works on playground UI but not in app

I'm working on a machine learning app that classifies hand-drawn digits. I made a model using CreateML that supposedly has 100% accuracy (I'll admit my sample size was only about 50 images per digit). When I run it in my app, however, it does not work. To rule out a problem with my own app, I downloaded Apple's Vision+CoreML example Xcode project and replaced the MobileNet classifier with my own model. I loaded in the images my app had saved to my phone, and the classifications were still inaccurate. What makes this interesting is that when I test the exact same images in the CreateML UI on the playground, where you can drop in test images, the classification works.

TL;DR: The image classification works in the CreateML Live View in a playground, but not in an exact copy of Apple's Vision+CoreML example project.

(Images omitted: an example hand-drawn digit I tried to classify, the app's classification results for a 7 and a 5, and the playground's results for the same 7 and 5.)



Solution 1:[1]

I had a similar issue for days. The problem is that CreateML may create the neural network with a BGR input format, while in the Xcode project the colorSpace works in RGB. You can test your model in Python with the coremltools and PIL libraries.

Diagnosing The Problem

Get the metadata of your model

import coremltools
from PIL import Image

# Load your model.
mlmodel = coremltools.models.MLModel('Path/To/Your/Model.mlmodel')

# Print the model's metadata; you will see the input colorSpace.
print(mlmodel)

The input might look like this:

input {
  name: "image"
  shortDescription: "Input image to be classified"
  type {
    imageType {
      width: 299
      height: 299
      colorSpace: BGR
      imageSizeRange {
        widthRange {
          lowerBound: 299
          upperBound: -1
        }
        heightRange {
          lowerBound: 299
          upperBound: -1
        }
      }
    }
  }
}
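If you would rather not scan the printed metadata by eye, here is a minimal sketch that reads the input color space directly from the model spec. It reuses the mlmodel object loaded above; the enum-name lookup through FeatureTypes_pb2 is just one way to turn the raw value into GRAYSCALE, RGB, or BGR.

from coremltools.proto import FeatureTypes_pb2

# Report the color space of every image input in the model spec.
spec = mlmodel.get_spec()
for inp in spec.description.input:
    if inp.type.WhichOneof('Type') == 'imageType':
        cs = inp.type.imageType.colorSpace
        # Map the enum value to its name: GRAYSCALE, RGB, or BGR.
        print(inp.name, FeatureTypes_pb2.ImageFeatureType.ColorSpace.Name(cs))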

Convert the color space of your input

import numpy as np

# Load the image and give it four channels (R, G, B, A).
img = Image.open("Path/To/Your/Image")
img = img.convert("RGBA")

# Swap the red and blue channels: RGBA -> BGRA.
data = np.array(img)
red, green, blue, alpha = data.T
data = np.array([blue, green, red, alpha])
data = data.transpose()

# Rebuild a PIL image from the channel-swapped array.
PIL_image = Image.fromarray(data)

Predict with your model using the converted image

print(str(mlmodel.predict({'image': PIL_image})) + '\n')

This time your predictions should be correct.
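As a quick sanity check, here is a small sketch under the same assumptions as the session above (the input is named 'image', as in the metadata): predict on both the unswapped and the swapped image. If only the swapped one classifies correctly, the model really does expect BGR input.

# Compare predictions for the original and channel-swapped images.
original = Image.open("Path/To/Your/Image").convert("RGBA")
print('original:', mlmodel.predict({'image': original}))
print('swapped :', mlmodel.predict({'image': PIL_image}))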

My Solution

Unfortunately I had to give up on CreateML. On the app side I tried converting the color space in the pixel buffer, and even imported the OpenCV library to convert the color space by casting UIImage to cv::Mat and cv::Mat back to UIImage, but none of it worked for me. I solved my problem with another easy ML creation platform from Apple called Turi Create. You have to use Python to interact with this API, but the documentation is very clear and the ML templates are the same as CreateML's. This API is better than CreateML because you can interact with your model before and after training, whereas CreateML can be a very closed box; even with coremltools you cannot interact with it much. Turi Create is very accessible and easy for everyone, and there are really good code examples and scenarios in its documentation. A sketch of that workflow follows.
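For illustration, here is a minimal sketch of the Turi Create workflow described above. The folder layout (one subfolder per digit label under training_images/) and the output file name are assumptions, not part of the original answer.

import turicreate as tc

# Load the training images; with_path=True keeps each file's path in the SFrame.
data = tc.image_analysis.load_images('training_images/', with_path=True)

# Derive the label from the containing folder name (assumed layout).
data['label'] = data['path'].apply(lambda p: p.split('/')[-2])

# Train an image classifier and check held-out accuracy.
train, test = data.random_split(0.8)
model = tc.image_classifier.create(train, target='label')
print(model.evaluate(test)['accuracy'])

# Export to Core ML for use in the app.
model.export_coreml('DigitClassifier.mlmodel')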

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution sources
[1] Solution 1: Intout