In a multiple-output regression NN model, how can I train the network if each output is on a different scale?

model = Model(inputs=[x1, x2], outputs=[y1, y2])

model.compile(optimizer='sgd',
              loss=tf.keras.losses.MeanSquaredError())

where y1 ranges over [0, 1], y2 ranges over [0, 100], and x1, x2 range over [0, 1].
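For reference, a minimal runnable version of the model in the question might look like the sketch below; the shared hidden layer and its width are assumptions added only for illustration.

import tensorflow as tf
from tensorflow.keras import layers, Model

# Two scalar inputs, combined into a shared hidden representation
x1 = layers.Input(shape=(1,), name="x1")
x2 = layers.Input(shape=(1,), name="x2")
h = layers.Concatenate()([x1, x2])
h = layers.Dense(32, activation="relu")(h)  # hidden width chosen arbitrarily

# Two regression heads on different scales
y1 = layers.Dense(1, name="y1")(h)  # target in [0, 1]
y2 = layers.Dense(1, name="y2")(h)  # target in [0, 100]

model = Model(inputs=[x1, x2], outputs=[y1, y2])
model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())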



Solution 1:[1]

Normalize the outputs separately. Compute the training samples' statistics first:

import numpy as np

# Per-output mean and standard deviation, from the training targets only
mu = [np.mean(y1_train), np.mean(y2_train)]
std = [np.std(y1_train), np.std(y2_train)]

Then normalize both outputs for the training and validation sets:

y_train, y_val = [y1_train, y2_train], [y1_val, y2_val]
y_train_normalized = [(y_train[i] - mu[i]) / std[i] for i in range(2)]
y_val_normalized = [(y_val[i] - mu[i]) / std[i] for i in range(2)]

After training, to recover predictions on the original scales, simply use:

y_recovered = [y_pred[i] * std[i] + mu[i] for i in range(2)]

Do the same with the inputs x if necessary (using x_train statistics, of course).
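Putting the solution together, the sketch below runs the whole workflow on toy data; the data shapes, the tiny model, and the training settings are all assumptions made only to keep the example self-contained.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Toy data on the scales described in the question (shapes are assumptions)
rng = np.random.default_rng(0)
x1_train, x2_train = rng.random((1000, 1)), rng.random((1000, 1))
y1_train = rng.random((1000, 1))          # roughly in [0, 1]
y2_train = rng.random((1000, 1)) * 100.0  # roughly in [0, 100]

# Per-output statistics from the training targets only
mu = [np.mean(y1_train), np.mean(y2_train)]
std = [np.std(y1_train), np.std(y2_train)]

y_train = [y1_train, y2_train]
y_train_normalized = [(y_train[i] - mu[i]) / std[i] for i in range(2)]

# Same illustrative two-output model as sketched above
x1_in = layers.Input(shape=(1,))
x2_in = layers.Input(shape=(1,))
h = layers.Dense(32, activation="relu")(layers.Concatenate()([x1_in, x2_in]))
model = Model(inputs=[x1_in, x2_in],
              outputs=[layers.Dense(1)(h), layers.Dense(1)(h)])
model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())

# Train on the normalized targets
model.fit([x1_train, x2_train], y_train_normalized, epochs=5, verbose=0)

# Predictions come back on the normalized scale; undo the transform per output
y_pred = model.predict([x1_train, x2_train], verbose=0)
y_recovered = [y_pred[i] * std[i] + mu[i] for i in range(2)]

The key point is that mu and std are computed from the training targets only and then reused unchanged for the validation data and for un-normalizing predictions.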

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: bui