Python CNN: why do I get different results on different desktops, and what can I do to get the same result on different machines?

I run the same code and the same dataset to train a CNN (Convolutional Neural Network), on CPU only (no GPU). I have set the random seed, so I get the same result every time I run the code on a single machine.

seed_value = 0

import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import random
random.seed(seed_value)

import numpy as np
np.random.seed(seed_value)

But I do not know why the result is different when I run the same code on a different machine. What can I do? The code is as follows:

def CNNGETPREDICTVAL(train_xx, train_yy, test_xx, inner_fac_len, loop_lr, nvl_val_1, nvl_val_2, nvl_val_3, loop_dst_num):
    train_xx = train_xx.drop('date_time', axis=1)
    test_xx = test_xx.drop(['date_time', 'key_0'], axis=1)

    x_train = train_xx.values.reshape(-1, 1, inner_fac_len, 1)
    y_train = keras.utils.np_utils.to_categorical(train_yy, num_classes=3)
    x_test = test_xx.values.reshape(-1, 1, inner_fac_len, 1)

    model = keras.models.Sequential()
    init_info = keras.initializers.RandomNormal(mean=0.0, stddev=0.05, seed=2021)
    model.add(keras.layers.Conv2D(nvl_val_1, (1, 3), activation='relu', padding='same',
                                  input_shape=(1, inner_fac_len, 1), kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))

    model.add(keras.layers.Conv2D(nvl_val_2, (1, 3), activation='relu', padding='same', kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))

    model.add(keras.layers.Conv2D(nvl_val_3, (1, 3), activation='relu', padding='same', kernel_initializer=init_info))
    model.add(keras.layers.MaxPooling2D(pool_size=(1, 3)))

    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(loop_dst_num, activation='relu', kernel_initializer=init_info))
    model.add(keras.layers.Dense(3, activation='softmax', kernel_initializer=init_info))

    my_optimizer = tf.optimizers.Adam(learning_rate=loop_lr)
    model.compile(optimizer=my_optimizer, loss='mse')

    model.fit(x_train, y_train, batch_size=512, epochs=10)
    result = model.predict(x_test, batch_size=512, verbose=0)

    return result


Solution 1:[1]

You are using Keras, so you should also fix the random seed of its backend, in addition to the Python, `random`, and `numpy` seeds. If you use TensorFlow as your backend (it is TensorFlow by default, but it may be Theano):

# Seed value
# Apparently you may use different seed values at each stage
seed_value= 0

# 1. Set `PYTHONHASHSEED` environment variable at a fixed value
import os
os.environ['PYTHONHASHSEED']=str(seed_value)

# 2. Set `python` built-in pseudo-random generator at a fixed value
import random
random.seed(seed_value)

# 3. Set `numpy` pseudo-random generator at a fixed value
import numpy as np
np.random.seed(seed_value)

# 4. Set the `tensorflow` pseudo-random generator at a fixed value
import tensorflow as tf
tf.random.set_seed(seed_value)
# for earlier versions (TensorFlow 1.x):
# tf.set_random_seed(seed_value)
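
Cross-machine differences can also come from non-deterministic op implementations, data shuffling, and threading, not only from the seeds. The sketch below goes beyond the answer above and assumes a recent TensorFlow (2.7+ for `tf.keras.utils.set_random_seed`, 2.8+ for `enable_op_determinism`); place it right after the seeding code:

# 5. Optional (not part of the original answer): request deterministic ops
#    and single-threaded execution to reduce machine-to-machine variation.
tf.keras.utils.set_random_seed(seed_value)                # seeds python, numpy and tf in one call (TF 2.7+)
tf.config.experimental.enable_op_determinism()            # disallow non-deterministic op kernels (TF 2.8+)
tf.config.threading.set_intra_op_parallelism_threads(1)   # avoid thread-count dependent reduction order
tf.config.threading.set_inter_op_parallelism_threads(1)

Even with all seeds fixed, two machines can still diverge slightly at the floating-point level if they run different TensorFlow/NumPy/BLAS builds or CPU instruction sets, so pinning the library versions on both machines helps as well.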

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 K0mp0t