Why does my model give NaN and inf loss?

My model is giving inf and NaN for both loss and val_loss. My dataset is made up of 4 columns. First, I get the following dtypes:

BDADDR    uint64
CLK       uint64
dtype: object
Z1    uint64
Z2    uint64
dtype: object

My inputs are BDADDR and CLK, and I want to predict the Z1 and Z2 values. My code is:

To load the data and tensors from the dataset:

import pandas as pd
import tensorflow as tf
from matplotlib import pyplot

def cargar_datos(numero):
  pd.set_option('display.max_columns', None)
  muestra = pd.read_csv("ML/diccionario/partes_original/muestra{}.csv".format(numero))

  muestras_objetivo = muestra.copy()
  
  muestras_objetivo.pop("BDADDR")
  muestras_objetivo.pop("CLK")
  muestra.pop("Z1")
  muestra.pop("Z2")
  muestra["CLK"] = tf.constant(muestra["CLK"],dtype=tf.uint64)
  muestra["BDADDR"] = tf.constant(muestra["BDADDR"],dtype=tf.uint64)
  print(muestra.dtypes)
  print(muestras_objetivo.dtypes)

  return [muestra,muestras_objetivo]

And the model is:

modelo = tf.keras.Sequential([
    tf.keras.layers.Dense(1000),
    tf.keras.layers.Dense(2)
])

informacion = cargar_datos(1)
muestra_entrenamiento = informacion[0]
objetivo = informacion[1]

modelo.compile(loss='mean_squared_error', optimizer=tf.optimizers.Adam(clipnorm=0.001), metrics=['accuracy'])
history = modelo.fit(muestra_entrenamiento,objetivo,epochs=1,validation_split=0.30)

informacion = cargar_datos(2)
muestra_entrenamiento = informacion[0]
objetivo = informacion[1]

pyplot.title('Loss / Mean Squared Error')
pyplot.plot(history.history['loss'],label='train')
pyplot.plot(history.history['val_loss'],label='test')
pyplot.legend()
pyplot.show()
modelo.summary()
print("-------------")
print(modelo.layers[0].weights)

From what I found on the internet, this is probably caused by too high a learning rate, but I've set it to, for example, 0.000000001 and I still get the same error. I printed all the weights and got this:

array([[-0.01891549, -0.00951883, -0.06897242, ..., -0.04570342,
        -0.01861915,  0.07007545],
       [ 0.06779067,  0.05835511,  0.05296053, ..., -0.05500597,
         0.00897066, -0.00131878]], dtype=float32)>, <tf.Variable 'dense/bias:0' shape=(1000,) dtype=float32, numpy=
array([ 18.526419  , -18.518377  , -18.580936  ,  18.338928  ,
       -18.552141  , -18.511845  , -18.072502  ,   3.828051  ,
        18.565273  , -18.412748  ,  18.552223  , -18.567282  ,
       -18.411982  ,  18.556618  , -18.558344  ,  18.30571   ,
        17.477098  ,  18.542046  ,  18.50067   , -18.511942  ,
       -15.61212   ,  18.541609  ,  18.5428    , -18.56416   ,
       -18.490417  , -18.583736  ,  15.3581705 , -18.528591  ,
        18.434353  , -18.578178  , -18.360697  ,  16.660086  ,
       -18.508707  , -18.476948  ,  -7.9554067 ,  18.39606   ,
       -18.560163  , -16.078005  ,  18.463202  , -18.582962  ,
        18.305859  , -18.553257  ,  18.547575  ,  12.988035  ,
        18.492657  ,  18.31939   ,  18.469078  , -18.016933  ,
       -17.732153  ,  17.435188  , -18.470516  ,  18.542892  ,
        17.598087  , -18.541414  , -18.516933  , -18.561699  ,
        13.897891  , -18.54232   ,  18.150578  , -17.73946   ,
        10.504892  ,  17.14249   ,   6.9288673 ,   0.11944906,
       -18.571766  ,  18.580038  , -18.08019   , -17.806236  ,

Shouldn't these weights be between 0 and 1?
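For reference on that last question, I believe Keras' Dense layers default to glorot_uniform initialization, which for my first layer (2 input features, 1000 units) draws kernel weights from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)), so they are small but not confined to [0, 1]; the biases start at 0, so the ±18 bias values above look like the result of divergence rather than initialization:

```python
import math

# Keras Dense defaults to glorot_uniform: kernel ~ U(-limit, limit),
# limit = sqrt(6 / (fan_in + fan_out)); biases start at 0.
fan_in, fan_out = 2, 1000        # my first layer: 2 features -> 1000 units
limit = math.sqrt(6.0 / (fan_in + fan_out))
print(limit)   # ~0.0774, matching the +/-0.07-ish kernel values above
```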



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
