I'm getting this error:

```
InvalidArgumentError: cannot compute MatMul as input #1 (zero-based) was expected to be a double tensor but is a float tensor [Op:MatMul]
```
Here is my code:
```python
training_loss = []
val_loss = []
for epoch in range(1000):
    loss, v_loss = train(X_train1, y_train1, X_val1, y_val1)
    training_loss.append(loss)
    val_loss.append(v_loss)
    print('Epoch %d: Training Loss = %.4f, Validation Loss = %.4f' % (epoch, float(loss), float(v_loss)))

print('Final weights after 1000 epochs:')
print('###############################################################################')
print(weights)
print('Final bias after 1000 epochs:')
print('###############################################################################')
print(bias)
```
I tried casting X and y to float32, like below, but I still get the error.

```python
Y = Y.astype('float32')
X_train = X_train.astype('float32')
```
I was expecting this to print the final weights and bias.
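For context, this kind of mismatch can be reproduced in isolation. The shapes and variable names below are made up, but the cause is likely the same: NumPy creates float64 ("double") arrays by default, so `tf.matmul` fails whenever one operand is float64 and the other float32. A minimal sketch:

```python
import numpy as np
import tensorflow as tf

# NumPy defaults to float64, so an un-cast array is a "double" tensor.
X = np.random.rand(4, 3)                                      # float64
weights = tf.Variable(np.random.rand(3, 1).astype('float32'))  # float32

try:
    tf.matmul(X, weights)  # dtypes disagree -> InvalidArgumentError
except tf.errors.InvalidArgumentError as e:
    print('mismatch raised:', 'MatMul' in str(e))

# Casting BOTH operands to the same dtype resolves it.
X32 = X.astype('float32')
out = tf.matmul(X32, weights)
print(out.dtype)
```

So the cast has to reach every tensor that participates in the multiply, not just `X_train` and `Y`; if `weights`/`bias` (or `X_train1`) are built from un-cast NumPy data, they stay float64 and the error persists.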
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow