How to test my trained TensorFlow model

I currently have a regression model that tries to predict a value based on 25 other values.

Here is the code I currently have:

import tensorflow as tf
import numpy as np

learning_rate = 0.01
training_epochs = 1000
display_step = 50
m = 100  # number of training examples

# Random toy data: 100 examples with 25 features each.
X = np.random.randint(5, size=(100, 25)).astype('float32')
y_data = np.random.randint(5, size=(100, 1)).astype('float32')

W = tf.Variable(tf.zeros([25, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.add(tf.matmul(X, W), b)

loss = tf.reduce_sum(tf.square(y - y_data)) / (2 * m)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)
for i in range(training_epochs):
    _, current_loss = sess.run([train, loss])
    if i % display_step == 0:
        print("epoch:", i, "loss:", current_loss)


I understand that right now these values are all random, so the accuracy would not be very good anyway, but I just want to know how to make a test set and find the accuracy of the predictions.

Solution 1:

Typically, you split your training set into two pieces: roughly 2/3 for training and 1/3 for testing (opinions vary on the proportions). Train your model with the first set. Check the training accuracy (run the training set back through the model to see how many it gets right).

Now, run the remainder (the test set) through the model and check how well the predictions match the actual results. What "find the accuracy" means depends on what sort of predictions you're making: classification vs. scoring, binary vs. multi-class, discrete vs. continuous outputs, etc. For a regression model like this one, a common choice is to report mean squared error on the test set rather than classification accuracy.


This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Prune