Tensorflow: getting the correct NN accuracy

Time: 2017-02-23 09:47:12

Tags: python tensorflow

I've been stuck on this for too long and need some help (I'm quite new to tensorflow). I adapted an MNIST example to my own data, but even after 2 epochs it still reports 100% accuracy. My X (analogous to MNIST) is an [18,1] vector and y is a float32 value. Variables (a rough sketch of how the train/test arrays get filled follows this block):

import numpy as np
import tensorflow as tf

n_nodes_hl1 = 100
n_nodes_hl2 = 100
n_nodes_hl3 = 50
x = tf.placeholder(shape=[None, 18], dtype=tf.float32)
y = tf.placeholder(shape=[None, 1],  dtype=tf.float32)
x_vals_train = np.array([])
y_vals_train = np.array([])
x_vals_test = np.array([])
y_vals_test = np.array([])
loss_vec = []
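
For completeness, the train/test arrays above are filled roughly like this (a simplified sketch; load_my_data is a placeholder for my actual loading code):

# Rough sketch of how the train/test arrays get filled (simplified).
# load_my_data() is a stand-in for my real loader; it is assumed to return
# x_all with shape [num_samples, 18] and y_all with shape [num_samples].
x_all, y_all = load_my_data()

train_indices = np.random.choice(len(x_all), int(round(len(x_all) * 0.8)), replace=False)
test_indices = np.array(list(set(range(len(x_all))) - set(train_indices)))

x_vals_train = x_all[train_indices]
y_vals_train = y_all[train_indices]
x_vals_test = x_all[test_indices]
y_vals_test = y_all[test_indices]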

My model:

def neural_net_model(data):
  hidden_1_layer = {'weights':tf.Variable(tf.random_normal([18,n_nodes_hl1])),
                    'biases':tf.Variable(tf.random_normal([n_nodes_hl1]))}
  hidden_2_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl1,n_nodes_hl2])),
                    'biases':tf.Variable(tf.random_normal([n_nodes_hl2]))}
  hidden_3_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl2,n_nodes_hl3])),
                    'biases':tf.Variable(tf.random_normal([n_nodes_hl3]))}

  output_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl3,1])),
    'biases':tf.Variable(tf.random_normal([1]))}

  l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']),hidden_1_layer['biases'])
  l1 = tf.nn.relu(l1)
  l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']),hidden_2_layer['biases'])
  l2 = tf.nn.relu(l2)
  l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']),hidden_3_layer['biases'])
  l3 = tf.nn.relu(l3)

  output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']

  return output
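
Note that the output layer has a single unit, so the model produces one continuous value per example. A quick shape check (just building the graph, no session needed; uses the x placeholder above):

# Sanity check of the model's output shape.
sample_output = neural_net_model(x)
print(sample_output.get_shape())   # (?, 1) -- one continuous value per example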

The session:

def train_neural_network(x):
  prediction = neural_net_model(x)
  cost = tf.reduce_mean(tf.abs(y - prediction))
  optimizer = tf.train.AdamOptimizer(0.01).minimize(cost)

  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      for i in range(10):
          temp_loss = 0

          rand_index = np.random.choice(len(x_vals_train), 50)
          rand_x = x_vals_train[rand_index]
          rand_y = np.transpose([y_vals_train[rand_index]])
          _, temp_loss = sess.run([optimizer, cost], feed_dict={x: rand_x, y: rand_y})

          if (i+1)%100==0:
            print('Generation: ' + str(i+1) + '. Loss = ' + str(temp_loss))

      # evaluate accuracy
      correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y,1))
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
      print "accuracy %.5f'" % accuracy.eval(feed_dict={x: x_vals_test, y: np.transpose([y_vals_test])})

The question is basically: why do I always get 100% accuracy, which is obviously wrong? Thanks in advance!
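
For reference, here is a minimal sketch that isolates just the accuracy computation with made-up values (assuming the [None, 1] shapes above):

# Isolated check of the accuracy computation with dummy predictions/labels.
# tf.argmax along axis 1 of a [batch, 1] tensor is always 0, so
# correct_prediction compares as True for every row regardless of the values.
dummy_pred = tf.constant([[0.3], [2.7], [-1.0]])
dummy_y = tf.constant([[1.0], [0.0], [5.0]])
correct = tf.equal(tf.argmax(dummy_pred, 1), tf.argmax(dummy_y, 1))
acc = tf.reduce_mean(tf.cast(correct, "float"))
with tf.Session() as sess:
    print(sess.run(acc))   # prints 1.0 even though none of the values match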

0 Answers:

No answers