How to fix: TensorFlow NN regression averages the training examples

Date: 2019-02-01 19:26:19

Tags: python tensorflow

I'm building a neural-network regression project in TensorFlow, but I'm very new to the package. As a starting point, I'm just trying to go from 2 input features to 2 output predictions. When I give the net 1 training example, it learns as I expect. But when I give it multiple examples, it seems to average over all of them: the prediction comes out (nearly) the same for every input, even though the targets differ.

For example, when I use these values for X_train and y_train:

import numpy as np

X_train = np.array([[1., 2.], [3., 4.]])
y_train = np.array([[5., 6.], [7., 8.]])
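For context, each row here is one example, so both arrays have shape (2, 2): two examples, each with 2 input features and 2 target values.

```python
import numpy as np

X_train = np.array([[1., 2.], [3., 4.]])
y_train = np.array([[5., 6.], [7., 8.]])

# Two examples, each with 2 input features and 2 targets
print(X_train.shape, y_train.shape)  # (2, 2) (2, 2)
```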

and run the code below, trying to overfit the net to these 2 examples,

import tensorflow as tf

def neural_net_model(X_data, input_dim, output_dim, hidden_dim):
    # 3 sigmoid hidden layers of width hidden_dim, linear output layer
    W_1 = tf.Variable(tf.random_uniform([input_dim, hidden_dim]), name = 'W1')
    b_1 = tf.Variable(tf.zeros([hidden_dim]), name = 'b1')
    layer_1 = tf.add(tf.matmul(X_data, W_1), b_1)
    layer_1 = tf.nn.sigmoid(layer_1)

    W_2 = tf.Variable(tf.random_uniform([hidden_dim, hidden_dim]), name = 'W2')
    b_2 = tf.Variable(tf.zeros([hidden_dim]), name = 'b2')
    layer_2 = tf.add(tf.matmul(layer_1, W_2), b_2)
    layer_2 = tf.nn.sigmoid(layer_2)

    W_3 = tf.Variable(tf.random_uniform([hidden_dim, hidden_dim]), name = 'W3')
    b_3 = tf.Variable(tf.zeros([hidden_dim]), name = 'b3')
    layer_3 = tf.add(tf.matmul(layer_2, W_3), b_3)
    layer_3 = tf.nn.sigmoid(layer_3)

    W_O = tf.Variable(tf.random_uniform([hidden_dim, output_dim]), name = 'W0')
    b_O = tf.Variable(tf.zeros([output_dim]), name = 'b0')
    output = tf.add(tf.matmul(layer_3, W_O), b_O)

    return output

learning_rate = 0.1
epochs = 200
batch_size = 1

lx = X_train.shape[1]
ly = y_train.shape[1]
lh = 20 # hidden layer size

xs = tf.placeholder('float')  # lx features
ys = tf.placeholder('float')  # ly outputs

output = neural_net_model(xs, lx, ly, lh)
cost = tf.losses.mean_squared_error(ys, output)

train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
saver = tf.train.Saver()

###################### Initialize, Accuracy and Run #################


c_train = []

# run
with tf.Session() as sess:
  init_op = tf.global_variables_initializer()
  sess.run(init_op)
  total_batch = int(len(y_train) / batch_size)
  for epoch in range(epochs):
    avg_cost = 0
    for i in range(total_batch):
      batch_x = X_train[i * batch_size:min(i * batch_size + batch_size, len(X_train)), :]
      batch_y = y_train[i * batch_size:min(i * batch_size + batch_size, len(y_train)), :]
      sess.run([cost, train], feed_dict={xs: batch_x, ys: batch_y})

    pred = sess.run(output, feed_dict={xs:X_train})
    c_train.append(sess.run(cost, feed_dict={xs:X_train,ys:y_train}))

    if epoch % 50 == 0:
      print('Epoch :',epoch,'Cost Train:',c_train[epoch])

  pred = sess.run(output, feed_dict={xs:X_train})

  for i in range(2):
    print("-------- Example " + str(i) + "----------")
    print("X:")
    print(X_train[i, :])
    print("y:")
    print(y_train[i, :])
    print("y_hat:")
    print(pred[i, :])

I get the following output:

('Epoch :', 0, 'Cost Train:', 29.407661)
('Epoch :', 50, 'Cost Train:', 1.0001459)
('Epoch :', 100, 'Cost Train:', 1.0000317)
('Epoch :', 150, 'Cost Train:', 1.0000396)
-------- Example 0----------
X:
[1. 2.]
y:
[5. 6.]
y_hat:
[6.0066676 7.0067225]
-------- Example 1----------
X:
[3. 4.]
y:
[7. 8.]
y_hat:
[6.006668  7.0067234]
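For what it's worth, both y_hat rows above sit almost exactly on the column-wise mean of y_train, and the cost plateau of ~1.0 in the training log is precisely the MSE that a constant prediction at that mean would produce (a quick numpy check, not part of the code above):

```python
import numpy as np

y_train = np.array([[5., 6.], [7., 8.]])

# Column-wise mean of the targets -- compare with the y_hat values above
mean_pred = y_train.mean(axis=0)
print(mean_pred)  # [6. 7.]

# MSE of always predicting that mean -- compare with the cost plateau of ~1.0
print(np.mean((y_train - mean_pred) ** 2))  # 1.0
```

which is consistent with the "averaging" behavior described above.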

What am I doing wrong? Thanks a lot in advance for your help!

0 Answers:
