Predictions made with the trained model differ from predictions made with dnn_regressor.predict()

Date: 2018-09-17 20:39:52

Tags: python tensorflow deep-learning

I trained a simple DNN regression model with TensorFlow; the main code is given below (I have no problems with this part, it is only included as background):

import math
import numpy as np
import tensorflow as tf
from sklearn import metrics

def train_nn_regression_model(my_optimizer, steps, batch_size, hidden_units,
                              training_examples, training_targets,
                              validation_examples, validation_targets):
  periods = 12
  steps_per_period = steps / periods
  my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
  dnn_regressor = tf.estimator.DNNRegressor(
    feature_columns=construct_feature_columns(training_examples),
    hidden_units=hidden_units,
    optimizer=my_optimizer,)
  training_input_fn = lambda: my_input_fn(training_examples, 
                                      training_targets["SR"], 
                                      batch_size=batch_size)
  predict_training_input_fn = lambda: my_input_fn(training_examples, 
                                              training_targets["SR"], 
                                              num_epochs=1, 
                                              shuffle=False)
  predict_validation_input_fn = lambda: my_input_fn(validation_examples, 
                                                validation_targets["SR"], 
                                                num_epochs=1, 
                                                shuffle=False)

  training_rmse = []
  validation_rmse = []
  for period in range (0, periods):
    # Train the model, starting from the prior state.
    dnn_regressor.train(
        input_fn=training_input_fn,
        steps=steps_per_period
    )
    # Take a break and compute predictions.
    training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
    training_predictions = np.array([item['predictions'][0] for item in training_predictions])

    validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
    validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])

    # Compute training and validation loss.
    training_root_mean_squared_error = math.sqrt(
        metrics.mean_squared_error(training_predictions, training_targets))
    validation_root_mean_squared_error = math.sqrt(
        metrics.mean_squared_error(validation_predictions, validation_targets))
    # Occasionally print the current loss.
    print("  period_t %02d : %0.2f"%(period,training_root_mean_squared_error))
    print("  period_v %02d : %0.2f"%(period,validation_root_mean_squared_error))
  print("Model training finished.")

Then I used the trained model with dnn_regressor.predict() to make predictions. My input has six components {a1, a2, a3, a4, a5, a6} and the output is a single value. For a given sample, the predicted value I got was 41.15 and the target value was 41.32. Now I want to take all the weights and biases and rebuild this model by hand as a function in another program. I will not be able to use TensorFlow in that program, so all I have are the weights and biases of each layer. I wrote the following code to extract the weights and biases, plus a simple Python snippet that produces an output from my input (the same input used with dnn_regressor.predict()).

import tensorflow as tf
import numpy as np

# Restore the trained estimator's graph and variables from the checkpoint.
sess = tf.Session()
saver = tf.train.import_meta_graph("model.ckpt-24000.meta")
saver.restore(sess, tf.train.latest_checkpoint('./'))

# Extract the weights (kernel) and biases of each hidden layer and of the output (logits) layer.
l1b = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/hiddenlayer_0/bias:0"))
l1w = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/hiddenlayer_0/kernel:0"))

l2b = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/hiddenlayer_1/bias:0"))
l2w = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/hiddenlayer_1/kernel:0"))

l3b = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/hiddenlayer_2/bias:0"))
l3w = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/hiddenlayer_2/kernel:0"))

l4b = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/hiddenlayer_3/bias:0"))
l4w = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/hiddenlayer_3/kernel:0"))

lb = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/logits/bias:0"))
lw = sess.run(tf.get_default_graph().get_tensor_by_name("dnn/logits/kernel:0"))

# Target: 41.32,  Predicted by tensorflow: 41.15, Predicted by this code: 43.16
input_data = np.array([0.137445,-0.010313,-0.071256,-0.00795,-1.054055,-0.015913])

# Manual forward pass: ReLU (the DNNRegressor default activation) for the hidden layers.
z1 = np.maximum(np.dot(input_data, l1w) + l1b, 0)
z2 = np.maximum(np.dot(z1, l2w) + l2b, 0)
z3 = np.maximum(np.dot(z2, l3w) + l3b, 0)
z4 = np.maximum(np.dot(z3, l4w) + l4b, 0)

# Linear output layer.
result = np.dot(z4, lw) + lb
print(result)
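
One way to rule out a naming or shape mismatch between this manual code and the saved model is to list every variable stored in the checkpoint; a small sketch along these lines (reusing the tensorflow import from the snippet above and assuming the same checkpoint directory):

# Print every variable name and shape stored in the checkpoint, so the tensor
# names and weight dimensions used above can be checked against the saved model.
for name, shape in tf.train.list_variables(tf.train.latest_checkpoint('./')):
  print(name, shape)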

I don't know why the value predicted by DNNRegressor.predict() differs from the one produced by the code I wrote. If I have made some silly mistake in the code, please let me know. Any help is greatly appreciated.

0 Answers:

No answers yet.