Using tf.train.exponential_decay with pre-made estimators?

Time: 2018-03-11 19:27:47

Tags: tensorflow, tensorflow-estimator

I'm trying to use tf.train.exponential_decay with a pre-made estimator, and for some reason this is proving very difficult. Am I missing something here?

Here is my old code, which uses a constant learning_rate:

classifier = tf.estimator.DNNRegressor(
    feature_columns=f_columns,
    model_dir='./TF',
    hidden_units=[2, 2],
    optimizer=tf.train.ProximalAdagradOptimizer(
      learning_rate=0.50,
      l1_regularization_strength=0.001,
    ))

Now I'm trying to add this:

starter_learning_rate = 0.50
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           10000, 0.96, staircase=True)
But now what?

  • estimator.predict() does not accept global_step, so won't it just stay stuck at 0?
  • Even when I pass learning_rate into tf.train.ProximalAdagradOptimizer(), I get an error:

"ValueError: Tensor("ExponentialDecay:0", shape=(), dtype=float32) must be from the same graph as Tensor("dnn/hiddenlayer_0/kernel/part_0:0", shape=(62, 2), dtype=float32_ref)."

Any help is much appreciated. I'm using TF 1.6, by the way.

1 Answer:

Answer 0 (score: 0):

You should build and run the optimizer only when mode == tf.estimator.ModeKeys.TRAIN.

Here is some sample code:

def _model_fn(features, labels, mode, config):

    # ... build the network here, computing `loss` and setting the starter
    # `learning_rate`, plus any `chief_hooks`/`metrics` used in the
    # EstimatorSpec below ...

    # This sketch only covers training; a complete model_fn would also
    # handle the PREDICT and EVAL modes.
    assert mode == tf.estimator.ModeKeys.TRAIN

    global_step = tf.train.get_global_step()
    decay_learning_rate = tf.train.exponential_decay(
        learning_rate, global_step, 100, 0.98, staircase=True)
    optimizer = tf.train.AdagradOptimizer(decay_learning_rate)

    # Run pending update ops (e.g. batch-norm statistics) before each
    # training step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = optimizer.minimize(loss, global_step=global_step)

    return tf.estimator.EstimatorSpec(
        mode, loss=loss, train_op=train_op,
        training_chief_hooks=chief_hooks, eval_metric_ops=metrics)
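
For reference, a minimal sketch of wiring a custom model_fn like this into an estimator; my_input_fn is a hypothetical placeholder for your input pipeline:

estimator = tf.estimator.Estimator(
    model_fn=_model_fn,
    model_dir='./TF')
estimator.train(input_fn=my_input_fn, steps=10000)

Alternatively, you may not need a custom model_fn at all: in TF 1.x the canned estimators' optimizer argument also accepts a callable (worth verifying on your 1.6 build). The callable is invoked inside the estimator's own graph, so the decayed learning rate and the model weights live in the same graph, which avoids the ValueError above. A sketch using the numbers from the question:

classifier = tf.estimator.DNNRegressor(
    feature_columns=f_columns,
    model_dir='./TF',
    hidden_units=[2, 2],
    optimizer=lambda: tf.train.ProximalAdagradOptimizer(
        learning_rate=tf.train.exponential_decay(
            learning_rate=0.50,
            global_step=tf.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96,
            staircase=True),
        l1_regularization_strength=0.001))

Because the lambda is only evaluated inside the estimator's model function, tf.train.get_global_step() picks up the estimator's own global step, so the schedule advances during training instead of staying stuck at 0.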