TensorFlow coding optimization: how can this code be implemented more efficiently?

Asked: 2018-09-20 18:18:59

Tags: tensorflow

I have an outer loop that trains my model and updates x. On every outer iteration, however, an inner model has to be built and trained, and that inner model needs the current outer iteration's value of x.

The general framework looks like this:

    with tf.Session() as sess:
        # do some initial computation
        x = ......

        for i in range(iters):
            loss = func(x)  # build the loss function for the current x
            train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)
            sess.run(tf.global_variables_initializer())

            for j in range(train_steps):
                per_loss = sess.run(loss)
                sess.run(train_op)

            # update x
            x = ......

This implementation is very slow, so I decided to use a placeholder:

    x_placeholder = tf.placeholder(tf.float64, ....)
    loss = func(x_placeholder)
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)

    with tf.Session() as sess:
        # do some initial computation
        x = ......

        for i in range(iters):
            sess.run(tf.global_variables_initializer())

            for j in range(train_steps):
                per_loss = sess.run([train_op, loss], feed_dict={x_placeholder: x})

            # update x
            x = ......

However, this gives me the following error:

    raise ValueError("No variables to optimize.")
    ValueError: No variables to optimize.

It is raised when the line train_op = tf.train.AdamOptimizer(1e-4).minimize(loss) is executed.
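That error is raised whenever the loss handed to minimize() does not depend on any trainable tf.Variable, which suggests that func creates no variables at graph-construction time (or creates them somewhere the optimizer cannot see). A minimal reproduction, with a toy quadratic loss standing in for func, written with tf.compat.v1 so it also runs under TensorFlow 2 (on TensorFlow 1.x, plain tf behaves the same):

```python
import tensorflow.compat.v1 as tf  # on TF 1.x, `import tensorflow as tf` works identically
tf.disable_eager_execution()

x_placeholder = tf.placeholder(tf.float64, shape=[])
# Toy loss: it depends only on the placeholder, and no tf.Variable exists in the graph.
loss = tf.square(x_placeholder)

try:
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)
except ValueError as e:
    err_msg = str(e)
    print(err_msg)  # -> No variables to optimize.
```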

So I am not sure how to implement this correctly in an efficient way. Any ideas?

Thanks

1 answer:

Answer 0 (score: 0)

Like this?

    x_placeholder = tf.placeholder(tf.float64, ....)
    loss = func(x_placeholder)
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)

    with tf.Session() as sess:
        # do some initial computation
        x = ......

        for i in range(iters):
            sess.run(tf.global_variables_initializer())

            for j in range(train_steps):
                per_loss = sess.run([train_op, loss], feed_dict={x_placeholder: x})

            # update x
            x = ......
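For this pattern to work, func must create at least one trainable tf.Variable when it is called (once, at graph-construction time); minimize() then has parameters to differentiate against, and each outer iteration only re-initializes those variables and feeds the new x. A runnable sketch of the pattern, in which the quadratic func, the learning rate, and the x update are illustrative stand-ins for the real ones (written with tf.compat.v1 so it also runs under TensorFlow 2):

```python
import tensorflow.compat.v1 as tf  # on TF 1.x, `import tensorflow as tf` works identically
tf.disable_eager_execution()

# Build the graph ONCE, outside all loops.
x_placeholder = tf.placeholder(tf.float64, shape=[])
w = tf.Variable(0.0, dtype=tf.float64)           # the inner model's trainable parameter
loss = tf.square(w - x_placeholder)              # stand-in for func(x_placeholder)
train_op = tf.train.AdamOptimizer(1e-1).minimize(loss)
init_op = tf.global_variables_initializer()

iters, train_steps = 3, 200
with tf.Session() as sess:
    x = 1.0                                      # stand-in for the initial computation
    for i in range(iters):
        sess.run(init_op)                        # reset the inner model for this outer step
        for j in range(train_steps):
            _, per_loss = sess.run([train_op, loss],
                                   feed_dict={x_placeholder: x})
        x = x + 1.0                              # stand-in for the real update of x
print("final inner loss:", per_loss)
```

Because the graph is constructed once, each sess.run call is cheap; the original version paid for a new minimize() call, and therefore a growing graph, on every outer iteration, which is what made it slow.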