I've built a neural network model, and I want to change the value of a certain tensor within a TensorFlow session.
For example, leaving the rest of the model aside for simplicity, suppose we optimize with this op:
# construct an optimizer
train_op = tf.train.AdamOptimizer(learning_rate=0.05).minimize(cost)
After that, I can run the model to train it.
But I want to open a session and change the value used by the train_op tensor. For example, I currently have this:
with tf.Session() as sess:
    # initialize all variables
    tf.initialize_all_variables().run()
    for i in range(iteraciones):
        # Prepare input (minibatch) to feed the model
        input_ = trainCluster0[0:len(train)]
        # train model
        sess.run(train_op, feed_dict={X: input_})
        print(i, sess.run(cost, feed_dict={X: train}))
        # Save model in last epoch
        if i == iteraciones - 1:
            save_path = saver.save(sess, "/tmp/model.ckpt")
            print("Model saved.")
And I'd like something like this:
with tf.Session() as sess:
    # initialize all variables
    tf.initialize_all_variables().run()
    # Change value of tensor train_op
    # train_op = tf.train.AdamOptimizer(learning_rate=value).minimize(cost)
    ...
    ...
    for i in range(iteraciones):
        # Prepare input (minibatch) to feed the model
        input_ = trainCluster0[0:len(train)]
        # train model
        sess.run(train_op, feed_dict={X: input_})
        print(i, sess.run(cost, feed_dict={X: train}))
        # Save last epoch and test
        if i == iteraciones - 1:
            save_path = saver.save(sess, "/tmp/model.ckpt")
            print("Model saved.")
How can I do this? That is, reuse the same model with different optimizer parameters.
Thanks.
Answer 0 (score: 0)
Solved, thanks to @jdehesa.
The solution is to add this placeholder to the model:
# Placeholder to parameterize the optimizer's learning rate
learning = tf.placeholder("float", name='learning')
# construct an optimizer
train_op = tf.train.AdamOptimizer(learning).minimize(cost)
And then feed it in the sess.run calls:
with tf.Session() as sess:
    # we need to initialize all variables
    tf.initialize_all_variables().run()
    RATIO = 0.001
    ITERATIONS = 1000
    for i in range(ITERATIONS):
        # Prepare input (minibatch) to feed the autoencoder
        input_ = trainCluster0[0:len(trainCluster0)]
        # train autoencoder
        sess.run(train_op, feed_dict={X: input_, learning: RATIO})
        print(i, sess.run(cost, feed_dict={X: input_}))
        # Save last epoch and test
        if i == ITERATIONS - 1:
            save_path = saver.save(sess, "/tmp/modelCluster0.ckpt")
            print("Model saved.")
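Note that because `learning` is fed through `feed_dict`, its value can differ on every `sess.run` call, e.g. to decay the rate partway through training. A minimal pure-Python analogy of that pattern (no TensorFlow; the quadratic cost and `train_step` here are made up purely for illustration):

```python
# Pure-Python analogy of the placeholder pattern (no TensorFlow).
# The update rule is defined once, but the learning rate is supplied on
# every call -- like feeding the 'learning' placeholder via feed_dict.

def train_step(w, learning):
    # one gradient-descent step on cost(w) = (w - 3)^2; gradient = 2*(w - 3)
    return w - learning * 2.0 * (w - 3.0)

w = 0.0
rate = 0.1
for i in range(200):
    # the rate fed in can change between steps, e.g. a simple decay
    if i == 100:
        rate = 0.01
    w = train_step(w, learning=rate)

print(round(w, 6))  # converges to the minimum at w = 3
```

The key point is the same as with the placeholder: nothing about the update rule has to be rebuilt to try a different learning rate, so the model can be reused with different optimization parameters.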