Why use a placeholder when we already know its value?

Date: 2018-09-03 00:22:33

Tags: tensorflow

I am reading "Hands-On Machine Learning with Scikit-Learn and TensorFlow". The code below is for reusing a model trained in another framework. We already know original_w and original_b, and we want to assign them to hidden1/weights and hidden1/biases respectively. My question is: why use the original_weights placeholder at all? Can't we just call tf.assign(hidden1_weights, original_w) directly?

import tensorflow as tf
from tensorflow.contrib.layers import fully_connected

original_w = [...]  # Load the weights from the other framework
original_b = [...]  # Load the biases from the other framework

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
hidden1 = fully_connected(X, n_hidden1, scope="hidden1")
[...]  # Build the rest of the model

# Get a handle on the variables created by fully_connected()
with tf.variable_scope("", default_name="", reuse=True):  # root scope
    hidden1_weights = tf.get_variable("hidden1/weights")
    hidden1_biases = tf.get_variable("hidden1/biases")

# Create nodes to assign arbitrary values to the weights and biases
original_weights = tf.placeholder(tf.float32, shape=(n_inputs, n_hidden1))
original_biases = tf.placeholder(tf.float32, shape=(n_hidden1))

assign_hidden1_weights = tf.assign(hidden1_weights, original_weights)
assign_hidden1_biases = tf.assign(hidden1_biases, original_biases)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    sess.run(assign_hidden1_weights, feed_dict={original_weights: original_w})
    sess.run(assign_hidden1_biases, feed_dict={original_biases: original_b})
    [...]  # Train the model on your new task

1 Answer:

Answer 0 (score: 0)

Yes, you can simply run sess.run(tf.assign(hidden1_weights, original_w)) first and then run your target ops.
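
For illustration, here is a minimal standalone sketch of that direct approach, assuming TensorFlow 1.x; the shapes and random arrays below are just stand-ins for the weights actually loaded from the other framework, and the variables are created directly instead of through fully_connected() to keep the example short. Passing the NumPy arrays straight to tf.assign() embeds them in the graph as constant nodes, so no placeholders or feed_dict are needed.

import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the values loaded from the other framework
original_w = np.random.rand(3, 2).astype(np.float32)
original_b = np.random.rand(2).astype(np.float32)

# The variables we want to overwrite
hidden1_weights = tf.get_variable("hidden1/weights", shape=(3, 2))
hidden1_biases = tf.get_variable("hidden1/biases", shape=(2,))

# Direct assignment: the NumPy arrays become constants inside the assign ops
assign_hidden1_weights = tf.assign(hidden1_weights, original_w)
assign_hidden1_biases = tf.assign(hidden1_biases, original_b)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run([assign_hidden1_weights, assign_hidden1_biases])
    print(sess.run(hidden1_weights))  # now holds original_w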