I am looking at the Mechanics section of the TensorFlow documentation, specifically the part on shared variables. In the "Problem" section, they deal with a convolutional neural network and provide the following code (which runs an image through the model):
# First call creates one set of variables.
result1 = my_image_filter(image1)
# Another set is created in the second call.
result2 = my_image_filter(image2)
If the model is implemented this way, is it then impossible to learn/update the parameters, because a new set of parameters is created for each image in my training set?
Edit: I also tried the "problem" approach on a simple linear regression example, and this way of implementing it does not seem to cause any issue. Training works, as the last line of the code shows. So I am wondering whether there is a subtle difference between what the TensorFlow documentation describes and what I am doing:
import tensorflow as tf
import numpy as np

trX = np.linspace(-1, 1, 101)
trY = 2 * trX + np.random.randn(*trX.shape) * 0.33  # create a y value which is approximately linear but with some random noise

X = tf.placeholder("float")  # create symbolic variables
Y = tf.placeholder("float")

def model(X):
    with tf.variable_scope("param"):
        w = tf.Variable(0.0, name="weights")  # create a shared variable (like theano.shared) for the weight matrix
    return tf.mul(X, w)  # lr is just X*w, so this model line is pretty simple

y_model = model(X)

cost = tf.pow(Y - y_model, 2)  # use squared error for the cost function

train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)  # construct an optimizer to minimize cost and fit a line to my data

sess = tf.Session()
init = tf.initialize_all_variables()  # you need to initialize variables (in this case just variable w)
sess.run(init)

with tf.variable_scope("train"):
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})

print sess.run(y_model, feed_dict={X: np.array([1, 2, 3])})
Answer 0 (score: 9)
The set of variables only needs to be created once for the entire training (and test) set. The purpose of variable scopes is to allow modularization of subsets of parameters, such as the parameters belonging to a layer (e.g., when a layer's architecture is repeated, the same variable names can be used within each layer's scope).
In your example, you create the parameters only in the model function. You can print out the variable name to see that it is assigned to the specified scope:
from __future__ import print_function

X = tf.placeholder("float")  # create symbolic variables
Y = tf.placeholder("float")
print("X:", X.name)
print("Y:", Y.name)

def model(X):
    with tf.variable_scope("param"):
        w = tf.Variable(0.0, name="weights")  # create a shared variable (like theano.shared) for the weight matrix
    print("w:", w.name)
    return tf.mul(X, w)
The call to sess.run(train_op, feed_dict={X: x, Y: y}) only evaluates the value of train_op given the provided X and Y values. No new variables (including parameters) are created there; therefore, the scope has no effect. You can verify that the variable names stay the same by printing them again:
with tf.variable_scope("train"):
    print("X:", X.name)
    print("Y:", Y.name)
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})
You will see that the variable names stay the same, since the variables have already been created and initialized.
If you want to retrieve a variable using its scope, you need to use get_variable within a tf.variable_scope enclosure:
with tf.variable_scope("param"):
    w = tf.get_variable("weights", [1])
    print("w:", w.name)
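As a side note, in TensorFlow's 1.x-style API (available as tf.compat.v1 in modern installs), retrieving a variable that was already created through get_variable additionally requires reuse=True when re-entering the scope. A minimal sketch, assuming a TF 1.x-compatible environment:

```python
import tensorflow.compat.v1 as tf  # on TF 1.x itself, just `import tensorflow as tf`

tf.disable_v2_behavior()

# First entry into the scope creates the variable via get_variable.
with tf.variable_scope("param"):
    w = tf.get_variable("weights", [1])

# Re-entering the same scope with reuse=True returns the existing variable
# instead of raising an error for the duplicate name.
with tf.variable_scope("param", reuse=True):
    w2 = tf.get_variable("weights", [1])

print(w.name)   # param/weights:0
print(w is w2)  # True: the same Variable object is returned
```

Without reuse=True, a second get_variable("weights", ...) in the same scope would raise a ValueError, which is how TF 1.x guards against accidentally sharing (or accidentally duplicating) parameters.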