I have a question similar to this one.
Because my resources are limited and I am using a deep model (VGG-16) to train a triplet network, I would like to accumulate the gradients over 128 training examples (an effective batch of size 128) and only then back-propagate the error and update the weights.
I am not sure how to do this. I work with TensorFlow, but any implementation/pseudocode is welcome.
Answer 0: (score: 15)
Let's take a look at the code proposed in the answer you linked:
## Optimizer definition - nothing different from any classical example
opt = tf.train.AdamOptimizer()
## Retrieve all trainable variables you defined in your graph
tvs = tf.trainable_variables()
## Creation of a list of variables with the same shape as the trainable ones
# initialized with 0s
accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs]
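## Ops that reset the accumulators to zero before starting a new accumulation cycle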
zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]
## Calls the compute_gradients function of the optimizer to obtain... the list of gradients
gvs = opt.compute_gradients(rmse, tvs)
## Add each computed gradient to the matching accumulator (works because accum_vars and gvs are in the same order)
accum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]
## Define the training step (part with variable value update)
train_step = opt.apply_gradients([(accum_vars[i], gv[1]) for i, gv in enumerate(gvs)])
This first part basically adds new variables and ops to your graph, so that you can accumulate the gradients with accum_ops into the (list of) variables accum_vars, and update the model weights with train_step.
Then, to use it during training, you have to follow these steps (still taken from the answer you linked):
## The while loop for training
while ...:
    # Run zero_ops to reset the gradient accumulators
    sess.run(zero_ops)
    # Accumulate the gradients 'n_minibatches' times in accum_vars using accum_ops
    for i in xrange(n_minibatches):
        sess.run(accum_ops, feed_dict={X: Xs[i], y: ys[i]})
    # Run the train_step ops to update the weights based on your accumulated gradients
    sess.run(train_step)
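Note that this applies the sum of the gradients of n_minibatches mini-batches, so the effective step is n_minibatches times larger than for a single mini-batch. If you instead want the update to match the average gradient of the accumulated batch, you can divide before applying. A minimal sketch reusing opt, accum_vars, gvs, and n_minibatches from the code above (summing vs. averaging is a design choice, not part of the original answer):

## Optional variant: average the accumulated gradients instead of summing them,
## so the effective learning rate does not grow with n_minibatches
train_step = opt.apply_gradients(
    [(accum_vars[i] / n_minibatches, gv[1]) for i, gv in enumerate(gvs)])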
Answer 1: (score: 2)
TensorFlow 2.0 compatible answer: In line with Pop's answer above and the explanation provided on the TensorFlow website, below is the code for accumulating gradients in TensorFlow 2.0:
def train(epochs):
    for epoch in range(epochs):
        for (batch, (images, labels)) in enumerate(dataset):
            with tf.GradientTape() as tape:
                logits = mnist_model(images, training=True)
                tvs = mnist_model.trainable_variables
                accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs]
                zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]
                loss_value = loss_object(labels, logits)
            loss_history.append(loss_value.numpy().mean())
            grads = tape.gradient(loss_value, tvs)
            #print(grads[0].shape)
            #print(accum_vars[0].shape)
            accum_ops = [accum_vars[i].assign_add(grad) for i, grad in enumerate(grads)]
            optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
        print('Epoch {} finished'.format(epoch))

# call the above function
train(epochs = 3)
The full code is available in this Github Gist.
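For completeness, here is a minimal TF2-style sketch (not from either answer) that sums the gradients over several mini-batches and only then calls apply_gradients once. mnist_model, loss_object, optimizer, and dataset are assumed to exist as in the answer above; n_accum_steps is a made-up name for the number of accumulated mini-batches:

import tensorflow as tf

n_accum_steps = 4  # number of mini-batches to accumulate per update (assumption)

# One persistent, zero-initialized accumulator per trainable variable
accum_vars = [tf.Variable(tf.zeros_like(v), trainable=False)
              for v in mnist_model.trainable_variables]

def train_step_accum(batches):
    # Reset the accumulators at the start of each accumulation cycle
    for acc in accum_vars:
        acc.assign(tf.zeros_like(acc))
    # Sum the gradients of each mini-batch into the accumulators
    for images, labels in batches:
        with tf.GradientTape() as tape:
            logits = mnist_model(images, training=True)
            loss_value = loss_object(labels, logits)
        grads = tape.gradient(loss_value, mnist_model.trainable_variables)
        for acc, grad in zip(accum_vars, grads):
            acc.assign_add(grad)
    # Average the accumulated gradients and apply them in a single weight update
    optimizer.apply_gradients(
        [(acc / n_accum_steps, v)
         for acc, v in zip(accum_vars, mnist_model.trainable_variables)])

# Example driver: group the dataset's mini-batches and update once per group
batch_buffer = []
for images, labels in dataset:
    batch_buffer.append((images, labels))
    if len(batch_buffer) == n_accum_steps:
        train_step_accum(batch_buffer)
        batch_buffer = []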