To reduce the number of synchronization rounds in distributed training, I want to accumulate gradients locally first. It is like having multiple GPUs, but running them serially rather than in parallel.
I want to use this inside the estimator.train loop with distribution strategies such as MirroredStrategy and CollectiveAllReduce.
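The idea, as a rough framework-agnostic sketch (all names below are illustrative, not my actual code):

    # Illustrative pseudocode only: run N passes serially on each GPU, summing
    # gradients into local buffers, then synchronize/apply once per N passes.
    accum = [0.0 for _ in params]
    for _ in range(N):
        grads = compute_gradients(next_batch())        # one serial pass
        accum = [a + g for a, g in zip(accum, grads)]
    mean_grads = [a / N for a in accum]
    apply_and_all_reduce(mean_grads)                   # single sync per N passes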
Here is my implementation; please give me some input :)
First, since I need to run extra ops in session.run(), I modified estimator.EstimatorSpec to carry more ops. Second, there seems to be no clean way to create local, non-shared variables on a local GPU inside a distribution-strategy scope, so I had to hack a variable_creator_scope.
Here is the hacked variable_creator function:
import tensorflow as tf
from tensorflow.python.ops import resource_variable_ops

def skip_all_scope_variable_creator(next_creator=None, on_device=None, **kwargs):
    #print("skip_all_scope_variable_creator:[{}]".format(kwargs))
    initial_value = kwargs.get("initial_value", None)
    trainable = kwargs.get("trainable", None)
    collections = kwargs.get("collections", None)
    validate_shape = kwargs.get("validate_shape", True)
    caching_device = kwargs.get("caching_device", None)
    name = kwargs.get("name", None)
    variable_def = kwargs.get("variable_def", None)
    dtype = kwargs.get("dtype", None)
    expected_shape = kwargs.get("expected_shape", None)  # unused by ResourceVariable
    import_scope = kwargs.get("import_scope", None)
    constraint = kwargs.get("constraint", None)
    use_resource = kwargs.get("use_resource", None)  # unused; we always create a ResourceVariable
    # Ignore next_creator entirely: pin the variable to the given device,
    # bypassing the distribution strategy's variable creator.
    with tf.device(on_device):
        return resource_variable_ops.ResourceVariable(
            initial_value=initial_value, trainable=trainable,
            collections=collections, validate_shape=validate_shape,
            caching_device=caching_device, name=name, dtype=dtype,
            constraint=constraint, variable_def=variable_def,
            import_scope=import_scope)
Here is the code in my model_fn() that creates the three ops:
loss = loss_from_model
optimizer = some_optimizer
tvars = tf.trainable_variables()
gradients = optimizer.compute_gradients(
    loss, tvars, colocate_gradients_with_ops=True)
accumulate_pass_num = FLAGS.pass_per_batch
if accumulate_pass_num > 1:
    accum_grads = []
    accum_vars = []
    reset_grad_ops = []
    accum_grad_ops = []
    for g, v in gradients:
        accum_vars.append(v)
        if g is not None:
            with tf.variable_creator_scope(lambda next_creator=None, **kwargs: skip_all_scope_variable_creator(next_creator, g.device, **kwargs)):
                print("create accum_grad for variable:{}".format(v.name))
                # Local (per-device) accumulator for this variable's gradient.
                tmp_grad_on_device = tf.Variable(
                    tf.zeros_like(g), trainable=False,
                    synchronization=tf.VariableSynchronization.ON_READ,
                    collections=[tf.GraphKeys.LOCAL_VARIABLES],
                    name='tmp_accum_grad')
            # "Reset" assigns the current gradient (rather than zeros), so the
            # reset run also counts as the first accumulation pass.
            reset_one_grad_op = tf.assign(tmp_grad_on_device, g, name="reset_accumulated_gradient_op")
            reset_grad_ops.append(reset_one_grad_op)
            # assign_add returns the updated value
            accum_grad_on_device = tmp_grad_on_device.assign_add(g, name="accumulate_gradient")
            accum_grad_ops.append(accum_grad_on_device)
            accum_grads.append(accum_grad_on_device)
        else:
            accum_grads.append(None)
    accumulate_gradients_op = tf.group(*accum_grad_ops, name="grouped_accu_grad_op")
    reset_gradients_op = tf.group(*reset_grad_ops, name="grouped_reset_gradients_op")
    # Average the accumulated gradients over the number of passes.
    accum_grad_means = [tf.multiply(v, 1.0 / accumulate_pass_num) if v is not None else None for v in accum_grads]
    accum_grads_vars = zip(accum_grad_means, accum_vars)
    minimize_op = optimizer.apply_gradients(
        accum_grads_vars, global_step=global_step, name="train")
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = tf.group(minimize_op, update_ops)
return tf.estimator.EstimatorSpec(
    mode=mode, loss=loss, train_op=train_op,
    accumulate_gradients_op=accumulate_gradients_op,
    reset_gradients_op=reset_gradients_op,
    accumulate_pass_num=accumulate_pass_num)
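(For reference: the stock tf.estimator.EstimatorSpec is a namedtuple that rejects unknown keyword arguments, which is why I had to modify it. A minimal sketch of an alternative that avoids patching TensorFlow itself; ExtendedSpec is a hypothetical name, not what I actually did:)

    import collections

    # Hypothetical wrapper: bundle the standard spec with the extra ops instead
    # of modifying tf.estimator.EstimatorSpec in place.
    ExtendedSpec = collections.namedtuple(
        "ExtendedSpec",
        ["spec", "accumulate_gradients_op", "reset_gradients_op", "accumulate_pass_num"])

    extended = ExtendedSpec(
        spec=tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op),
        accumulate_gradients_op=accumulate_gradients_op,
        reset_gradients_op=reset_gradients_op,
        accumulate_pass_num=accumulate_pass_num)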
And here estimator.train() is modified to run the different ops:
while not mon_sess.should_stop():
    if estimator_spec.accumulate_pass_num > 1:
        # Reset gradients first. Since the reset op assigns (rather than zeros)
        # the current gradient, this run is pass 1; the loop below adds passes
        # 2..N-1, and the final train_op run triggers the assign_add once more
        # (pass N), because apply_gradients consumes the assign_add outputs.
        mon_sess.run([estimator_spec.reset_gradients_op])
        for _ in range(estimator_spec.accumulate_pass_num - 2):
            mon_sess.run([estimator_spec.accumulate_gradients_op])
    _, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
I tried it with the Transformer model in the Google official models repository, and the results were good.
My question is: is there a better way to do this?
Should I consider using tf.cond() to select which ops to return from model_fn, so that Estimator and EstimatorSpec would not need to be modified? But that seems difficult :(
Thank you very much!
Dong
Answer 0 (score: 1):
I think you can achieve this by passing the train ops through to the Estimator. Calling a TensorFlow op on its own inside the estimator's model_fn has no effect at all: by design, model_fn is called only once to build the graph, so an op you merely create there is not executed on every training step. Beyond that, all tf.cond branches are built during the model_fn call (you can verify this behavior with a simple conditional logging op). The key to implementing gradient accumulation is:
Only the ops passed to the EstimatorSpec or to training_hooks are executed dynamically during training.
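To illustrate the point, a sketch with hypothetical names (minimize_op stands for whatever apply_gradients returned): the first assign_add below is built during the single model_fn call but never executed, while the second runs every step because it is reachable from the returned train_op.

    # Hypothetical illustration inside model_fn:
    counter = tf.get_variable("counter", shape=[], dtype=tf.float32,
                              initializer=tf.zeros_initializer, trainable=False)
    counter.assign_add(1.0)  # built once, never run: counter stays 0 during training

    # Wiring an op into the returned train_op is what makes it run each step:
    train_op = tf.group(minimize_op, counter.assign_add(1.0))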
Here is my code, for fine-tuning BERT with limited GPU memory:
# compute batch gradient
grads = tf.gradients(loss, tvars)
(grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
# This is a list of sum(dy/dx) for each variable, which must stay paired with the tvars list.
# An element may be an IndexedSlices object, which does not support assigning,
# e.g. [g.assign(value) for g in grads].
# Some of the elements are None, meaning y and x do not depend on each other.
# None must be handled in Python; TensorFlow cannot convert None to 0.

# declare temp variables for summation
sum_gradient = [tf.get_variable(name="sum_grads" + str(i), shape=tv.shape,
                                initializer=tf.zeros_initializer,
                                trainable=False,
                                dtype=tf.float32,
                                collections=[tf.GraphKeys.LOCAL_VARIABLES]) for i, tv in enumerate(tvars)]
sum_ops = []
unused_variable_in_batch = []

# gradient accumulation
for i, gv in enumerate(grads):
    if gv is not None:
        sum_ops.append(sum_gradient[i].assign_add(gv, name="accumulate_gradient"))
    else:
        unused_variable_in_batch.append(sum_gradient[i])
        sum_gradient[i] = None

# NOTE: calling .assign_add by itself does NOTHING in the estimator; the ops must
# all be wrapped up and executed via the train_op.
def apply_accumulated_gradients(sums):
    # normalize gradients
    normalize_ops = []
    for i, g in enumerate(sums):
        if g is not None:
            normalize_ops.append(sums[i].assign(tf.multiply(g, 1 / gradient_accmulation_multiplier)))
            # assign to make sure it is still a variable, or else it would become a Tensor
    with tf.control_dependencies(normalize_ops):
        minimize_op = optimizer.apply_gradients(zip(sums, tvars), global_step=global_step)
    return tf.group(minimize_op, *normalize_ops, name="apply_accumulated_gradients")

# NOTE: the false branch applies all-None gradients as a no-op. This appears to
# rely on BERT's custom AdamWeightDecayOptimizer, whose apply_gradients skips
# None gradients; the standard tf.train optimizers would raise ValueError here.
train_op = tf.cond(tf.math.equal(global_step % gradient_accmulation_multiplier, 0),
                   lambda: apply_accumulated_gradients(sum_gradient),
                   lambda: optimizer.apply_gradients(zip([None for _ in grads], tvars), global_step=global_step))

# reset accumulation when necessary
def reset():
    counter = 0
    for i, s in enumerate(sum_gradient):
        if s is None:
            # restore the reference from None back to the original variable
            sum_gradient[i] = unused_variable_in_batch[counter]
            counter += 1
    return tf.group([s.assign(tf.zeros_like(s)) for s in sum_gradient])

with tf.control_dependencies([train_op]):
    reset_ops = tf.cond(tf.math.equal(do_update, 1.),
                        reset,
                        tf.no_op)
    # The two branches must have identical structure: [op1, op2, ...] || no_op is
    # not a valid pair of cond branches. tf.group converts all resets into one op
    # so it matches tf.no_op: tf.group(...) || no_op.

# Increment global step (AdamWeightDecayOptimizer's apply_gradients does not
# increment global_step itself, hence the manual increment here).
new_global_step = global_step + 1
train_op = tf.group(*sum_ops, [train_op, global_step.assign(new_global_step), reset_ops])

logging_hook = tf.train.LoggingTensorHook({"accuracy": "acc"},
                                          every_n_iter=gradient_accmulation_multiplier)

output_spec = tf.estimator.EstimatorSpec(
    mode=mode,
    loss=loss,
    train_op=train_op,
    training_hooks=[logging_hook, accumulation_hook]  # wrap with a list
)
I applied clipping to the batch gradients and simply took their mean. This approach worked for me, but I recommend keeping a close eye on the loss behavior on your own dataset.
Also, about tf.cond(tf.math.equal(do_update, 1.), ...): do_update is a variable managed by a hook, and it takes the value 1 once every gradient_accmulation_multiplier steps. So this statement does exactly the same thing as tf.math.equal(global_step % gradient_accmulation_multiplier, 0); it is just another way to do it.
The code of the hook is as follows:
from tensorflow.python.training import session_run_hook

class GradientAccumulationHook(session_run_hook.SessionRunHook):
    """
    Sets a given tf.Variable to 1 once every `frequency` steps.
    """
    def __init__(self, frequency, variable):
        self._step = 0
        self._flag = 0.
        self._freq = frequency
        self._input_placeholder = tf.placeholder(tf.float32)
        self.assign_op = variable.assign(self._input_placeholder)

    def begin(self):
        # a hook may modify the graph in begin(); after this the graph is finalized
        self._step = tf.train.get_global_step()

    def before_run(self, run_context):
        step = run_context.session.run(self._step)  # evaluate the tensor to get the step number
        self._flag = 1. if step % self._freq == 0 and step != 0 else 0.
        run_context.session.run(self.assign_op, feed_dict={self._input_placeholder: self._flag})
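For completeness, a sketch of how do_update and accumulation_hook from the code above might be wired together inside model_fn; the exact get_variable arguments are an assumption, not shown above:

    # Assumed wiring: create the flag variable the hook toggles, then build the
    # hook around it and pass it via training_hooks.
    do_update = tf.get_variable("do_update", shape=[], dtype=tf.float32,
                                initializer=tf.zeros_initializer, trainable=False)
    accumulation_hook = GradientAccumulationHook(gradient_accmulation_multiplier, do_update)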