I fine-tuned the gpt-2 model following this tutorial:
with the associated GitHub repository:
https://github.com/nshepperd/gpt-2
I have been able to reproduce the examples, but my problem is that I cannot find a parameter to set the number of iterations. The training script prints a sample every 100 iterations and saves a model checkpoint every 1000 iterations, but there seems to be no parameter to train it for, say, 5000 iterations and then stop.
The training script is here: https://github.com/nshepperd/gpt-2/blob/finetuning/train.py
Edit:
As cronoik suggested, I am trying to replace the while loop with a for loop.
These are the changes I am making:
1) Add an additional parameter:
parser.add_argument('--training_steps', metavar='STEPS', type=int, default=1000, help='a number representing how many training steps the model shall be trained for')
2) Change the loop:
try:
    for iter_count in range(training_steps):
        if counter % args.save_every == 0:
            save()
3) Use the new parameter:
python3 train.py --training_steps 300
But I get this error:
File "train.py", line 259, in main
    for iter_count in range(training_steps):
NameError: name 'training_steps' is not defined
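(For reference: the NameError comes from argparse itself. Parsed values are stored as attributes of the namespace returned by parse_args(), so the loop has to read args.training_steps, not a bare training_steps. A minimal, self-contained sketch, not the actual train.py:)

```python
# Minimal sketch: argparse stores parsed values on the returned namespace,
# so the loop must reference args.training_steps rather than training_steps.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--training_steps', metavar='STEPS', type=int, default=1000,
                    help='how many training steps to run before exiting')
# Equivalent to invoking: python3 train.py --training_steps 300
args = parser.parse_args(['--training_steps', '300'])

steps_run = 0
for iter_count in range(args.training_steps):  # args.training_steps, not training_steps
    steps_run += 1

print(steps_run)  # 300
```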
Answer 0 (score: 1)
All you have to do is change the while True loop into a for loop:
try:
    # replaced
    # while True:
    for i in range(5000):
        if counter % args.save_every == 0:
            save()
        if counter % args.sample_every == 0:
            generate_samples()
        if args.val_every > 0 and (counter % args.val_every == 0 or counter == 1):
            validation()

        if args.accumulate_gradients > 1:
            sess.run(opt_reset)
            for _ in range(args.accumulate_gradients):
                sess.run(
                    opt_compute, feed_dict={context: sample_batch()})
            (v_loss, v_summary) = sess.run((opt_apply, summaries))
        else:
            (_, v_loss, v_summary) = sess.run(
                (opt_apply, loss, summaries),
                feed_dict={context: sample_batch()})

        summary_log.add_summary(v_summary, counter)

        avg_loss = (avg_loss[0] * 0.99 + v_loss,
                    avg_loss[1] * 0.99 + 1.0)

        print(
            '[{counter} | {time:2.2f}] loss={loss:2.2f} avg={avg:2.2f}'
            .format(
                counter=counter,
                time=time.time() - start_time,
                loss=v_loss,
                avg=avg_loss[0] / avg_loss[1]))

        counter += 1
except KeyboardInterrupt:
    print('interrupted')
    save()
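(To wire the asker's --training_steps flag into that loop, the bound of the for loop just becomes args.training_steps. A stripped-down, runnable sketch of the same pattern; save() here is a placeholder for the script's real checkpoint function, and the session calls are omitted:)

```python
# Hedged sketch: bounded training loop driven by a --training_steps flag.
import argparse

def save():  # placeholder for train.py's checkpointing
    print('saved')

parser = argparse.ArgumentParser()
parser.add_argument('--training_steps', type=int, default=1000)
args = parser.parse_args(['--training_steps', '5'])  # simulates the CLI flag

counter = 1
try:
    for _ in range(args.training_steps):
        # ... one optimization step would run here ...
        counter += 1
except KeyboardInterrupt:
    print('interrupted')
save()  # final checkpoint once the loop finishes
print(counter)
```

After the loop completes, the script falls through, saves a last checkpoint, and exits, which is exactly the "train for N iterations then shut down" behavior the question asks for.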