Why is the accuracy completely different when I train and test on the same dataset?

Asked: 2020-03-23 12:23:36

Tags: python tensorflow2.0

I ran into a confusing problem while training a Transformer translation model based on TensorFlow 2.0. During training, the loss converges, but the accuracy rises to about 50% and then plateaus. I assumed the model was at fault, but when I evaluate the final model's translations sentence by sentence, the accuracy is about 99.6%. What could be the problem? Below is the accuracy-related part of my program; the entire program is based on the TensorFlow Transformer tutorial. My code is as follows:

for epoch in range(EPOCHS):
    start = time.time()

    train_loss.reset_states()
    train_accuracy.reset_states()

    # inp -> portuguese, tar -> english
    for (batch, (inp, tar)) in enumerate(train_dataset):
        train_step(inp, inp)
    with train_summary_writer.as_default():
        tf.summary.scalar('loss', train_loss.result(), step=epoch)
        tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)

    if (epoch + 1) % 5 == 0:
        ckpt_save_path = ckpt_manager.save()
        print('Saving checkpoint for epoch {} at {}'.format(epoch + 1,
                                                            ckpt_save_path))

    print('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
                                                        train_loss.result(),
                                                        train_accuracy.result()))

    print('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
    start = time.time()
    train_loss.reset_states()
    train_accuracy.reset_states()
    for (batch, (inp, tar)) in enumerate(test_dataset):
        test_step(inp, inp)
    with test_summary_writer.as_default():
        tf.summary.scalar('loss', train_loss.result(), step=epoch)
        tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)
    if train_loss.result() < min_loss:
        min_loss = train_loss.result()
        num_loss = 0
    else:
        num_loss = num_loss + 1

    print(' Test Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
                                                              train_loss.result(),
                                                              train_accuracy.result()))

    print('Time taken for 1 test epoch: {} secs\n'.format(time.time() - start))
    if num_loss > 5:
        break
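One possible source of such a gap (this is an assumption, not something confirmed by the code above) is how the accuracy metric handles padding: the TensorFlow Transformer tutorial's token-level accuracy masks out padding positions, so a metric that counts padding tokens as matches, or a sentence-level evaluation that ignores them entirely, can report a very different number on the same data. The plain-NumPy sketch below, with a hypothetical pad id of 0, illustrates how masked and unmasked token accuracy diverge:

```python
import numpy as np

def masked_token_accuracy(real, pred_ids):
    """Token accuracy over non-padding positions only (pad id assumed to be 0)."""
    mask = real != 0                       # True where the token is not padding
    matches = (real == pred_ids) & mask    # correct predictions at real tokens
    return matches.sum() / mask.sum()

# Two target sentences, padded to length 5 with id 0 (hypothetical data).
real = np.array([[5, 9, 2, 0, 0],
                 [7, 3, 1, 4, 0]])
pred = np.array([[5, 9, 8, 0, 0],
                 [7, 3, 1, 4, 0]])

unmasked = (real == pred).mean()               # padding positions count as "correct"
masked = masked_token_accuracy(real, pred)     # padding positions excluded
print(unmasked, masked)                        # 0.9 vs ~0.857
```

The more padding a batch contains, the further the two numbers drift apart, so it is worth checking that the metric updated in `train_step`/`test_step` applies the same mask as the sentence-by-sentence evaluation.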

0 Answers:
