TPU suddenly stops training

Asked: 2019-11-18 15:05:19

Tags: google-compute-engine tpu google-cloud-tpu

I am trying to train a Transformer model with a TPU in Google Cloud, following the instructions in the official tutorial.

Loading the data worked fine, and after running
t2t-trainer \
  --model=transformer \
  --hparams_set=transformer_tpu \
  --problem=translate_ende_wmt32k_packed \
  --train_steps=500000 \
  --eval_steps=3000 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu=True \
  --cloud_tpu_name=$TPU_NAME

training does indeed start as expected, and the output may look like this:

I1118 14:48:18.978163 140580835792320 tpu_estimator.py:2307] global_step/sec: 15.2942
INFO:tensorflow:examples/sec: 978.827                                                                                             
I1118 14:48:18.978595 140580835792320 tpu_estimator.py:2308] examples/sec: 978.827                                                
INFO:tensorflow:Enqueue next (100) batch(es) of data to infeed.                                               
I1118 14:48:18.979720 140580835792320 tpu_estimator.py:600] Enqueue next (100) batch(es) of data to infeed.                       
INFO:tensorflow:Dequeue next (100) batch(es) of data from outfeed.                                                                
I1118 14:48:18.979935 140580835792320 tpu_estimator.py:604] Dequeue next (100) batch(es) of data from outfeed.
I1118 14:48:24.292932 140577566803712 transport.py:157] Attempting refresh to obtain initial access_token                         
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-8 in state READY, and health HEALTHY.                                         
W1118 14:48:24.353135 140577566803712 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-8 in state READY, and health HEALTHY.
INFO:tensorflow:loss = 1.8486812, step = 113800 (6.536 sec)                                                                       
I1118 14:48:25.512768 140580835792320 basic_session_run_hooks.py:260] loss = 1.8486812, step = 113800 (6.536 sec)                 
INFO:tensorflow:global_step/sec: 15.2986                                                                 
I1118 14:48:25.514695 140580835792320 tpu_estimator.py:2307] global_step/sec: 15.2986                                             
INFO:tensorflow:examples/sec: 979.11                                                                                              
I1118 14:48:25.515115 140580835792320 tpu_estimator.py:2308] examples/sec: 979.11                                
INFO:tensorflow:Enqueue next (100) batch(es) of data to infeed.                                                                   
I1118 14:48:25.516618 140580835792320 tpu_estimator.py:600] Enqueue next (100) batch(es) of data to infeed.                       
INFO:tensorflow:Dequeue next (100) batch(es) of data from outfeed.                                       
I1118 14:48:25.516829 140580835792320 tpu_estimator.py:604] Dequeue next (100) batch(es) of data from outfeed.                    
INFO:tensorflow:Outfeed finished for iteration (388, 47)                                                                          
I1118 14:48:28.761935 140577575196416 tpu_estimator.py:279] Outfeed finished for iteration (388, 47)       
INFO:tensorflow:loss = 1.5237397, step = 113900 (6.573 sec)                                                                       
I1118 14:48:32.086134 140580835792320 basic_session_run_hooks.py:260] loss = 1.5237397, step = 113900 (6.573 sec)

However, sometimes, and after an indeterminate number of iterations (sometimes fewer than 25k, sometimes more than 400k, sometimes never), training suddenly stops. There is no error message, but no further progress is made. In this case I get the following output:

I1120 13:40:33.828651 140684764419520 tpu_estimator.py:2307] global_step/sec: 16.3988
INFO:tensorflow:examples/sec: 1049.52
I1120 13:40:33.829339 140684764419520 tpu_estimator.py:2308] examples/sec: 1049.52
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
I1120 13:40:33.830607 140684764419520 tpu_estimator.py:600] Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
I1120 13:40:33.830862 140684764419520 tpu_estimator.py:604] Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Outfeed finished for iteration (7, 0)
I1120 13:40:34.267921 140681504278272 tpu_estimator.py:279] Outfeed finished for iteration (7, 0)
I1120 13:40:39.989195 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:40:40.056418 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:41:10.124164 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:41:10.177670 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:41:40.259634 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:41:40.309398 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:42:10.377460 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health UNKNOWN.
W1120 13:42:10.431982 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health UNKNOWN.
I1120 13:42:40.508342 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:42:40.567739 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:43:10.638391 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:43:10.694900 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:43:40.763782 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:43:40.810777 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:44:10.889873 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:44:10.942733 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:44:41.011034 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:44:41.066553 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.

Note that the reported health was UNKNOWN once, which may or may not be related to this problem.
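Incidentally, the state/health that TPUPollingThread reports can also be checked out-of-band with gcloud. A minimal sketch, using the TPU name and zone from my setup (substitute your own):

# Poll TPU state and health every 30 s, independently of the training job.
while true; do
  gcloud compute tpus describe tpuv3-5 \
    --zone=us-central1-a \
    --format="value(state,health)"
  sleep 30
done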

To continue training, I have to stop the process and run the training command again. It then loads the latest checkpoint and continues training, until it eventually stops again.
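So far that manual restart is my only mitigation. Below is a rough shell sketch that automates it; the log file name and the ten-minute stall threshold are my own assumptions, not anything from the tutorial. Since the trainer resumes from the latest checkpoint in $OUT_DIR, killing and relaunching it appears to be safe:

#!/bin/bash
# Rough watchdog (my own workaround): relaunch t2t-trainer whenever
# its log has not grown for 10 minutes. Note that this loops forever;
# stop it manually once training reaches train_steps.
LOG=train.log          # assumed log file name
STALL_SECS=600         # assumed stall threshold

while true; do
  t2t-trainer \
    --model=transformer \
    --hparams_set=transformer_tpu \
    --problem=translate_ende_wmt32k_packed \
    --train_steps=500000 \
    --eval_steps=3000 \
    --data_dir=$DATA_DIR \
    --output_dir=$OUT_DIR \
    --use_tpu=True \
    --cloud_tpu_name=$TPU_NAME > "$LOG" 2>&1 &
  PID=$!

  # While the trainer is alive, kill it if the log goes quiet;
  # on relaunch it resumes from the latest checkpoint in $OUT_DIR.
  while kill -0 "$PID" 2>/dev/null; do
    sleep 60
    AGE=$(( $(date +%s) - $(stat -c %Y "$LOG") ))
    if [ "$AGE" -ge "$STALL_SECS" ]; then
      kill "$PID"
    fi
  done
  wait "$PID" 2>/dev/null
done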

I am running the training command in a tmux session, so this should not be caused by connection problems between my machine and Google Cloud. In fact, I can close all windows entirely and connect to the running training session from another PC.

I have already seen the question TPU training freezes in the middle of training, but I am using a predefined model, and my bucket is defined in the same region (TPU in us-central1-a, bucket in us-central1).

Edit: In case it is relevant: I am currently on the one-month free trial that I got by applying to the TensorFlow Research Cloud project. Maybe those cluster nodes are less stable than the paid ones?

Edit 2: Maybe this is related to the GitHub issue TPU dies after 3hrs (e.g. with no 'health' state) (and the follow up)? Note that the issue was closed, but the answer given there seems unrelated to the problem. Also, I have checked the file /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tpu/preempted_hook.py in my cloud VM, and both of the linked changes are merged.

2 answers:

Answer 0 (score: 0)

I had the same problem when training on a TFRC TPU. As the warnings suggest, there seems to be a connection problem between the TPU and Google Cloud, even when we follow the instructions.

I tried several workarounds:

  • Delete the gcloud config folder:

    rm -rf ~/.config/gcloud

  • Update the gcloud SDK:

    gcloud components update

  • Allow the TPU to access the Cloud Bucket via IAM (link); see the sketch after this list.
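For the third point, this is roughly the command I used. The project number and bucket name are placeholders; the Cloud TPU service account has the form service-<project-number>@cloud-tpu.iam.gserviceaccount.com:

# Grant the Cloud TPU service account access to the training bucket.
# Project number and bucket name below are placeholders.
TPU_SA="service-123456789012@cloud-tpu.iam.gserviceaccount.com"
gsutil iam ch "serviceAccount:${TPU_SA}:objectAdmin" gs://my-t2t-bucket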

The TPU hanging error still happens, but less frequently. I hope this helps in your situation, or that you can find a general solution.

Answer 1 (score: 0)

This was reported as a bug on GitHub (#1, #2) and was subsequently fixed. If the error persists, you should reply to the second GitHub issue. Note that you may have to recreate the TPU; simply restarting it may not be enough.
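A rough sketch of the delete-and-recreate step; the zone, accelerator type, IP range, and TensorFlow version are assumptions based on the question's setup, so check them against your actual configuration:

# Delete and recreate the TPU (assumed values: v3-8 in us-central1-a,
# TF 1.15, placeholder CIDR range).
gcloud compute tpus delete $TPU_NAME --zone=us-central1-a --quiet
gcloud compute tpus create $TPU_NAME \
  --zone=us-central1-a \
  --range=10.240.1.0/29 \
  --accelerator-type=v3-8 \
  --version=1.15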