Segmentation fault when training Deeplab on Cityscapes

Date: 2019-11-19 16:36:13

Tags: python tensorflow cudnn deeplab

I am currently following the Deeplab training steps to train the xception_65 backbone on the Cityscapes dataset, but unfortunately I run into a segmentation fault. I have not been able to narrow down the error; training on the PASCAL dataset, for example, works fine. I checked the paths to tensorflow and the drivers, as well as several versions and combinations thereof, and so on. Even when I run the train.py script without GPU support, I hit the same segmentation fault. I performed the same steps on another PC and it worked. Does anyone know what the problem is?

My setup:

  • Ubuntu 18.04
  • NVIDIA RTX 2080, driver version 430.65 (installed via .run file)
  • CUDA 10.0 (installed via .run file)
  • cuDNN 7.6.5
  • Python 3.6
  • tensorflow 1.15

When running:

python3 "${WORK_DIR}"/train.py \
  --logtostderr \
  --training_number_of_steps=${NUM_ITERATIONS} \
  --train_split="train_fine" \
  --model_variant="xception_65" \
  --atrous_rates=6 \
  --atrous_rates=12 \
  --atrous_rates=18 \
  --output_stride=16 \
  --decoder_output_stride=4 \
  --train_crop_size="769,769" \
  --train_batch_size=1 \
  --fine_tune_batch_norm=False \
  --dataset="cityscapes" \
  --tf_initial_checkpoint="${INIT_FOLDER}/deeplabv3_cityscapes_train/model.ckpt" \
  --train_logdir="${TRAIN_LOGDIR}" \
  --dataset_dir="${CITYSCAPES_DATASET}" 

I get the following output:

I1119 16:52:49.856512 139832269989696 learning.py:768] Starting Queues.
Fatal Python error: Segmentation fault

Thread 0x00007f2cd086b700 (most recent call first):
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/threading.py", line 296 in wait
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/queue.py", line 170 in get
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/python/summary/writer/event_file_writer.py", line 159 in run
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/threading.py", line 926 in _bootstrap_inner
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/threading.py", line 890 in _bootstrap

Thread 0x00007f2d3cc7e740 (most recent call first):
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443 in _call_tf_sessionrun
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350 in _run_fn
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365 in _do_call
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359 in _do_run
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180 in _run
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956 in run
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/contrib/slim/python/slim/learning.py", line 490 in train_step
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/contrib/slim/python/slim/learning.py", line 775 in train
  File "/home/kuschnig/tensorflow/models/research/deeplab/train.py", line 466 in main
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/absl/app.py", line 250 in _run_main
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/absl/app.py", line 299 in run
  File "/home/kuschnig/anaconda3/envs/conda-tf/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40 in run
  File "/home/kuschnig/tensorflow/models/research/deeplab/train.py", line 472 in <module>
Segmentation fault (core dumped)

A backtrace with gdb shows: GDB Output

4 answers:

Answer 0 (score: 2)

I ran into the same problem as described above. I managed to solve it by doing two things:

  1. Make sure the names of your tfrecords (for me they are named train-00000-of-00010.tfrecord) match the split passed as --train_split="train".
  2. In data_generator.py, change the entry splits_to_sizes={'train_fine': 2975, ...} to splits_to_sizes={'train': 2975, ...}.

The trick is to use the same split name (for me, train) in the train.sh script that starts training, in data_generator.py, and in the names of the tfrecord files.

Answer 1 (score: 0)

I still don't know what caused the segmentation fault, but the solution for me was to register a new dataset for Cityscapes in data_generator.py.

Answer 2 (score: 0)

I recently ran into some problems with this experiment. Could you elaborate on the solution?

Answer 3 (score: 0)

My problem looked like yours, and I realized that --dataset_dir should point to the directory containing the tfrecord data, not to the cityscapes directory itself.

This is the code used to retrieve the data in data_generator.py:

def _get_all_files(self):
    """Gets all the files to read data from.

    Returns:
      A list of input files.
    """
    file_pattern = _FILE_PATTERN
    file_pattern = os.path.join(self.dataset_dir,
                                file_pattern % self.split_name)
    return tf.gfile.Glob(file_pattern)
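As a quick sanity check, this sketch reproduces how the glob pattern above is assembled (assuming _FILE_PATTERN is '%s-*', as in deeplab's data_generator.py; the example paths are hypothetical). The glob only finds shards if dataset_dir is the folder that directly contains them:

```python
import os

# Assumed to mirror _FILE_PATTERN in deeplab's data_generator.py.
_FILE_PATTERN = '%s-*'

def build_file_pattern(dataset_dir, split_name):
    """Build the glob pattern that _get_all_files would expand."""
    return os.path.join(dataset_dir, _FILE_PATTERN % split_name)
```

With --dataset_dir pointing at the tfrecord folder, the pattern ends in train-* and matches the shards; pointing it at the parent cityscapes directory produces a pattern that matches nothing, so training starts with an empty input queue.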