Why does this tensorflow code crash?

Time: 2017-03-13 15:18:01

Tags: python machine-learning tensorflow

I've built a toy model for image classification. The program is loosely structured like the cifar10 tutorial. Training starts fine, but eventually the program crashes. I've finalized the graph just in case ops were somehow being added somewhere, and in TensorBoard it looks great, but without fail it eventually freezes and forces a hard restart (or a long wait for an eventual restart). The way it exits makes it look like a GPU memory issue, but the model is small and should fit easily. If I allocate the full GPU memory (an extra 4 GB), it still crashes.
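As an aside on the memory angle: the session config in train() below caps TensorFlow at 75% of GPU memory via per_process_gpu_memory_fraction. The other usual knob for ruling out allocator problems is letting the allocator grow on demand instead; a minimal sketch of that variant (a debugging aid, not part of the original code):

    # Sketch: grow GPU memory on demand instead of reserving a fixed
    # fraction up front; handy when ruling out allocator issues.
    import tensorflow as tf

    gpu_options = tf.GPUOptions(allow_growth=True)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))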

The data are 256x256x3 images, with the labels stored alongside them in a tfrecords file. The training function code is as follows:

import os
import time

import tensorflow as tf

# ROOT, BATCH_SIZE and MIN_QUEUE_EXAMPLES are module-level constants
# defined elsewhere in the project.

def train():
    with tf.Graph().as_default():
        global_step = tf.contrib.framework.get_or_create_global_step()
        train_images_batch, train_labels_batch = distorted_inputs(batch_size=BATCH_SIZE)
        train_logits = inference(train_images_batch)
        train_batch_loss = loss(train_logits, train_labels_batch)
        train_op = training(train_batch_loss, global_step, 0.1)

        merged = tf.summary.merge_all()
        saver = tf.train.Saver(tf.global_variables())
        gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.75)
        sess_config = tf.ConfigProto(gpu_options=gpu_options)
        sess = tf.Session(config=sess_config)
        train_summary_writer = tf.summary.FileWriter(
            os.path.join(ROOT, 'logs', 'train'), sess.graph)
        init = tf.global_variables_initializer()

        sess.run(init)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)

        # Finalize the graph actually in use; tf.Graph().finalize() would
        # finalize a brand-new empty graph and catch nothing.
        sess.graph.finalize()
        for i in range(5540):
            start_time = time.time()
            summary, _, batch_loss = sess.run([merged, train_op, train_batch_loss])
            duration = time.time() - start_time
            train_summary_writer.add_summary(summary, i)
            if i % 10 == 0:
                msg = 'batch: {} loss: {:.6f} time: {:.8f} sec/batch'.format(
                    i, batch_loss, duration)
                print(msg)
        coord.request_stop()
        coord.join(threads)
        sess.close()
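For comparison, the cifar10 tutorial this program is modeled on drives the same kind of loop with tf.train.MonitoredTrainingSession, which owns the coordinator, queue runners, summary writing and checkpointing. A minimal sketch under the same assumptions (inference, distorted_inputs, loss, training, ROOT and BATCH_SIZE as defined in this question):

def train_monitored():
    with tf.Graph().as_default():
        global_step = tf.contrib.framework.get_or_create_global_step()
        images, labels = distorted_inputs(batch_size=BATCH_SIZE)
        batch_loss = loss(inference(images), labels)
        train_op = training(batch_loss, global_step, 0.1)
        hooks = [tf.train.StopAtStepHook(last_step=5540),  # same 5540 steps
                 tf.train.NanTensorHook(batch_loss)]       # abort on NaN loss
        with tf.train.MonitoredTrainingSession(
                checkpoint_dir=os.path.join(ROOT, 'logs', 'train'),
                hooks=hooks) as mon_sess:
            while not mon_sess.should_stop():
                mon_sess.run(train_op)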

The loss and training ops are cross-entropy and the Adam optimizer, respectively:

def loss(logits, labels):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='cross_entropy_per_example')
    xentropy_mean = tf.reduce_mean(xentropy, name='cross_entropy')
    tf.add_to_collection('losses', xentropy_mean)
    return xentropy_mean

def training(loss, global_step, learning_rate):
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.minimize(loss, global_step=global_step)
    return train_op
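
Note that loss() adds the cross-entropy mean to a 'losses' collection, but nothing here reads it back; in the cifar10 tutorial that collection exists so weight-decay terms can be folded into a single total loss. A sketch of that pattern, in case it's the intent (total_loss is a hypothetical helper, not part of the question's code):

def total_loss():
    # Sum everything registered in the 'losses' collection (cross-entropy
    # plus any weight-decay terms added elsewhere) into one scalar.
    return tf.add_n(tf.get_collection('losses'), name='total_loss')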

Batches are generated with:
def distorted_inputs(batch_size):
    filename_queue = tf.train.string_input_producer(
        ['data/train.tfrecords'], num_epochs=None)
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(serialized_example,
        features={'label': tf.FixedLenFeature([], tf.int64),
                  'image': tf.FixedLenFeature([], tf.string)})
    label = features['label']
    label = tf.cast(label, tf.int32)
    image = tf.decode_raw(features['image'], tf.uint8)
    image = (tf.cast(image, tf.float32) / 255) - 0.5
    image = tf.reshape(image, shape=[256, 256, 3])
    # data augmentation
    image = tf.image.random_flip_up_down(image)
    image = tf.image.random_flip_left_right(image)
    print('filling the queue with {} images ' \
          'before starting to train'.format(MIN_QUEUE_EXAMPLES))
    return _generate_batch(image, label, MIN_QUEUE_EXAMPLES, BATCH_SIZE)

def _generate_batch(image, label,
                    min_queue_examples=MIN_QUEUE_EXAMPLES,
                    batch_size=BATCH_SIZE):
    images_batch, labels_batch = tf.train.shuffle_batch(
        [image, label], batch_size=batch_size,
        num_threads=12, capacity=min_queue_examples + 3 * batch_size,
        min_after_dequeue=min_queue_examples)
    tf.summary.image('images', images_batch)
    return images_batch, labels_batch
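
The globals this pipeline assumes aren't shown in the question; for reference, the cifar10 tutorial derives its queue constants roughly like this (the values below are illustrative assumptions, not from the original code):

NUM_EXAMPLES_PER_EPOCH = 50000   # assumed size of data/train.tfrecords
MIN_FRACTION_OF_EXAMPLES_IN_QUEUE = 0.4
MIN_QUEUE_EXAMPLES = int(NUM_EXAMPLES_PER_EPOCH *
                         MIN_FRACTION_OF_EXAMPLES_IN_QUEUE)
BATCH_SIZE = 64                  # assumed; not stated in the question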

What am I missing?

1 Answer:

Answer 0 (score: 1)

So I solved this. Here's the solution, in case it's useful to someone else. TL;DR: it was a hardware problem.

Specifically, it was a PCIe bus error, the same error as in the most upvoted answer here. It was probably caused by message-signalled interrupts being incompatible with the PLX switches, as suggested here. The same thread also carried the fix: setting the kernel parameter pci=nommconf (which disables memory-mapped PCI configuration; MSI itself would be disabled with pci=nomsi).
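
For anyone applying this: kernel parameters like pci=nommconf are typically added through the bootloader configuration. A sketch assuming a GRUB-based distro (the existing flags shown are placeholders):

    # /etc/default/grub -- append the parameter, then run `sudo update-grub`
    # (or the equivalent grub2-mkconfig on some distros) and reboot.
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=nommconf"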

Of Tensorflow, Torch and Theano, tf was the only deep learning framework that triggered the issue. Why, I'm not sure.