TensorFlow: InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got empty file

Date: 2018-03-12 02:49:39

Tags: python python-3.x tensorflow

I am a beginner. While working through the TensorFlow Programmer's Guide, I tried to define a dataset_input_fn function for an Estimator, and I got a strange error:

  

INFO:tensorflow:Using default config.

INFO:tensorflow:Using config: {'_model_dir': '/model', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x...>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}

INFO:tensorflow:Calling model_fn.

INFO:tensorflow:Done calling model_fn.

INFO:tensorflow:Create CheckpointSaverHook.

INFO:tensorflow:Graph was finalized.

2018-03-12 10:22:14.699465: I C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

INFO:tensorflow:Running local_init_op.

INFO:tensorflow:Done running local_init_op.

2018-03-12 10:22:15.913858: W C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1202] OP_REQUIRES failed at iterator_ops.cc:870 : Invalid argument: Expected image (JPEG, PNG, or GIF), got empty file
  [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=0, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false](...)]]

Traceback (most recent call last):
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1361, in _do_call
    return fn(*args)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _run_fn
    target_list, status, run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got empty file
  [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=0, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false](...)]]
  [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,28,28,1], [?]], output_types=[DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](...)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\pydev\pydev_run_in_console.py", line 53, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "F:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "E:/Learning_process/semester2018_spring/deep_learning/meituan/MNIST/demo_cnn_mnist_meituan.py", line 201, in <module>
    tf.app.run(main)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "E:/Learning_process/semester2018_spring/deep_learning/meituan/MNIST/demo_cnn_mnist_meituan.py", line 195, in main
    steps=50)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 352, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 891, in _train_model
    _, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 546, in run
    run_metadata=run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1022, in run
    run_metadata=run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1113, in run
    raise six.reraise(*original_exc_info)
  File "F:\Anaconda3\lib\site-packages\six.py", line 693, in reraise
    raise value
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1098, in run
    return self._sess.run(*args, **kwargs)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1170, in run
    run_metadata=run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 950, in run
    return self._sess.run(*args, **kwargs)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
    run_metadata_ptr)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1137, in _run
    feed_dict_tensor, options, run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1355, in _do_run
    options, run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1374, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got empty file
  [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=0, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false](...)]]
  [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,28,28,1], [?]], output_types=[DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](...)]]

PyDev console: using IPython 6.1.0

The code is as follows:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# Imports
import numpy as np
import os
import tensorflow as tf
import argparse

parser = argparse.ArgumentParser()
# parser.add_argument("--batch_size", default=100, type=int, help='batch_size')
# parser.add_argument("--train_steps", default=1000, type=int, help="train_steps")
parser.add_argument("--model_dir", default='/model', type=str, help='model_dir')
parser.add_argument("--data_dir", default='', type=str, help="data_dir")


def cnn_model(features, labels, mode):
    """

    :param features:
    :param labels:
    :param mode:
    :return:
    """

    # input
    input_layer = tf.reshape(features['image'], [-1, 28, 28, 1])

    conv1 = tf.layers.conv2d(inputs=input_layer,
                             filters=32,
                             kernel_size=[5, 5],
                             padding='same',
                             activation=tf.nn.relu)

    pool1 = tf.layers.max_pooling2d(inputs=conv1,
                                    pool_size=[2, 2],
                                    strides=2)

    conv2 = tf.layers.conv2d(inputs=pool1,
                             filters=64,
                             kernel_size=[5, 5],
                             padding='same',
                             activation=tf.nn.relu)

    pool2 = tf.layers.max_pooling2d(inputs=conv2,
                                    pool_size=[2, 2],
                                    strides=2)

    pool_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])

    dense = tf.layers.dense(inputs=pool_flat,
                            units=1024,
                            activation=tf.nn.relu)

    dropout = tf.layers.dropout(inputs=dense,
                                rate=0.4,
                                training=mode == tf.estimator.ModeKeys.TRAIN)

    logits = tf.layers.dense(inputs=dropout,
                             units=10,
                             activation=None)

    predictions = {
        'class_ids': tf.argmax(logits, 1),
        'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
    }
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode,
                                          predictions=predictions)

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    if mode == tf.estimator.ModeKeys.EVAL:
        eval_metric_ops = {
            'accuracy': tf.metrics.accuracy(labels=labels,
                                            predictions=tf.argmax(logits, 1))
        }
        return tf.estimator.EstimatorSpec(mode,
                                          loss=loss,
                                          eval_metric_ops=eval_metric_ops)

    # train
    assert mode == tf.estimator.ModeKeys.TRAIN
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(loss=loss,
                                  global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode,
                                      loss=loss,
                                      train_op=train_op)


def dataset_input_fn(filenames):
    """

    :param filenames: tfrecord file's path
    :return:
    """
    # filenames = ['train.tfrecords', 'test.tfrecords']
    dataset = tf.data.TFRecordDataset(filenames)

    def _parse(record):
        features = {"image": tf.FixedLenFeature((), tf.string, default_value=""),
                    "label": tf.FixedLenFeature((), tf.int64, default_value=0)}
        parsed = tf.parse_single_example(record, features)

        image = tf.image.decode_jpeg(parsed["image"])
        image = tf.cast(image, tf.float32)
        # image = tf.image.convert_image_dtype(image, tf.float32)
        image = tf.reshape(image, [28, 28, 1])
        # image = tf.cast(image, tf.float32)
        # image = tf.decode_raw(features['image'], tf.float64)
        label = tf.cast(parsed['label'], tf.int32)
        return {'image': image}, label

    dataset = dataset.map(_parse)
    dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.batch(100)
    dataset = dataset.repeat(1)

    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()
    # features = tf.cast(features, tf.float32)
    return features, labels


def main(argv):
    """

    :param argv:
    :return:
    """
    args = parser.parse_args(argv[1:])
    train_path = ['train.tfrecords']
    test_path = ['test.tfrecords']

    print("\ndata has been loaded as 'train_x' and 'train_y'\n")

    classifier = tf.estimator.Estimator(model_fn=cnn_model,
                                        model_dir=args.model_dir)

    classifier.train(
        input_fn=lambda: dataset_input_fn(train_path),
        steps=50)

    print("\ntraining process is done\n")


if __name__ == '__main__':
    tf.app.run(main)

2 Answers:

Answer 0 (score: 1)

The error seems to be that some of your examples contain no actual image.

Basically, when image = tf.image.decode_jpeg(parsed["image"]) runs, parsed["image"] is an empty tensor for those examples.
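
To confirm this, you could scan the TFRecord file before training and report any record whose image feature is empty. This is only a diagnostic sketch (not part of the original code), assuming the 'train.tfrecords' path and the 'image' feature key used in the question:

import tensorflow as tf

def find_empty_images(tfrecord_path):
    """Print the index of every record whose 'image' bytes feature is empty."""
    for i, serialized in enumerate(tf.python_io.tf_record_iterator(tfrecord_path)):
        example = tf.train.Example()
        example.ParseFromString(serialized)
        value = example.features.feature['image'].bytes_list.value
        if not value or len(value[0]) == 0:
            print('record %d has an empty image feature' % i)

find_empty_images('train.tfrecords')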

Answer 1 (score: 0)

At least one of your examples is not an image.

You can check the type of each image file before feeding it to the neural network.

I am using the imghdr library:

Code:

import imghdr
import os

image_dir = "images_path"

# Keep only the files that imghdr recognizes as PNG images; passing the full
# path lets imghdr open the file, and building a new list avoids removing
# items from a list while iterating over it.
l_FileNames = [f for f in os.listdir(image_dir)
               if imghdr.what(os.path.join(image_dir, f)) == "png"]