Corrupt JPEG data: 2 extraneous bytes before marker 0xe2

Date: 2018-10-08 06:20:05

Tags: python tensorflow jpeg opencv3.0

I get this error when I try to train on my image set. I have looked for a solution online, but nothing I found has helped me fix it.

  

D:\Python\python.exe C:/Nutsbolts/train.py
D:\Python\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Going to read training images
Now going to read nuts files (Index: 0)
Now going to read screws files (Index: 1)
Complete reading input data. Will now print a snippet of it
Number of files in Training-set: 1688
Number of files in Validation-set: 562
2018-10-08 11:29:47.598119: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:From C:/Nutsbolts/train.py:44: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
WARNING:tensorflow:From C:/Nutsbolts/train.py:152: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
Invalid SOS parameters for sequential JPEG
Corrupt JPEG data: 2 extraneous bytes before marker 0xe2
libpng warning: iCCP: profile 'icc': 'CMYK': invalid ICC profile color space

Process finished with exit code 0
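For context, the "Invalid SOS parameters", "Corrupt JPEG data" and "libpng warning: iCCP" lines are warnings printed by the libjpeg/libpng decoders while OpenCV reads slightly malformed image files; they do not abort the run (the process still finishes with exit code 0). Below is a minimal diagnostic sketch, not part of the original train.py, for narrowing down which files produce them: it decodes the images one at a time and prints each path first, so a decoder warning on the console can be matched to the file being read. The folder name in the usage comment is an assumption; the class subfolders ('nuts', 'screws') come from the log above.

# Diagnostic sketch (not part of train.py): decode images one by one and print
# each path before reading it, so any libjpeg/libpng warning can be matched to
# the file that triggered it.
import glob
import os

import cv2

def locate_noisy_images(train_path, classes):
    for fields in classes:
        for fl in sorted(glob.glob(os.path.join(train_path, fields, '*g'))):
            print('decoding: {}'.format(fl))
            image = cv2.imread(fl)  # returns None if the file cannot be decoded at all
            if image is None:
                print('  -> unreadable: {}'.format(fl))

# Example usage (the folder name is an assumption):
# locate_noisy_images('training_data', ['nuts', 'screws'])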


#Dataset program
import cv2
import os
import glob
import numpy as np
from sklearn.utils import shuffle

# Define a function to load the training images

def load_train(train_path, image_size, classes):
    images = []
    labels = []
    img_names = []
    cls = []
    print('Going to read training images')
    for fields in classes:
        index = classes.index(fields)
        print('Now going to read {} files (Index: {})'.format(fields, index))
        path = os.path.join(train_path, fields, '*g')
        files = glob.glob(path)
        for fl in files:
            # Read the image
            image = cv2.imread(fl)
            # Resize the image
            image = cv2.resize(image, (image_size, image_size), interpolation=cv2.INTER_LINEAR)
            # Convert the image to float and scale pixel values to [0, 1]
            image = image.astype(np.float32)
            image = np.multiply(image, 1.0 / 255.0)
            images.append(image)
            # Build a one-hot label vector for this class
            label = np.zeros(len(classes))
            label[index] = 1.0
            labels.append(label)
            flbase = os.path.basename(fl)
            img_names.append(flbase)
            cls.append(fields)
    images = np.array(images)
    labels = np.array(labels)
    img_names = np.array(img_names)
    cls = np.array(cls)

    return images, labels, img_names, cls
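One detail worth noting in load_train: cv2.imread does not raise on a file it cannot decode, it returns None, and it is the subsequent cv2.resize call that then fails. The helper below is an optional, hypothetical variant of the read/resize step with that guard added; it is a sketch, not part of the original code, and relies on the cv2/numpy imports at the top of this file.

# Optional variant of the read/resize step (an assumption, not in the original):
# skip files that OpenCV cannot decode instead of passing None to cv2.resize.
def read_and_resize(fl, image_size):
    image = cv2.imread(fl)
    if image is None:
        print('Skipping unreadable file: {}'.format(fl))
        return None
    image = cv2.resize(image, (image_size, image_size), interpolation=cv2.INTER_LINEAR)
    return image.astype(np.float32) / 255.0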
# Define a class DataSet
class DataSet(object):
    def __init__(self, images, labels, img_names, cls):
        self._num_examples = images.shape[0]
        self._images = images
        self._labels = labels
        self._img_names = img_names
        self._cls = cls
        self._epochs_done = 0
        self._index_in_epoch = 0

    # Define various properties of the images
    @property
    def images(self):
        return self._images

    @property
    def labels(self):
        return self._labels

    @property
    def img_names(self):
        return self._img_names

    @property
    def cls(self):
        return self._cls

    @property
    def num_examples(self):
        return self._num_examples

    @property
    def epochs_done(self):
        return self._epochs_done

    def next_batch(self, batch_size):
        # Return the next batch of examples from this data set.
        start = self._index_in_epoch
        self._index_in_epoch += batch_size
        if self._index_in_epoch > self._num_examples:
            # After each full pass over the data, update the epoch counter
            # and start again from the beginning.
            self._epochs_done += 1
            start = 0
            self._index_in_epoch = batch_size
            assert batch_size <= self._num_examples
        end = self._index_in_epoch

        return self._images[start:end], self._labels[start:end], self._img_names[start:end], self._cls[start:end]
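For reference, a small usage sketch of next_batch follows; the step count and batch size are arbitrary assumptions, and data is expected to be the object returned by read_train_sets below. When a batch request runs past the end of the data, next_batch restarts from index 0 and increments epochs_done.

# Usage sketch (batch size and step count are assumptions, not from the original post).
def run_training_steps(data, num_steps=100, batch_size=32):
    for _ in range(num_steps):
        x_batch, y_true_batch, _, _ = data.train.next_batch(batch_size)
        # x_batch and y_true_batch would be fed into the TensorFlow graph here;
        # data.train.epochs_done reports how many full passes have been consumed.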

# Read the training set and split it into training and validation parts

def read_train_sets(train_path, image_size, classes, validation_size):
    class DataSets(object):
        pass
    data_sets = DataSets()

    images, labels, img_names, cls = load_train(train_path, image_size, classes)
    images, labels, img_names, cls = shuffle(images, labels, img_names, cls)

    # A float validation_size is interpreted as a fraction of the data
    if isinstance(validation_size, float):
        validation_size = int(validation_size * images.shape[0])

    validation_images = images[:validation_size]
    validation_labels = labels[:validation_size]
    validation_img_names = img_names[:validation_size]
    validation_cls = cls[:validation_size]

    train_images = images[validation_size:]
    train_labels = labels[validation_size:]
    train_img_names = img_names[validation_size:]
    train_cls = cls[validation_size:]

    data_sets.train = DataSet(train_images, train_labels, train_img_names, train_cls)
    data_sets.valid = DataSet(validation_images, validation_labels, validation_img_names, validation_cls)

    return data_sets
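Finally, a short usage sketch of read_train_sets. The folder name and image size are assumptions; the class names match the log above. A float validation_size is treated as a fraction, so 0.25 here would reserve a quarter of the shuffled images for data.valid and leave the rest in data.train, which would be consistent with the 562 / 1688 split shown in the log.

# Usage sketch (the folder name and image size are assumptions).
if __name__ == '__main__':
    data = read_train_sets('training_data', image_size=64,
                           classes=['nuts', 'screws'], validation_size=0.25)
    print('Number of files in Training-set: {}'.format(len(data.train.labels)))
    print('Number of files in Validation-set: {}'.format(len(data.valid.labels)))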
