Why is Keras (or Pillow) reading my .jpg files as MPO?

Posted: 2019-06-06 15:18:53

Tags: python tensorflow keras python-imaging-library

I'm getting a warning that I've never seen before when running my Keras model loop. The warning states:

C:\ProgramData\Anaconda3\envs\tensorflowenvironment\lib\site-packages\PIL\JpegImagePlugin.py:795: UserWarning: Image appears to be a malformed MPO file, it will be interpreted as a base JPEG file
  warnings.warn("Image appears to be a malformed MPO file, it will be "

This is a standard convolutional neural network. Has anyone seen this error and knows how to fix it? It happens when I use ImageDataGenerator.

The only place on the internet where I could find this mentioned doesn't quite fit my situation: https://github.com/python-pillow/Pillow/issues/1138
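For context, Pillow decides an image's format by sniffing the file's bytes, not by its `.jpg` extension, so a quick way to narrow this down is to check which format Pillow actually assigns to each file and, if needed, re-encode the offending files as plain baseline JPEGs. This is a minimal sketch, assuming Pillow is installed; `detected_format` and `resave_as_jpeg` are hypothetical helper names, not part of any library:

```python
# Sketch: inspect the format Pillow assigns to a file, and re-save
# it as a plain JPEG to strip any multi-picture (MPO) structure.
import io
from PIL import Image

def detected_format(fp):
    """Return the format string Pillow assigns to an image ('JPEG', 'MPO', ...)."""
    with Image.open(fp) as img:
        return img.format

def resave_as_jpeg(src_fp, dst_fp):
    """Re-encode an image as a single-frame baseline JPEG."""
    with Image.open(src_fp) as img:
        img.convert("RGB").save(dst_fp, format="JPEG", quality=95)

# Round-trip a small in-memory image to demonstrate the helpers.
buf = io.BytesIO()
Image.new("RGB", (8, 8), "red").save(buf, format="JPEG")
buf.seek(0)
print(detected_format(buf))  # → JPEG
```

Running `detected_format` over the training directory would show whether some files really carry MPO markers; re-saving those files before handing the directory to `flow_from_directory` should silence the warning at the source.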

Below is the model code:

    # imports assumed by the snippet below
    import time
    from keras import backend as K
    from keras.models import Sequential
    from keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                              Dropout, Flatten, Dense)
    from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau
    from keras.preprocessing.image import ImageDataGenerator
    import keras_metrics as km  # provides binary_precision / binary_recall

    epochs = 30
    img_size = 125

    # this is the augmentation configuration we will use for training
    train_datagen = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

    # this is the augmentation configuration we will use for testing:
    # only rescaling
    test_datagen = ImageDataGenerator(rescale=1. / 255)

    train_generator = train_datagen.flow_from_directory(
        train_dir,
        target_size=(img_size, img_size),
        batch_size=62764,
        shuffle = True,
        #color_mode = 'grayscale',
        class_mode='binary')

    validation_generator = test_datagen.flow_from_directory(
        val_dir,
        target_size=(img_size, img_size),
        batch_size=39227,
        shuffle = True,
        #color_mode = 'grayscale',
        class_mode='binary')

    test_generator = test_datagen.flow_from_directory(
        test_dir,
        target_size=(img_size, img_size),
        batch_size=39227,
        shuffle = True,
        #color_mode = 'grayscale',
        class_mode='binary')


    dense_layers = [2, 3]
    layer_sizes = [64, 128, 256]
    con_layers = [2, 3, 4]
    con_layer_sizes = [32, 64, 128]

    for dense_layer in dense_layers:
        for layer_size in layer_sizes:
            for conv_layer in con_layers:
                for con_layer_size in con_layer_sizes:

                    img_size = 125

                    if K.image_data_format() == 'channels_first':
                        input_shape = (3, img_size, img_size)
                    else:
                        input_shape = (img_size, img_size, 3)

                    batch_size = 62764

                    #K.input_shape = (img_size, img_size)

                    NAME = "{}-conv-{}-con_layer_sizes-{}-nodes-{}-dense-{}".format(conv_layer, con_layer_size, layer_size, dense_layer, int(time.time()))
                    print(NAME)

                    #call backs
                    tensorboard = TensorBoard(log_dir= 'logs/{}'.format(NAME))

                    mcp = ModelCheckpoint(filepath='C:\\Users\\jordan.howell\\models\\'+NAME+'_model.h5',monitor="val_loss"
                                          , save_best_only=True, save_weights_only=False)

                    reduce_learning_rate = ReduceLROnPlateau(monitor='val_loss', factor=0.3,patience=2,cooldown=2
                                                             , min_lr=0.00001, verbose=1)



                    #start model build
                    model = Sequential()
                    model.add(Conv2D(con_layer_size, (3, 3), activation="relu", padding = 'same', input_shape= input_shape))
                    model.add(MaxPooling2D(pool_size = (2, 2)))
                    model.add(BatchNormalization())
                    model.add(Dropout(0.15))

                    for l in range(conv_layer):
                        #Convolution
                        model.add(Conv2D(con_layer_size, (3, 3), activation="relu", padding = 'same'))
                        model.add(MaxPooling2D(pool_size = (2, 2)))
                        model.add(BatchNormalization())
                        model.add(Dropout(0.15))                


                    #model.add(GlobalAveragePooling2D())
                    # Flatten the layer
                    model.add(Flatten())

                    for l in range(dense_layer):
                        model.add(Dense(layer_size, activation = 'relu'))

                    model.add(Dense(activation = 'sigmoid', units = 1))

                    model.compile(loss ='binary_crossentropy', optimizer = 'adam'
                                  , metrics=[km.binary_precision(), km.binary_recall()])

                    model.fit_generator(train_generator, steps_per_epoch =  3138160// batch_size
                                        ,validation_data=validation_generator, validation_steps=2
                                        , epochs = 30, callbacks = [reduce_learning_rate, tensorboard, mcp])

0 Answers