"UserWarning: Possibly corrupt EXIF data" while classifying images

Date: 2017-09-15 06:22:38

Tags: python image tensorflow classification

Below is my multi-image classification code. I am getting an error, which I think is caused by a mismatch between the image loading step and other parts of the code.

The error messages start where the code ends. Can anyone spot the problem?

#importing necessary packages
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from PIL import Image
import tflearn

#for writing text files and listing/shuffling the image files
import glob
import os
import random
import math

#reading images from a text file
from tflearn.data_utils import image_preloader


IMAGE_FOLDER = 'C:/Users/kdeepshi/Desktop/PyforE/Face-Detection/Train'
TRAIN_DATA = 'C:/Users/kdeepshi/Desktop/PyforE/Face-Detection/training_data.txt'
TEST_DATA = 'C:/Users/kdeepshi/Desktop/PyforE/Face-Detection/test_data.txt'
VALIDATION_DATA = 'C:/Users/kdeepshi/Desktop/PyforE/Face-Detection/validation_data.txt'
train_proportion=0.7
test_proportion=0.2
validation_proportion=0.1


#read the image directories
filenames_image = os.listdir(IMAGE_FOLDER)
#shuffling the data is important otherwise the model will be fed with a single class data for a long time and
#network will not learn properly
random.shuffle(filenames_image)


#total number of images
total=len(filenames_image)
##  *****training data********
fr = open(TRAIN_DATA, 'w')
train_files=filenames_image[0: int(train_proportion*total)]
for filename in train_files:
    if filename[0:4] == 'Mark':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 0\n')
    elif filename[0:5] == 'lucas':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 1\n')
    elif filename[0:3] == 'Ann':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 2\n')
    elif filename[0:5] == 'Henry':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 3\n')
    elif filename[0:5] == 'Hanna':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 4\n')
    elif filename[0:4] == 'Jack':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 5\n')
    elif filename[0:5] == 'Harry':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 6\n')
    elif filename[0:3] == 'Lui':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 7\n')
    elif filename[0:6] == 'Karlos':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 8\n')
    elif filename[0:4] == 'Guan':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 9\n')

fr.close()
##  *****testing data********
fr = open(TEST_DATA, 'w')
test_files=filenames_image[int(math.ceil(train_proportion*total)):int(math.ceil((train_proportion+test_proportion)*total))]
for filename in test_files:
    if filename[0:4] == 'Mark':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 0\n')
    elif filename[0:5] == 'lucas':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 1\n')
    elif filename[0:3] == 'Ann':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 2\n')
    elif filename[0:5] == 'Henry':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 3\n')
    elif filename[0:5] == 'Hanna':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 4\n')
    elif filename[0:4] == 'Jack':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 5\n')
    elif filename[0:5] == 'Harry':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 6\n')
    elif filename[0:3] == 'Lui':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 7\n')
    elif filename[0:6] == 'Karlos':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 8\n')
    elif filename[0:4] == 'Guan':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 9\n')
fr.close()

##  *****validation data********
fr = open(VALIDATION_DATA, 'w')
valid_files=filenames_image[int(math.ceil((train_proportion+test_proportion)*total)):total]
for filename in valid_files:
    if filename[0:4] == 'Mark':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 0\n')
    elif filename[0:5] == 'lucas':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 1\n')
    elif filename[0:3] == 'Ann':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 2\n')
    elif filename[0:5] == 'Henry':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 3\n')
    elif filename[0:5] == 'Hanna':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 4\n')
    elif filename[0:4] == 'Jack':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 5\n')
    elif filename[0:5] == 'Harry':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 6\n')
    elif filename[0:3] == 'Lui':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 7\n')
    elif filename[0:6] == 'Karlos':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 8\n')
    elif filename[0:4] == 'Guan':
        fr.write(IMAGE_FOLDER + '/'+ filename + ' 9\n')
fr.close()

#Importing data
X_train, Y_train = image_preloader(TRAIN_DATA, image_shape=(56,56),mode='file', categorical_labels=True,normalize=True)
X_test, Y_test = image_preloader(TEST_DATA, image_shape=(56,56),mode='file', categorical_labels=True,normalize=True)
X_val, Y_val = image_preloader(VALIDATION_DATA, image_shape=(56,56),mode='file', categorical_labels=True,normalize=True)



print ("Dataset")
print ("Number of training images {}".format(len(X_train)))
print ("Number of testing images {}".format(len(X_test)))
print ("Number of validation images {}".format(len(X_val)))
print ("Shape of an image {}" .format(X_train[1].shape))
print ("Shape of label:{} ,number of classes: {}".format(Y_train[1].shape,len(Y_train[1])))


#Sample Image
plt.imshow(X_train[1])
plt.axis('off')
plt.title('Sample image with label {}'.format(Y_train[1]))
plt.show()
print(type(X_test))


#input image
x=tf.placeholder(tf.float32,shape=[None,56,56,3] , name='input_image')
#input class


y_=tf.placeholder(tf.float32,shape=[None, 10] , name='input_class')



input_layer=x
print("Hiiiiiiii No error till this point")


#convolutional layer 1 --convolution+RELU activation
conv_layer1=tflearn.layers.conv.conv_2d(input_layer, nb_filter=64, filter_size=5, strides=[1,1,1,1],
                                        padding='same', activation='relu', regularizer="L2", name='conv_layer_1')

#max pooling layer (kernel size 10)
out_layer1=tflearn.layers.conv.max_pool_2d(conv_layer1, 10)


#second convolutional layer
conv_layer2=tflearn.layers.conv.conv_2d(out_layer1, nb_filter=128, filter_size=5, strides=[1,1,1,1],
                                        padding='same', activation='relu',  regularizer="L2", name='conv_layer_2')
out_layer2=tflearn.layers.conv.max_pool_2d(conv_layer2, 10)
# third convolutional layer
conv_layer3=tflearn.layers.conv.conv_2d(out_layer2, nb_filter=128, filter_size=5, strides=[1,1,1,1],
                                        padding='same', activation='relu',  regularizer="L2", name='conv_layer_3')
out_layer3=tflearn.layers.conv.max_pool_2d(conv_layer3, 10)

#fully connected layer1
fcl= tflearn.layers.core.fully_connected(out_layer3, 4096, activation='relu' , name='FCL-1')
fcl_dropout_1 = tflearn.layers.core.dropout(fcl, 0.8)
#fully connected layer2
fc2= tflearn.layers.core.fully_connected(fcl_dropout_1, 4096, activation='relu' , name='FCL-2')
fcl_dropout_2 = tflearn.layers.core.dropout(fc2, 0.8)
#softmax layer output
y_predicted = tflearn.layers.core.fully_connected(fcl_dropout_2, 10, activation='softmax', name='output')
#loss function
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_predicted+np.exp(-10)), reduction_indices=[1]))
#optimiser -
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
#calculating accuracy of our model
correct_prediction = tf.equal(tf.argmax(y_predicted,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# session parameters
sess = tf.InteractiveSession()
#initialising variables
init = tf.global_variables_initializer()
sess.run(init)
saver = tf.train.Saver()
save_path="C:/Users/kdeepshi/Desktop/PyforE/Face-Detection/mark2.ckpt"

g = tf.get_default_graph()



# list every operation in our graph
[op.name for op in g.get_operations()]
epoch=5000

batch_size=20
previous_batch=0
for i in range(epoch):
    #batch wise training

    if previous_batch >= len(X_train) :
            previous_batch=0
    current_batch=previous_batch+batch_size
    x_input=X_train[previous_batch:current_batch]
    x_images=np.array(x_input)
    x_images=np.reshape(x_images,[batch_size,56,56,3])
    y_input=Y_train[previous_batch:current_batch]
    y_label=np.reshape(y_input,[batch_size,10])
    previous_batch=previous_batch+batch_size
    _,loss=sess.run([train_step, cross_entropy], feed_dict={x: x_images,y_: y_label})
    if i%500==0:
        n=50 #number of test samples
        X_test=np.array(X_test)
        x_test_images=np.reshape(X_test[0:n],[n,56,56,3])
        y_test_labels=np.reshape(Y_test[0:n],[n,10])
        Accuracy=sess.run(accuracy,feed_dict={x: x_test_images ,y_: y_test_labels})
        print("Iteration no :{} , Accuracy:{} , Loss : {}" .format(i,Accuracy,loss))
        saver.save(sess, save_path, global_step = i)
    elif i % 100 ==0:
        print("Iteration no :{} Loss : {}" .format(i,loss))
x_input=X_val
x_images=np.reshape(x_input,[len(X_val),56,56,3])
y_input=Y_val
y_label=np.reshape(y_input,[len(Y_val),10])

Accuracy_validation=sess.run(accuracy,feed_dict={x: x_images ,y_: y_label})
Accuracy_validation=round(Accuracy_validation*100,2)
print("Accuracy in the validation dataset: {} %".format(Accuracy_validation))


Test_FOLDER = 'C:/Users/kdeepshi/Desktop/PyforE/Face-Detection/Test'


filenames_image = os.listdir(Test_FOLDER)
total=len(filenames_image)
print(total)

test_files=filenames_image[0: int(total)]
for filename in test_files:

    marty=Image.open(Test_FOLDER+'/'+filename)
    marty_resize=marty.resize((56,56),Image.ANTIALIAS)
    marty_resize=np.array(marty_resize)
    marty_test=marty_resize/np.max(marty_resize).astype(float)
    marty_test=np.reshape(marty_test,[1,56,56,3])
    c=sess.run(y_predicted, feed_dict={x: marty_test})
    d= np.argmax(c)
#test your own images
#test_image=Image.open('/path to file')
#test_image= process_img(test_image)
#predicted_array= sess.run(y_predicted, feed_dict={x: test_image})
    #predicted_class= np.argmax(predicted_array)
    if d==0:

        print("This is Mark\n")
    elif d==1:

        print("This is lucas\n")
    elif d==2:
        print("This is Ann")
    elif d==3:

        print("This is Henrry\n")
    elif d==4:

        print("This is Hanna\n")
    elif d==5:
        print("This is Jack")

    elif d==6:

        print("This is Harry\n")
    elif d==7:
        print("This is Lui")
    elif d==8:

        print("This is Karlos\n")
    elif d==9:

        print("This is guan\n")

Here is the error output:

C:\Users\kdeepshi\Desktop\PyforE\Face-Detection>multi.py

curses is not supported on this machine (please install/reinstall curses for an optimal experience)

Dataset
Number of training images 22
Number of testing images 6
Number of validation images 3

C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 12 bytes but only got 0. Skipping tag 270
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 6 bytes but only got 0. Skipping tag 271
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 8 bytes but only got 0. Skipping tag 272
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 64 bytes but only got 0. Skipping tag 282
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 64 bytes but only got 0. Skipping tag 283
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 20 bytes but only got 0. Skipping tag 306
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 24 bytes but only got 0. Skipping tag 529
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 24 bytes but only got 0. Skipping tag 532
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:692: UserWarning: Possibly corrupt EXIF data. Expecting to read 40 bytes but only got 0. Skipping tag 33432
  " Skipping tag %s" % (size, len(data), tag))
C:\ProgramData\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:709: UserWarning: Corrupt EXIF data. Expecting to read 2 bytes but only got 0.
  warnings.warn(str(msg))

Shape of an image (56, 56, 3)
Shape of label:(10,) ,number of classes: 10

1 Answer (score: 1):

There is a problem reading the EXIF data from your images in the PIL module. I am inclined to think this is a bug in PIL rather than actual image corruption. Since deep learning does not need this data, you can simply clean it out of your files. To do that, download ExifTool and run the following command:

exiftool -r -all= -ext JPEG D:\datasets\ImageNet\train

-r: recurse into subdirectories; -all= : clear all EXIF data; -ext: file extension to process

This works on both Windows and Linux.
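
If you would rather do the same thing from Python, here is a minimal sketch. It assumes Pillow (the PIL fork already imported in the question) and JPEG inputs; the folder path is taken from the question and should be adjusted to wherever your images live. It simply re-saves each image from its pixel data, which drops the EXIF block along with all other metadata:

import os
from PIL import Image

# assumed path, taken from the question; change to your own image folder
IMAGE_FOLDER = 'C:/Users/kdeepshi/Desktop/PyforE/Face-Detection/Train'

for name in os.listdir(IMAGE_FOLDER):
    if not name.lower().endswith(('.jpg', '.jpeg')):
        continue  # only touch JPEG files
    path = os.path.join(IMAGE_FOLDER, name)
    with Image.open(path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixel data only, no metadata
    clean.save(path)  # overwrite the original without the EXIF block

Like the ExifTool command, this overwrites the files in place, so run it on a copy of the dataset first.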