TensorFlow Inception batch classification gets slower with each iteration

Date: 2017-04-22 20:53:05

Tags: python image-processing tensorflow classification

I retrained the final layer of Inception on my own categories, following this tutorial from tensorflow.com. I am a beginner with TensorFlow, and my goal is to classify 30,000 pictures for a project at work.

After retraining the last layer on my own labels, I grabbed about 20 unseen photos and added them (their full file paths) to a pandas DataFrame. Next I fed each picture in the DataFrame to the image classifier and, after classification, added the corresponding top prediction label and its reliability score to two other columns in the same row.

To feed the pictures to the classifier I tried df.iterrows(), df.apply(function), and three separate hardcoded file paths (see the code below; I left them in as comments). However, I found that no matter how I fed the photos, classifying each photo took longer than the one before. Pic[0] started with a classification time of 2.2 seconds, but by Pic[19] this had grown to 23 seconds. Imagine how long this would take at pic 10,000, 20,000, and so on. In addition, CPU and memory usage crept up slowly while the files were being classified, though neither increased dramatically.

See my code below (most of it, save for the pandas and the classification-invocation parts, is taken from this example mentioned in the tensorflow tutorial above).

import os
import sys
import gc
import time

import numpy as np
import pandas as pd
import psutil
import tensorflow as tf


modelFullPath = '/Users/jaap/tf_files/retrained_graph.pb'
labelsFullPath = '/Users/jaap/tf_files/retrained_labels.txt'    

def create_graph():
    """Creates a graph from saved GraphDef file and returns a saver."""
    # Creates graph from saved graph_def.pb.
    with tf.gfile.FastGFile(modelFullPath, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')    


def run_inference_on_image(image):
    answer = None
    imagePath = image
    print(imagePath)
    if not tf.gfile.Exists(imagePath):
        tf.logging.fatal('File does not exist %s', imagePath)
        return answer    

    image_data = tf.gfile.FastGFile(imagePath, 'rb').read()    

    # Creates graph from saved GraphDef.
    create_graph()    

    with tf.Session() as sess:    

        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        predictions = np.squeeze(predictions)    

        top_k = predictions.argsort()[-5:][::-1]  # indices of the top 5 predictions
        f = open(labelsFullPath, 'r')
        lines = f.readlines()
        labels = [w.strip() for w in lines]
        # Note: the return inside the loop means only the single
        # highest-scoring prediction is printed and returned.
        for node_id in top_k:
            human_string = labels[node_id]
            score = predictions[node_id]
            print('%s (score = %.5f)' % (human_string, score))
            return human_string, score


werkmap = '/Users/jaap/tf_files/test/'
filelist = []
files_in_dir = os.listdir('/Users/jaap/tf_files/test/')
for f in files_in_dir:
    if f != '.DS_Store':
        filelist.append(werkmap+f)    

df = pd.DataFrame(filelist, index=None, columns=['Pics'])
df = df.drop_duplicates()
df['Class'] = ''
df['Reliability'] = ''    

print(df)    


#--------------------------------------------------------
for index, pic in df.iterrows():
    start = time.time()
    df['Class'][index] = run_inference_on_image(pic[0])
    stop = time.time()
    duration = stop - start
    print("duration = %s" % duration)
    print("cpu usage: %s" % psutil.cpu_percent())
    print("memory usage: %s " % psutil.virtual_memory())
    print("")

df['Class'] = df['Class'].astype(str)
df['Class'], df['Reliability'] = df['Class'].str.split(',', 1).str    

#-------------------------------------------------        

# df['Class'] = df['Pics'].apply(run_inference_on_image)
# df['Class'] = df['Class'].astype(str)
# df['Class'], df['Reliability'] = df['Class'].str.split(',', 1).str
# print(df)    

#--------------------------------------------------------------
# start = time.time()
# ja = run_inference_on_image('/Users/jaap/tf_files/test/12345_1.jpg')
# stop = time.time()
# duration = stop - start
# print("duration = %s" % duration)  

# start = time.time()
# ja = run_inference_on_image('/Users/jaap/tf_files/test/12345_2.jpg')
# stop = time.time()
# duration = stop - start
# print("duration = %s" % duration)    

# start = time.time()
# ja = run_inference_on_image('/Users/jaap/tf_files/test/12345_3.jpg')
# stop = time.time()
# duration = stop - start
# print("duration = %s" % duration)    

I appreciate any help!

1 Answer:

Answer 0 (score: 1)

It looks like you are creating the full graph for every single inference, and that is what makes it slow: each call to create_graph() imports another copy of the network into the default graph, so every iteration runs against a graph that keeps growing (which also explains the slowly creeping memory usage). Instead, build the graph and session once and reuse them, like this:

with tf.Graph().as_default():
  create_graph()
  with tf.Session() as sess:
    for index, pic in df.iterrows():
      start = time.time()
      df['Class'][index] = run_inference_on_image(pic[0], sess)
      stop = time.time()
      duration = stop - start
      print("duration = %s" % duration)
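
For this to work, run_inference_on_image has to accept the session as a parameter and must no longer call create_graph() itself. A minimal sketch of that change, keeping the tensor names and label file from the question (the helper below is my adaptation, not code from the tutorial):

def run_inference_on_image(image, sess):
    # Classify one image with a session whose graph was imported once,
    # outside this function.
    if not tf.gfile.Exists(image):
        tf.logging.fatal('File does not exist %s', image)
        return None

    image_data = tf.gfile.FastGFile(image, 'rb').read()

    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = np.squeeze(
        sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data}))

    with open(labelsFullPath, 'r') as f:
        labels = [line.strip() for line in f.readlines()]

    top_id = predictions.argsort()[-1]  # index of the highest score
    return labels[top_id], predictions[top_id]

With the graph imported once and the session reused across the loop, each iteration should take roughly constant time instead of growing.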