Parallelizing a TensorFlow object detection model

Time: 2018-07-16 15:03:44

Tags: tensorflow parallel-processing object-detection

I am trying to run a TensorFlow object detection model on multiple images in parallel. Running detection sequentially works fine, but when I try to parallelize it, it freezes my AWS session. Here is a snippet of the code I wrote.

First, I made sure my TF session runs on only one CPU, so that the workers do not fight over resources when I parallelize:

config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1,
                        allow_soft_placement=True,
                        device_count={'CPU': 1})

Here is my detection code:

def detect_image(image_path):
    global scores, boxes, classes
    # with tf.device('/cpu:0'):

    with detection_graph.as_default():
        with tf.Session(config=config, graph=detection_graph) as sess:
            # Define input and output tensors for detection_graph
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            # Each box represents a part of the image where a particular object was detected.
            detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            # Each score represents the level of confidence for each of the objects.
            # The score is shown on the result image, together with the class label.
            detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
            detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')

            image = Image.open(image_path)

            # The array-based representation of the image will be used later to prepare
            # the result image with boxes and labels on it.
            image_np = load_image_into_numpy_array(image)
            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            # print(image_np_expanded.shape)
            # Actual detection (must run while the session is still open).
            (boxes, scores, classes, num) = sess.run(
                [detection_boxes, detection_scores, detection_classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})

    ## Now we use the boxes to create a new cropped image
    x_min = int(round(boxes[0][0][1], 2) * 640)  # 512
    x_max = int(round(boxes[0][0][3], 2) * 640)  # 512
    y_min = int(round(boxes[0][0][0], 2) * 640)  # 384
    y_max = int(round(boxes[0][0][2], 2) * 640)  # 384

    x_min2 = int(round(boxes[0][0][1] - 0.05, 2) * 640)  # 0.09
    x_max2 = int(round(boxes[0][0][3] + 0.05, 2) * 640)  # 0.12
    y_min2 = int(round(boxes[0][0][0] - 0.05, 2) * 640)  # 0.09
    y_max2 = int(round(boxes[0][0][2] + 0.05, 2) * 640)  # 0.12

    imagetoc = image_np

    # NOTE: it's img[y:y + h, x:x + w] and *not* img[x:x + w, y:y + h]
    crop_img = imagetoc[y_min:y_max, x_min:x_max]
    crop_img2 = imagetoc[y_min2:y_max2, x_min2:x_max2]

    try:
        val = crop_img2
    except:
        val = crop_img

    return val, scores
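The y-before-x indexing note in the function above can be illustrated with a tiny NumPy sketch (a toy array standing in for the detection output, not the real image):

```python
import numpy as np

# NumPy (and OpenCV) images index as img[row, col], i.e. img[y, x],
# so a crop is img[y_min:y_max, x_min:x_max].
img = np.arange(24).reshape(4, 6)  # toy 4x6 "image"
crop = img[1:3, 2:5]               # rows 1-2 (y), columns 2-4 (x)
print(crop.shape)                  # (2, 3)
```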

Then I tried to parallelize it:

def read_img_str(df):

    image_path = df['URL_str']
    with contextlib.closing(urllib.urlopen(image_path)) as x:
        file = cStringIO.StringIO(x.read())
        try:
            # detect_image returns (cropped_image, scores); resize only the image
            val, scores = detect_image(file)
            val = cv2.resize(val / 255., (300, 300))
        except:
            val = 'NO IMAGE'
    return val

num_partitions = 2  # number of partitions to split the dataframe into
num_cores = multiprocessing.cpu_count() // 2  # number of cores on your machine

def parallelize_dataframe(df, func):
    df_split = np.array_split(df, num_partitions)
    pool = Pool(num_cores)
    df = pd.concat(pool.map(func, df_split))
    pool.close()
    pool.join()
    return df
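For what it's worth, the split-and-Pool pattern above does work on its own when the mapped function is picklable (defined at module top level) and touches no TensorFlow state. A minimal self-contained check, with a hypothetical `double_col` standing in for the real per-chunk function (Python 3, toy data):

```python
import multiprocessing

import numpy as np
import pandas as pd

def double_col(df):
    # Trivial stand-in for the real per-chunk function: pure pandas, no TF state.
    df = df.copy()
    df['x2'] = df['x'] * 2
    return df

def parallelize_dataframe(df, func, num_partitions=2, num_cores=2):
    # Split by row positions (robust across pandas versions), then map in a Pool.
    idx_chunks = np.array_split(np.arange(len(df)), num_partitions)
    df_split = [df.iloc[idx] for idx in idx_chunks]
    with multiprocessing.Pool(num_cores) as pool:
        out = pd.concat(pool.map(func, df_split))
    return out

if __name__ == '__main__':
    toy = pd.DataFrame({'x': range(8)})
    result = parallelize_dataframe(toy, double_col)
    print(result['x2'].tolist())  # [0, 2, 4, 6, 8, 10, 12, 14]
```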



def create_str(df):
    df['image_array_str'] = df.apply(read_img_str, axis=1)
    return df

df_small1 = parallelize_dataframe(df_small1, create_str)

It hangs here. Can you suggest a solution? This is currently the biggest obstacle to building my model, since I cannot run it over a large number of images.
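A common cause of exactly this kind of hang is building the TensorFlow graph/session in the parent process and then forking Pool workers that inherit it. A frequently used workaround is to create all TF state inside each worker via a Pool `initializer`. A minimal sketch of that shape, where the TF calls are commented placeholders and `_state`, `process_chunk`, and `run_parallel` are hypothetical names:

```python
import multiprocessing

import pandas as pd

_state = {}  # per-worker storage, filled in by the initializer

def init_worker():
    # In the real code, build the graph and open the session HERE, inside
    # the child process -- never in the parent before the Pool is created:
    #   import tensorflow as tf
    #   _state['sess'] = tf.Session(graph=detection_graph, config=config)
    _state['ready'] = True

def process_chunk(chunk):
    # Stand-in for create_str: each worker uses only its own _state.
    if not _state.get('ready'):
        raise RuntimeError('worker was not initialized')
    chunk = chunk.copy()
    chunk['seen'] = True
    return chunk

def run_parallel(df, num_workers=2):
    chunks = [df.iloc[i::num_workers] for i in range(num_workers)]
    with multiprocessing.Pool(num_workers, initializer=init_worker) as pool:
        return pd.concat(pool.map(process_chunk, chunks)).sort_index()
```

With a fork-based start method, the parent should not have touched TensorFlow at all before the Pool is created, otherwise the inherited internal locks can still deadlock the children.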

0 answers