How to speed up sliding-window object detection on a test image

Date: 2016-12-05 21:09:09

Tags: python deep-learning caffe conv-neural-network object-detection

Question:

I have trained a convolutional neural network (CNN) to determine/detect whether an object of interest is present in a given image patch.

Now, given a large image, I am trying to locate all occurrences of the object in the image in a sliding-window fashion, by applying my CNN model to the patch surrounding every pixel in the image. However, this is very slow.

My test image is (512 x 512). For my caffe net, the test batch size is 1024 and the patch size is (65 x 65 x 1).

I tried applying my caffe net to a batch of patches (size = test_batch_size) at once instead of a single patch at a time. Even then it is slow.

Below is my current solution, which is very slow. I would appreciate any suggestion other than downsampling my test image to speed things up.

Current solution, which is very slow:

import time

import numpy as np
import matplotlib.pyplot as plt
import skimage.io

def detectObjects(net, input_file, output_file):

    # read input image
    inputImage = plt.imread(input_file)

    # get test_batch_size and patch_size used for cnn net
    test_batch_size = net.blobs['data'].data.shape[0]
    patch_size = net.blobs['data'].data.shape[2]

    # collect all patches    
    w = patch_size // 2

    num_patches = (inputImage.shape[0] - patch_size) * \
                  (inputImage.shape[1] - patch_size)

    patches = np.zeros((patch_size, patch_size, num_patches))
    patch_indices = np.zeros((num_patches, 2), dtype='int64')

    count = 0

    for i in range(w + 1, inputImage.shape[0] - w):
        for j in range(w + 1, inputImage.shape[1] - w):

            # store patch center index
            patch_indices[count, :] = [i, j]

            # store patch
            patches[:, :, count] = \
                inputImage[(i - w):(i + w + 1), (j - w):(j + w + 1)]

            count += 1

    print("Extracted %s patches" % num_patches)

    # Classify patches using cnn and write result to output image
    outputImage = np.zeros_like(inputImage)
    outputImageFlat = np.ravel(outputImage)

    pad_w = test_batch_size - num_patches % test_batch_size
    patches = np.pad(patches, ((0, 0), (0, 0), (0, pad_w)),
                     'constant')
    patch_indices = np.pad(patch_indices, ((0, pad_w), (0, 0)),
                           'constant')

    start_time = time.time()

    for i in range(0, num_patches, test_batch_size):

        # get current batch of patches
        cur_pind = patch_indices[i:i + test_batch_size, :]

        cur_patches = patches[:, :, i:i + test_batch_size]
        cur_patches = np.expand_dims(cur_patches, 0)
        cur_patches = np.rollaxis(cur_patches, 3)

        # apply cnn on current batch of images
        net.blobs['data'].data[...] = cur_patches

        output = net.forward()

        prob_obj = output['prob'][:, 1]

        if i + test_batch_size > num_patches:

            # remove padded part
            num_valid = num_patches - i
            prob_obj = prob_obj[0:num_valid]
            cur_pind = cur_pind[0:num_valid, :]

        # set output
        cur_pind_lin = np.ravel_multi_index((cur_pind[:, 0],
                                             cur_pind[:, 1]),
                                             outputImage.shape)

        outputImageFlat[cur_pind_lin] = prob_obj

    end_time = time.time()
    print('Took %s seconds' % (end_time - start_time))

    # Save output
    skimage.io.imsave(output_file, outputImage * 255.0)
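As an aside, the nested Python loop that copies every (65 x 65) patch one at a time is itself a significant cost. It can be replaced with a zero-copy NumPy view over all patches at once. Below is a minimal sketch (not the asker's code) using np.lib.stride_tricks.as_strided, assuming a 2-D grayscale image; the function name and the tiny 6x6 demo are illustrative only:

```python
import numpy as np

def extract_patches(image, patch_size):
    """Return a (rows, cols, patch_size, patch_size) zero-copy view of
    all patches whose top-left corner fits inside the 2-D image."""
    rows = image.shape[0] - patch_size + 1
    cols = image.shape[1] - patch_size + 1
    sr, sc = image.strides
    # patches[i, j, a, b] aliases image[i + a, j + b]; no data is copied
    return np.lib.stride_tricks.as_strided(
        image,
        shape=(rows, cols, patch_size, patch_size),
        strides=(sr, sc, sr, sc))

# tiny demonstration on a 6x6 image with 3x3 patches
img = np.arange(36, dtype=np.float32).reshape(6, 6)
patches = extract_patches(img, 3)
assert patches.shape == (4, 4, 3, 3)
# the view matches the explicit slice for any patch position
assert np.array_equal(patches[1, 1], img[1:4, 1:4])
```

Since the result is a view, batches can be sliced out of it and reshaped per forward pass without ever materialising all patches in memory.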

I was hoping that with the lines

    net.blobs['data'].data[...] = cur_patches
    output = net.forward()

caffe would classify all the patches in cur_patches in parallel using the GPU. I am not sure why it is still slow.

1 answer:

Answer 0 (score: 1)

I believe what you are looking for is described in the section Casting a Classifier into a Fully Convolutional Network of the "net surgery" tutorial. What this solution basically says is that, instead of conv layers followed by an "InnerProduct" layer for classification, the "InnerProduct" layer can be transformed into an equivalent conv layer, resulting in a fully convolutional network that can process images of any size and output a prediction map whose size depends on the input size. Moving to a fully convolutional architecture will drastically reduce the number of redundant computations you are currently making, and should speed up your process significantly.

Another possible direction for speedup is to approximate a high-dimensional "InnerProduct" layer by the product of two lower-rank matrices, using a truncated SVD trick.
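The truncated-SVD trick can be sketched in NumPy as follows (layer sizes are illustrative): an n x m weight matrix W is replaced by A (n x k) times B (k x m), so one large matrix multiply becomes two thin ones, cutting the multiply-adds per input vector from n*m to k*(n + m). Here W is deliberately constructed to have true rank k, so the truncation is exact; a real layer's weights would incur a small approximation error governed by the discarded singular values:

```python
import numpy as np

rng = np.random.RandomState(0)
n, m, k = 256, 1024, 32                 # illustrative layer sizes and rank
W = rng.rand(n, k).dot(rng.rand(k, m))  # weight matrix of true rank k
x = rng.rand(m)

# rank-k factorization: keep only the top-k singular values
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]                    # n x k
B = Vt[:k, :]                           # k x m

y_full = W.dot(x)                       # original layer: n*m mult-adds
y_low = A.dot(B.dot(x))                 # two thin layers: k*(n+m) mult-adds
assert np.allclose(y_full, y_low)       # exact here because rank(W) == k
```

In caffe terms, this corresponds to replacing one "InnerProduct" layer with two stacked "InnerProduct" layers whose weights are B and A respectively; here k*(n + m) = 40960 multiply-adds versus n*m = 262144 for the original layer.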