TensorFlow getting started - splitting an image into sub-images

Date: 2016-07-06 23:53:57

Tags: python dataset tensorflow image-segmentation

This is my first time working with convolutional neural networks and TensorFlow.

I am trying to implement a convolutional neural network that extracts blood vessels from digital retinal images. I am using the publicly available DRIVE database (the images are in .tif format).

Since my images are very large, my idea is to split them into 28x28x1 sub-images (the "1" being the green channel, which is the only one I need). To build the training set, I iteratively crop random 28x28 patches from each image and train the network on that set.

Now I want to test the trained network on one of the full-size images in the database (that is, I want to apply the network to a whole image). Since the network was trained on 28x28 sub-images, the idea is to split the eye image into "n" sub-images, pass them through the network, recombine them and display the result, as shown in Fig. 1:

[Fig. 1: the image is split into sub-images, each is passed through the network, and the outputs are recombined]

I tried to use the functions tf.extract_image_patches and tf.train.batch, but I am wondering what the correct way to do this is.

Below is a snippet of my code. The function I am having trouble with is split_image(image):

import numpy
import os
import random

from PIL import Image
import tensorflow as tf

BATCH_WIDTH = 28
BATCH_HEIGHT = 28

NUM_TRIALS = 10

class Drive:
    def __init__(self,train):
        self.train = train

class Dataset:
    def __init__(self, inputs, labels):
        self.inputs = inputs
        self.labels = labels
        self.current_batch = 0

    def next_batch(self):
        batch = self.inputs[self.current_batch], self.labels[self.current_batch]
        self.current_batch = (self.current_batch + 1) % len(self.inputs)
        return batch


#returns True if more than half of the pixels in the patch are black
def mostlyBlack(image):
    pixels = image.getdata()
    black_thresh = 50
    nblack = 0
    for pixel in pixels:
        if pixel < black_thresh:
            nblack += 1

    return nblack / float(len(pixels)) > 0.5

#crop the image starting from a random point
def cropImage(image, label):
    width  = image.size[0]
    height = image.size[1]
    x = random.randrange(0, width - BATCH_WIDTH)
    y = random.randrange(0, height - BATCH_HEIGHT)
    image = image.crop((x, y, x + BATCH_WIDTH, y + BATCH_HEIGHT)).split()[1]
    label = label.crop((x, y, x + BATCH_WIDTH, y + BATCH_HEIGHT)).split()[0]
    return image, label

def split_image(image):

    ksizes_ = [1, BATCH_WIDTH, BATCH_HEIGHT, 1]
    strides_ = [1, BATCH_WIDTH, BATCH_HEIGHT, 1]

    input = numpy.array(image.split()[1])
    #input = tf.reshape((input), [image.size[0], image.size[1]])

    #input = tf.train.batch([input],batch_size=1)
    split = tf.extract_image_patches(input, padding='VALID', ksizes=ksizes_, strides=strides_, rates=[1,28,28,1], name="asdk")

#creates NUM_TRIALS images from a dataset
def create_dataset(images_path, label_path):
    files = os.listdir(images_path)
    label_files = os.listdir(label_path)

    images = []
    labels = []
    t = 0
    while t < NUM_TRIALS:
        index = random.randrange(0, len(files))
        if files[index].endswith(".tif"):
            image_filename = images_path + files[index]
            label_filename = label_path  + label_files[index]
            image = Image.open(image_filename)
            label = Image.open(label_filename)
            image, label = cropImage(image, label)
            if not mostlyBlack(image):
                #images.append(tf.convert_to_tensor(numpy.array(image)))
                #labels.append(tf.convert_to_tensor(numpy.array(label)))
                images.append(numpy.array(image))
                labels.append(numpy.array(label))

                t+=1

    image = Image.open(images_path + files[1])
    split_image(image)

    train = Dataset(images, labels)
    return Drive(train)
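
For reference, this is the direction I was trying to go with tf.extract_image_patches inside split_image. As far as I understand, it expects a 4-D [batch, height, width, channels] input and rates=[1, 1, 1, 1] when no dilation is wanted, so I think the call would have to look roughly like the sketch below, but I am not sure this is the right approach:

def split_image_with_patches(image):
    # tf.extract_image_patches expects a 4-D [batch, height, width, channels] tensor,
    # so add batch and channel dimensions to the green channel first.
    green = numpy.array(image.split()[1], dtype=numpy.float32)
    input_ = tf.convert_to_tensor(green[numpy.newaxis, :, :, numpy.newaxis])

    patches = tf.extract_image_patches(
        input_,
        ksizes=[1, BATCH_HEIGHT, BATCH_WIDTH, 1],
        strides=[1, BATCH_HEIGHT, BATCH_WIDTH, 1],
        rates=[1, 1, 1, 1],   # no dilation
        padding='VALID')
    # patches has shape [1, rows, cols, 28*28]; reshape to one 28x28x1 tile per patch.
    return tf.reshape(patches, [-1, BATCH_HEIGHT, BATCH_WIDTH, 1])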

2 Answers:

Answer 0 (score: 1):

You can use a combination of reshape and transpose calls to cut the image into tiles:

def split_image(image3, tile_size):
    image_shape = tf.shape(image3)
    tile_rows = tf.reshape(image3, [image_shape[0], -1, tile_size[1], image_shape[2]])
    serial_tiles = tf.transpose(tile_rows, [1, 0, 2, 3])
    return tf.reshape(serial_tiles, [-1, tile_size[1], tile_size[0], image_shape[2]])

where image3 is a 3-dimensional tensor (e.g. an image) and tile_size is a pair of values [H, W] specifying the tile size. The output is a tensor of shape [B, H, W, C]. In your case the call would be:

tiles = split_image(image, [28, 28])

which results in a tensor of shape [B, 28, 28, 1]. You can also reassemble the original image from the tiles by performing these operations in reverse:

def unsplit_image(tiles4, image_shape):
    tile_width = tf.shape(tiles4)[1]
    serialized_tiles = tf.reshape(tiles4, [-1, image_shape[0], tile_width, image_shape[2]])
    rowwise_tiles = tf.transpose(serialized_tiles, [1, 0, 2, 3])
    return tf.reshape(rowwise_tiles, [image_shape[0], image_shape[1], image_shape[2]])

where tiles4 is a 4D tensor of shape [B, H, W, C] and image_shape is the shape of the original image. In your case the call could be:

image = unsplit_image(tiles, tf.shape(image))

Note that this only works if the image size is divisible by the tile size. If that is not the case, you need to pad the image to the nearest multiple of the tile size:

def pad_image_to_tile_multiple(image3, tile_size, padding="CONSTANT"):
    imagesize = tf.shape(image3)[0:2]
    padding_ = tf.to_int32(tf.ceil(imagesize / tile_size)) * tile_size - imagesize
    return tf.pad(image3, [[0, padding_[0]], [0, padding_[1]], [0, 0]], padding)

which you would call like this:

image = pad_image_to_tile_multiple(image, [28,28])

And then remove the padding with a slice after reassembling the image from the tiles:

image = image[0:original_size[0], 0:original_size[1], :]
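
Putting it all together, a minimal end-to-end sketch could look like the following (this assumes TF 1.x graph mode, a hypothetical DRIVE file name, and uses the identity in place of your trained network):

import numpy
import tensorflow as tf
from PIL import Image

# Load the green channel as a [H, W, 1] array (the file name is just an example).
green = numpy.array(Image.open("21_training.tif").split()[1], dtype=numpy.float32)
image = tf.convert_to_tensor(green[:, :, numpy.newaxis])

original_size = tf.shape(image)
padded = pad_image_to_tile_multiple(image, [28, 28])   # pad to a multiple of 28
tiles = split_image(padded, [28, 28])                  # shape [B, 28, 28, 1]

# Replace this identity with a call to your trained network on the tiles.
processed = tiles

reassembled = unsplit_image(processed, tf.shape(padded))
result = reassembled[0:original_size[0], 0:original_size[1], :]

with tf.Session() as sess:
    output = sess.run(result)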

Answer 1 (score: 0):

A simple solution to crop a batch of images of shape (-1, X, Y, 3) into an N-by-N grid of crops per image:

crops = tf.reshape(tensor_images, (-1, N, tensor_images.shape[1]//N, N, tensor_images.shape[2]//N, tensor_images.shape[3]))
crops = tf.transpose(crops, [0, 1, 3, 2, 4, 5])

You can check the solution like this:

import matplotlib.pyplot as plt
import tensorflow as tf

def show_images(segs, x, y):
  fig, axs = plt.subplots(x, y, figsize=(x*2, y*2))
  for i in range(x):
    for j in range(y):
      axs[i, j].imshow(segs[i][j], cmap=plt.cm.binary, vmin=0, vmax=1)
  plt.show()
  plt.close()

# image_batch is a numpy array of shape (batch, X, Y, channels);
# .numpy() below requires TF 2.x eager execution.
tensor_images = tf.convert_to_tensor(image_batch, dtype=tf.float32)
crops = tf.reshape(tensor_images, (-1, 8, tensor_images.shape[1]//8, 8,
                                   tensor_images.shape[2]//8, tensor_images.shape[3]))
crops = tf.transpose(crops, [0, 1, 3, 2, 4, 5])
show_images(crops.numpy()[0], 8, 8)
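
If you also need to go back from the crops to the full images, a small sketch (not part of the original answer) that inverts the reshape/transpose above could be:

def unsplit_crops(crops):
    # crops has shape (B, N, N, X//N, Y//N, C): batch, tile row, tile column,
    # rows within a tile, columns within a tile, channels.
    merged = tf.transpose(crops, [0, 1, 3, 2, 4, 5])   # (B, N, X//N, N, Y//N, C)
    s = tf.shape(merged)
    return tf.reshape(merged, [s[0], s[1] * s[2], s[3] * s[4], s[5]])

images_back = unsplit_crops(crops)   # shape (B, X, Y, C), matches tensor_images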