Add a resizing layer to a Keras Sequential model

Date: 2017-01-27 22:17:38

Tags: keras keras-layer

How do I add a resizing layer to

model = Sequential()

using

model.add(...)

to resize images from shape (160, 320, 3) to (224, 224, 3)?

6 answers:

Answer 0 (score: 9)

I think you should consider using TensorFlow's resize_images function.

https://www.tensorflow.org/api_docs/python/tf/image/resize_images

It looks like Keras does not include this, perhaps because the functionality does not exist in Theano. I wrote a custom Keras layer that does the same thing. It's a quick hack, so it may not work well in your case.

import keras
import keras.backend as K
from keras.utils import conv_utils
from keras.engine import InputSpec
from keras.engine import Layer
from tensorflow import image as tfi

class ResizeImages(Layer):
    """Resize Images to a specified size

    # Arguments
        output_size: Size of output layer width and height
        data_format: A string,
            one of `channels_last` (default) or `channels_first`.
            The ordering of the dimensions in the inputs.
            `channels_last` corresponds to inputs with shape
            `(batch, height, width, channels)` while `channels_first`
            corresponds to inputs with shape
            `(batch, channels, height, width)`.
            It defaults to the `image_data_format` value found in your
            Keras config file at `~/.keras/keras.json`.
            If you never set it, then it will be "channels_last".

    # Input shape
        - If `data_format='channels_last'`:
            4D tensor with shape:
            `(batch_size, rows, cols, channels)`
        - If `data_format='channels_first'`:
            4D tensor with shape:
            `(batch_size, channels, rows, cols)`

    # Output shape
        - If `data_format='channels_last'`:
            4D tensor with shape:
            `(batch_size, pooled_rows, pooled_cols, channels)`
        - If `data_format='channels_first'`:
            4D tensor with shape:
            `(batch_size, channels, pooled_rows, pooled_cols)`
    """
    def __init__(self, output_dim=(1, 1), data_format=None, **kwargs):
        super(ResizeImages, self).__init__(**kwargs)
        self.output_dim = conv_utils.normalize_tuple(output_dim, 2, 'output_dim')
        self.data_format = conv_utils.normalize_data_format(data_format)
        self.input_spec = InputSpec(ndim=4)

    def build(self, input_shape):
        self.input_spec = [InputSpec(shape=input_shape)]

    def compute_output_shape(self, input_shape):
        if self.data_format == 'channels_first':
            return (input_shape[0], input_shape[1], self.output_dim[0], self.output_dim[1])
        elif self.data_format == 'channels_last':
            return (input_shape[0], self.output_dim[0], self.output_dim[1], input_shape[3])

    def _resize_fun(self, inputs, data_format):
        try:
            assert keras.backend.backend() == 'tensorflow'
            assert self.data_format == 'channels_last'
        except AssertionError:
            print("Only tensorflow backend is supported for the resize layer and accordingly 'channels_last' ordering")
        output = tfi.resize_images(inputs, self.output_dim)
        return output

    def call(self, inputs):
        output = self._resize_fun(inputs=inputs, data_format=self.data_format)
        return output

    def get_config(self):
        config = {'output_dim': self.output_dim,
                  'data_format': self.data_format}
        base_config = super(ResizeImages, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

Answer 1 (score: 2)

Normally you would use the Reshape layer:

model.add(Reshape((224, 224, 3), input_shape=(160, 320, 3)))

But since your target dimensions do not allow all the data from the input dimensions to be preserved (224*224 != 160*320), this does not work. You can only use Reshape if the number of elements stays the same.
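The mismatch is easy to verify with the per-image element counts (batch dimension ignored):

```python
src = 160 * 320 * 3   # 153600 values per input image
dst = 224 * 224 * 3   # 150528 values per target image
print(src == dst)     # False -- Reshape alone cannot do this
```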

If you are fine with losing some of the data in the image, you can specify your own lossy reshape:

model.add(Reshape((-1, 3), input_shape=(160, 320, 3)))
model.add(Lambda(lambda x: x[:, :50176]))  # throw away some rows, so that #data = 224^2
model.add(Reshape((224, 224, 3)))

That said, these transformations are usually done before feeding the data into the model, because doing them in every training step is essentially wasted computation time.
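A minimal sketch of that offline approach, using a hand-rolled nearest-neighbour resize in plain NumPy (the helper name and the dummy data are illustrative; in practice you would use tf.image.resize_images or PIL for better interpolation):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize in plain NumPy -- a stand-in for
    tf.image.resize_images / PIL when preprocessing a dataset offline."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source col for each output col
    return img[rows[:, None], cols]

# resize the whole (dummy) dataset once, before training starts
images = np.random.rand(8, 160, 320, 3).astype(np.float32)
resized = np.stack([resize_nearest(im, 224, 224) for im in images])
print(resized.shape)   # (8, 224, 224, 3)
```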

Answer 2 (score: 2)

The accepted answer uses the Reshape layer, which works like NumPy's reshape and can be used to reshape a 4x4 matrix into a 2x8 matrix, but doing so destroys the positional information of the image:

0 0 0 0
1 1 1 1    ->    0 0 0 0 1 1 1 1
2 2 2 2          2 2 2 2 3 3 3 3
3 3 3 3
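The diagram above can be reproduced in a few lines of NumPy:

```python
import numpy as np

# the 4x4 matrix from the diagram: row i is filled with the value i
m = np.repeat(np.arange(4), 4).reshape(4, 4)

# Reshape just re-chunks the flat buffer, so original rows 0 and 1
# end up merged into one output row -- locality is lost.
r = m.reshape(2, 8)
print(r[0])   # [0 0 0 0 1 1 1 1]
print(r[1])   # [2 2 2 2 3 3 3 3]
```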

Instead, image data should be scaled/"resized" using TensorFlow's image_resize. But beware of the correct usage and the bugs! As shown in the related question, it can be used in a Lambda layer:

model.add( keras.layers.Lambda( 
    lambda image: tf.image.resize_images( 
        image, 
        (224, 224), 
        method = tf.image.ResizeMethod.BICUBIC,
        align_corners = True, # possibly important
        preserve_aspect_ratio = True
    )
))

In your case, since you have 160x320 images, you also have to decide whether to preserve the aspect ratio or not. If you want to use a pre-trained network, you should use the same kind of resizing that the network was trained with.
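To make that trade-off concrete for these dimensions, here is a back-of-the-envelope calculation (not code from the answer) of what preserve_aspect_ratio implies for a 160x320 image and a 224x224 target:

```python
# scale by the smaller ratio so the whole image fits; the remainder
# of the 224x224 canvas must then be padding (or be cropped away)
src_h, src_w, dst = 160, 320, 224
scale = min(dst / src_h, dst / src_w)                      # min(1.4, 0.7) = 0.7
new_h, new_w = round(src_h * scale), round(src_w * scale)  # 112, 224
print(new_h, new_w, dst - new_h)                           # 112 rows left to pad
```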

Answer 3 (score: 0)

A modification of @KeithWM's answer, adding output_scale, e.g. output_scale=2 means the output is twice the input shape :)

class ResizeImages(Layer):
    """Resize Images to a specified size
    https://stackoverflow.com/questions/41903928/add-a-resizing-layer-to-a-keras-sequential-model

    # Arguments
        output_dim: Size of output layer width and height
        output_scale: scale compared with input
        data_format: A string,
            one of `channels_last` (default) or `channels_first`.
            The ordering of the dimensions in the inputs.
            `channels_last` corresponds to inputs with shape
            `(batch, height, width, channels)` while `channels_first`
            corresponds to inputs with shape
            `(batch, channels, height, width)`.
            It defaults to the `image_data_format` value found in your
            Keras config file at `~/.keras/keras.json`.
            If you never set it, then it will be "channels_last".

    # Input shape
        - If `data_format='channels_last'`:
            4D tensor with shape:
            `(batch_size, rows, cols, channels)`
        - If `data_format='channels_first'`:
            4D tensor with shape:
            `(batch_size, channels, rows, cols)`

    # Output shape
        - If `data_format='channels_last'`:
            4D tensor with shape:
            `(batch_size, pooled_rows, pooled_cols, channels)`
        - If `data_format='channels_first'`:
            4D tensor with shape:
            `(batch_size, channels, pooled_rows, pooled_cols)`
    """

    def __init__(self, output_dim=(1, 1), output_scale=None, data_format=None, **kwargs):
        super(ResizeImages, self).__init__(**kwargs)
        self.naive_output_dim = conv_utils.normalize_tuple(output_dim,
                                                           2, 'output_dim')
        self.naive_output_scale = output_scale
        # normalize_data_format lives in keras.backend in newer Keras
        # versions, no longer in conv_utils
        self.data_format = normalize_data_format(data_format)
        self.input_spec = InputSpec(ndim=4)

    def build(self, input_shape):
        self.input_spec = [InputSpec(shape=input_shape)]
        if self.naive_output_scale is not None:
            if self.data_format == 'channels_first':
                self.output_dim = (self.naive_output_scale * input_shape[2],
                                   self.naive_output_scale * input_shape[3])
            elif self.data_format == 'channels_last':
                self.output_dim = (self.naive_output_scale * input_shape[1],
                                   self.naive_output_scale * input_shape[2])
        else:
            self.output_dim = self.naive_output_dim

    def compute_output_shape(self, input_shape):
        if self.data_format == 'channels_first':
            return (input_shape[0], input_shape[1], self.output_dim[0], self.output_dim[1])
        elif self.data_format == 'channels_last':
            return (input_shape[0], self.output_dim[0], self.output_dim[1], input_shape[3])

    def _resize_fun(self, inputs, data_format):
        try:
            assert keras.backend.backend() == 'tensorflow'
            assert self.data_format == 'channels_last'
        except AssertionError:
            print("Only tensorflow backend is supported for the resize layer and accordingly 'channels_last' ordering")
        output = tf.image.resize_images(inputs, self.output_dim)
        return output

    def call(self, inputs):
        output = self._resize_fun(inputs=inputs, data_format=self.data_format)
        return output

    def get_config(self):
        config = {'output_dim': self.output_dim,
                  'data_format': self.data_format}
        base_config = super(ResizeImages, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

Answer 4 (score: 0)

I think I should post an updated answer, since the accepted answer is wrong and there have been some major updates in recent Keras releases.

To add a resizing layer, according to the documentation:

tf.keras.layers.experimental.preprocessing.Resizing(height, width, interpolation="bilinear", crop_to_aspect_ratio=False, **kwargs)

For you, it should be:

from tensorflow.keras.layers.experimental.preprocessing import Resizing

model = Sequential()
model.add(Resizing(224,224))

Answer 5 (score: -1)

To resize a given input image to a target size (224x224x3 in this case):

Use a Lambda layer in classic Keras:

model.add(Lambda(lambda image: tf.image.resize_images(image, (224, 224))))

[Ref: https://www.tensorflow.org/api_docs/python/tf/keras/backend/resize_images]