Dynamic ZeroPadding for stride > 1, and accessing the actual shape of a tensor with dimension None in Keras

Time: 2018-10-12 23:19:30

Tags: python tensorflow keras keras-layer keras-2

I am trying to implement dynamic zero padding so that the second dimension of a tensor stays constant after passing through convolutional layers with stride > 1. The input tensor has shape (batch_size, time_step, 50), and I need the convolutional layers not to change the number of time steps. I tried using 'same' padding, but it does not work when stride > 1, so I created a custom ZeroPadding layer. It works for shapes like (None, 100, 50), (None, 120, 50) and (None, 60, 50), but not for the dynamic shape (None, None, 50), where I get the following error:

Traceback (most recent call last):
  File "keras-dinamic-padding-for-stride.py", line 120, in <module>
    model.add(ZeroPadding1D(dinamic_padding_stride=2))
  File "/home/edresson/anaconda3/lib/python3.6/site-packages/keras/engine/sequential.py", line 181, in add
    output_tensor = layer(self.outputs[0])
  File "/home/edresson/anaconda3/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "keras-dinamic-padding-for-stride.py", line 67, in call
    padding = int(inputs.shape[1] * self.dinamic_padding_stride)
TypeError: __int__ returned non-int (type NoneType)
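
For reference, the length arithmetic behind the problem (a quick sanity check in plain Python; the ceil formula for the output length with padding='same' is my reading of the Keras docs):

import math

L, stride = 400, 2
print(math.ceil(L / stride))       # 200: 'same' padding alone halves the length
padded = L * stride                # 800: pre-padding the input to L * stride...
print(math.ceil(padded / stride))  # 400: ...makes the strided conv return length L

So the layer has to add L * (stride - 1) zeros in total, which is exactly the computation that fails when L is None.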

I added the custom class to the imdb example to make the error easier to reproduce. If you change model.add(Embedding(max_features, embedding_dims, input_length=None)) to model.add(Embedding(max_features, embedding_dims, input_length=400)), the dynamic padding works, but it needs to work for dimensions of type None. Code:

from __future__ import print_function

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D, Lambda
from keras.datasets import imdb


from keras.engine.topology import Layer, InputSpec
from keras.utils import conv_utils
import keras.backend as K

class ZeroPadding1D(Layer):
    """Zero-padding layer for 1D input (e.g. temporal sequence).

    # Arguments
        padding: int, or tuple of int (length 2), or dictionary.
            - If int:
            How many zeros to add at the beginning and end of
            the padding dimension (axis 1).
            - If tuple of int (length 2):
            How many zeros to add at the beginning and at the end of
            the padding dimension (`(left_pad, right_pad)`).
         dinamic_padding_stride: int
             - if set, the `padding` argument is ignored entirely
             - used to preserve the length of the input when it is
               passed through a strided convolutional layer, by
               adding the zero fill dynamically. Note: the
               subsequent convolutional layer should use padding='same'

    # Input shape
        3D tensor with shape `(batch, axis_to_pad, features)`

    # Output shape
        3D tensor with shape `(batch, padded_axis, features)`
    """

    def __init__(self, padding=1, dinamic_padding_stride=None, **kwargs):
        super(ZeroPadding1D, self).__init__(**kwargs)
        self.padding = conv_utils.normalize_tuple(padding, 2, 'padding')
        self.input_spec = InputSpec(ndim=3)
        self.dinamic_padding_stride = dinamic_padding_stride


    def compute_output_shape(self, input_shape):
        if input_shape[1] is not None:
            if self.dinamic_padding_stride is not None :
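                # total pad = L * stride - L, so the padded length L * stride
                # comes back as ceil(L * stride / stride) = L after the
                # following strided conv with padding='same'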
                padding = input_shape[1] * self.dinamic_padding_stride - input_shape[1]
                self.padding = (int(padding / 2), int(padding / 2))
            length = input_shape[1] + self.padding[0] + self.padding[1]
        else:
            length = None
        return (input_shape[0],
                length,
                input_shape[2])

    def call(self, inputs):
        if self.dinamic_padding_stride is not None:
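            # inputs.shape[1] is the *static* shape; it is None when the time
            # dimension is dynamic, so int() raises the TypeError shown above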
            padding = int(inputs.shape[1] * self.dinamic_padding_stride)
            self.padding = (int(padding / 2), int(padding / 2))
        return K.temporal_padding(inputs, padding=self.padding)

    def get_config(self):
        # dinamic_padding_stride must be serialized too, or the layer will
        # not round-trip through get_config()/from_config()
        config = {'padding': self.padding,
                  'dinamic_padding_stride': self.dinamic_padding_stride}
        base_config = super(ZeroPadding1D, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))


# set parameters:
max_features = 5000
maxlen = 400
batch_size = 32
embedding_dims = 50
filters = 250
kernel_size = 3
hidden_dims = 250
epochs = 2

print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)

print('Build model...')
model = Sequential()

# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
                    embedding_dims,
                    input_length=None))
model.add(Dropout(0.2))

# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
                 kernel_size,
                 padding='same',
                 activation='relu',
                 strides=1))
model.add(ZeroPadding1D(dinamic_padding_stride=2))
model.add(Conv1D(filters,
                 kernel_size,
                 padding='same',
                 activation='relu',
                 strides=2))
# we use max pooling:
model.add(GlobalMaxPooling1D())

# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))

# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy']) 
model.summary()
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_test, y_test))

I searched and found that K.shape(inputs) must be used to get the correct shape at runtime instead of None, but I could not make it work with Keras. Can someone help me?
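
For concreteness, this is roughly the direction I attempted (a sketch only, and my assumption about how to combine K.shape with tf.pad, since K.temporal_padding expects plain ints; it may well be wrong, which is why I am asking):

import tensorflow as tf
import keras.backend as K

def call(self, inputs):  # intended replacement for the call() above
    if self.dinamic_padding_stride is not None:
        # K.shape returns a runtime tensor, so it works even when the
        # static inputs.shape[1] is None (dynamic time dimension)
        time_steps = K.shape(inputs)[1]
        total_pad = time_steps * self.dinamic_padding_stride - time_steps
        left = total_pad // 2
        right = total_pad - left
        pattern = tf.stack([[0, 0], tf.stack([left, right]), [0, 0]])
        return tf.pad(inputs, pattern)
    return K.temporal_padding(inputs, padding=self.padding)

(compute_output_shape already returns None for the length in the dynamic case, so it should not need to change.)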

If you have another solution to the dynamic zero-padding problem, it is very welcome.

Thanks in advance for your attention.

0 Answers:

There are no answers yet