Keras error with output dimensions

Date: 2017-09-25 11:52:24

Tags: python deep-learning keras

I have a time series (2 variables) with about 80,000 observations (X), and each observation corresponds to a class (Y). I used a moving-window approach to split the time series into intervals (each of length 30). I then one-hot encoded Y to make it categorical.
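The moving-window split described above can be sketched as follows. This is a minimal NumPy-only sketch; the stride of 1 and the choice to label each window by its last time step are assumptions, since the question does not specify them:

```python
import numpy as np

def sliding_windows(series, labels, window=30):
    # Split a (T, 2) series into overlapping windows of shape (window, 2).
    # Each window inherits the label of its last time step (an assumption).
    X = np.stack([series[i:i + window] for i in range(len(series) - window + 1)])
    y = labels[window - 1:]
    return X, y

series = np.random.rand(100, 2)                 # toy series: 100 steps, 2 variables
labels = np.random.randint(0, 3, size=100)      # toy class labels, 3 classes
X, y = sliding_windows(series, labels)
print(X.shape, y.shape)  # (71, 30, 2) (71,)
```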

I then create batches of size 64 with the following code:

from sklearn.preprocessing import OneHotEncoder

def one_hot_encoder(y):

    onehot_encoder = OneHotEncoder(sparse=False)
    y = y.reshape(len(y), 1)
    return onehot_encoder.fit_transform(y)

def data_generator(x, y, shuffle=False, batch_size=64):

    # create order
    while True:
        index = np.arange(len(y))
        if shuffle:
            np.random.shuffle(index)
            x = x[index]
            y = y[index]

        # generate batches
        imax = int(len(index)/batch_size)
        for i in range(imax):
            yield x[i*batch_size: (i+1)*batch_size], y[i*batch_size: (i+1)*batch_size]


def get_batches(x, y):

    x = np.array(x)
    y = np.array(y)

    return data_generator(x, one_hot_encoder(y))
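With synthetic data, the generator's batch shapes can be checked. This sketch reproduces the generator so it is self-contained and uses np.eye for the one-hot step (instead of sklearn's OneHotEncoder) so it stays NumPy-only:

```python
import numpy as np

def data_generator(x, y, shuffle=False, batch_size=64):
    # Same logic as above, repeated here so the sketch runs on its own.
    while True:
        index = np.arange(len(y))
        if shuffle:
            np.random.shuffle(index)
            x = x[index]
            y = y[index]
        imax = int(len(index) / batch_size)
        for i in range(imax):
            yield x[i*batch_size:(i+1)*batch_size], y[i*batch_size:(i+1)*batch_size]

x = np.random.rand(200, 30, 2)                    # 200 windows of length 30, 2 variables
y = np.eye(3)[np.random.randint(0, 3, size=200)]  # one-hot targets, 3 classes
batches = data_generator(x, y, shuffle=True)
xb, yb = next(batches)
print(xb.shape, yb.shape)  # (64, 30, 2) (64, 3)
```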

For each batch, print(next(batches)[0].shape) gives (64, 30, 2) — 30 observations, 2 variables — and print(next(batches)[1].shape) gives (64, 3) — one one-hot encoded class per observation.

Then I create the model with the following code:

def create_model():
    model = Sequential()
    model.add(BatchNormalization(axis=1, input_shape=(30, 2)))
    model.add(Conv1D(16, 5, activation='relu'))
    model.add(BatchNormalization(axis=1))
    model.add(MaxPooling1D(2))
    model.add(Conv1D(16, 5, activation='relu'))
    model.add(BatchNormalization(axis=1))
    model.add(MaxPooling1D(3))
    model.add(Conv1D(32, 3, activation='relu'))
    model.add(Dense(100, activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(3, activation='softmax'))

    return model

model = create_model()
model.compile(RMSprop(lr=0.01), loss='categorical_crossentropy', metrics=['accuracy'])

The model summary is as follows:

[model summary image]

But when I train the model with fit_generator, I get the following error message. I'm really confused — are my output dimensions incorrect, or is there a mistake somewhere in my code?

Thanks.

model.fit_generator(batches, steps_per_epoch=30, nb_epoch=5, validation_data=None, validation_steps=None)
ValueError                                Traceback (most recent call last)
<ipython-input-29-c7ba2e8eddfd> in <module>()
----> 1 model.fit_generator(batches, steps_per_epoch=10, nb_epoch=5, validation_data=None, validation_steps=None)

D:\Programs\Anaconda3\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
     85                 warnings.warn('Update your `' + object_name +
     86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87             return func(*args, **kwargs)
     88         wrapper._original_function = func
     89         return wrapper

D:\Programs\Anaconda3\lib\site-packages\keras\models.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, initial_epoch)
   1119                                         workers=workers,
   1120                                         use_multiprocessing=use_multiprocessing,
-> 1121                                         initial_epoch=initial_epoch)
   1122 
   1123     @interfaces.legacy_generator_methods_support

D:\Programs\Anaconda3\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
     85                 warnings.warn('Update your `' + object_name +
     86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87             return func(*args, **kwargs)
     88         wrapper._original_function = func
     89         return wrapper

D:\Programs\Anaconda3\lib\site-packages\keras\engine\training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   2040                     outs = self.train_on_batch(x, y,
   2041                                                sample_weight=sample_weight,
-> 2042                                                class_weight=class_weight)
   2043 
   2044                     if not isinstance(outs, list):

D:\Programs\Anaconda3\lib\site-packages\keras\engine\training.py in train_on_batch(self, x, y, sample_weight, class_weight)
   1754             sample_weight=sample_weight,
   1755             class_weight=class_weight,
-> 1756             check_batch_axis=True)
   1757         if self.uses_learning_phase and not isinstance(K.learning_phase(), int):
   1758             ins = x + y + sample_weights + [1.]

D:\Programs\Anaconda3\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
   1380                                     output_shapes,
   1381                                     check_batch_axis=False,
-> 1382                                     exception_prefix='target')
   1383         sample_weights = _standardize_sample_weights(sample_weight,
   1384                                                      self._feed_output_names)

D:\Programs\Anaconda3\lib\site-packages\keras\engine\training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    130                                  ' to have ' + str(len(shapes[i])) +
    131                                  ' dimensions, but got array with shape ' +
--> 132                                  str(array.shape))
    133             for j, (dim, ref_dim) in enumerate(zip(array.shape, shapes[i])):
    134                 if not j and not check_batch_axis:

ValueError: Error when checking target: expected dense_2 to have 3 dimensions, but got array with shape (64, 3)

1 Answer:

Answer 0 (score: 1)

When you use input_shape=(30, 2), you are defining an input with 3 dimensions: (batchSize, 30, 2).

That's fine, but the number of dimensions is preserved all the way until your model reaches the Dense layers.

Dense layers do not reduce the number of dimensions; they will output (batchSize, 30, denseUnits).

One solution is to use a Flatten layer, reducing to (batchSize, 30*someValue). The Dense layers will then output (batchSize, units), giving you a 2D output that matches your 2D classes.
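Tracing the layer output lengths by hand shows why the time axis survives all the way to the Dense layers. These numbers are derived from the model code above (Conv1D and MaxPooling1D use Keras's default 'valid' padding), not from the missing summary image:

```python
# Conv1D with kernel k ('valid' padding): L -> L - k + 1
# MaxPooling1D with pool size p:          L -> L // p
L = 30
L = L - 5 + 1   # Conv1D(16, 5)   -> 26
L = L // 2      # MaxPooling1D(2) -> 13
L = L - 5 + 1   # Conv1D(16, 5)   -> 9
L = L // 3      # MaxPooling1D(3) -> 3
L = L - 3 + 1   # Conv1D(32, 3)   -> 1
print(L)  # 1: the final Dense sees (batch, 1, 32) and outputs (batch, 1, 3), a 3D tensor
# A Flatten() before the Dense layers turns (batch, 1, 32) into (batch, 32),
# so the final Dense outputs (batch, 3), matching the (64, 3) targets.
```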

Before the Dense layers, add:

model.add(Flatten())