Keras ValueError: Input 0 is incompatible with layer conv_lst_m2d_16: expected ndim=5, found ndim=4

Posted: 2018-07-26 11:33:34

Tags: python keras deep-learning conv-neural-network lstm

I am trying to classify sequences of images into 2 classes, where each sequence has 5 frames. I used ConvLSTM2D as the first layer and I am getting the error above. The input_shape argument is input_shape = (timesteps, rows, columns, channels).

The data I generate has the following format:

self.data = np.random.random((self.number_of_samples, 
                                  self.timesteps,
                                  self.rows,
                                  self.columns,
                                  self.channels)) 

and the first layer is implemented as follows:

model = Sequential()

# time distributed is used - working frame by frame
model.add(ConvLSTM2D(filters=10,
                     input_shape=input_shape,
                     kernel_size=(3, 3),
                     activation='relu',
                     data_format="channels_last"))

Can someone help me?

Edit: here is my toy code:

import numpy as np
from keras.layers import Dense, Dropout, LSTM
from keras.layers import Conv2D, Flatten, ConvLSTM2D
from keras.models import Sequential
from keras.layers.wrappers import TimeDistributed
import time


class Classifier():
    """Classifier model to classify image sequences"""

    def __init__(self, number_of_samples, timesteps, rows, columns, channels, epochs, batch_size):
        self.number_of_samples = number_of_samples
        self.rows = rows
        self.columns = columns
        self.timesteps = timesteps
        self.channels = channels
        self.model = None
        self.data = []
        self.labels = []
        self.epochs = epochs
        self.batch_size = batch_size
        self.X_train = []
        self.X_test = []
        self.y_train = []
        self.y_test = []

    def build_model(self, input_shape, output_label_size):
        """Builds the classification model

        Keyword arguments:
            input_shape -- shape of the image array
            output_label_size -- 1
        """
        # initialize a sequential model
        model = Sequential()

        # time distributed is used - working frame by frame
        model.add(ConvLSTM2D(filters=10,
                             input_shape=input_shape,
                             kernel_size=(3, 3),
                             activation='relu',
                             data_format="channels_last"))
        print("output shape 1:{}".format(model.output_shape))
        print("correct till here")

        model.add(Dropout(0.2))
        model.add(ConvLSTM2D(filters=5,
                             kernel_size=(3, 3),
                             activation='relu'))
        print("correct till here")

        model.add(Dropout(0.2))
        model.add(Flatten())
        # print("output shape 2:{}".format(model.output_shape))
        model.add(LSTM(10))
        print("correct till here")
        # print("output shape 3:{}".format(model.output_shape))
        model.add(Dropout(0.2))
        model.add(LSTM(5))
        model.add(Dropout(0.2))
        # print("output shape 4:{}".format(model.output_shape))
        model.add(Dense(output_label_size,
                        kernel_initializer='uniform',
                        bias_initializer='zeros',
                        activation='sigmoid'))
        model.compile(optimizer='adam', loss='binary_crossentropy')
        print("correct till here")
        # model.summary()

        self.model = model

        print("[INFO] Classifier model generated")

    def split_data(self, data, labels):
        """Returns training and test set after splitting

        Keyword arguments:
            data -- image data
            labels -- 0 or 1
        """

        print("[INFO] split the data into training and testing sets")
        train_test_split = 0.9

        # split the data into train and test sets
        split_index = int(train_test_split * self.number_of_samples)
        # shuffled_indices = np.random.permutation(self.number_of_samples)
        indices = np.arange(self.number_of_samples)
        train_indices = indices[0:split_index]
        test_indices = indices[split_index:]

        X_train = data[train_indices, :, :]
        X_test = data[test_indices, :, :]
        y_train = labels[train_indices]
        y_test = labels[test_indices]

        print('Input shape: ', input_shape)
        print('X_train shape: ', X_train.shape)
        print('X_train[0] shape: ', X_train[0].shape)
        print('X_train[0][0] shape: ', X_train[0][0].shape)
        # print('y_train shape: ', y_train.shape)
        # print('X_test shape: ', X_test.shape)
        # print('y_test shape: ', y_test.shape)

        return X_train, X_test, y_train, y_test

    def load_training_data(self):
        """Load the training data for building the classification model."""

        self.data = np.random.random((self.number_of_samples,
                                      self.timesteps,
                                      self.rows,
                                      self.columns,
                                      self.channels))
        print("shape 1", type(self.data))
        print("shape 2", type(self.data[0]))
        print("shape 3", type(self.data[0][0]))

        # self.labels = np.zeros(self.number_of_samples)
        self.labels = np.ones(self.number_of_samples)

        X_train, X_test, y_train, y_test = self.split_data(self.data, self.labels)

        self.X_train = X_train
        self.X_test = X_test
        self.y_train = y_train
        self.y_test = y_test

        print("loading the training data done")

    def train_model(self):
        """Train the model

        Keyword arguments:
            epochs -- number of training iterations
            batch_size -- number of samples per batch
        """

        self.model.fit(x=self.X_train,
                       y=self.y_train,
                       batch_size=self.batch_size,
                       epochs=self.epochs,
                       verbose=1,
                       validation_data=(self.X_test, self.y_test))

        score = self.model.evaluate(self.X_test, self.y_test,
                                    verbose=1, batch_size=self.batch_size)

        prediction = self.model.predict(self.X_test,
                                        batch_size=self.batch_size,
                                        verbose=1)
        print("Loss:{}".format(score))
        print("Prediction:{}".format(prediction))


if __name__ == "__main__":
    start = time.time()
    number_of_samples = 12
    # number_of_test_samples = 2000
    timesteps = 5
    rows = 14
    columns = 14
    channels = 3
    output_label_size = 1
    epochs = 1
    batch_size = 1
    input_shape = (timesteps, rows, columns, channels)
    # input_shape = (batch_size, timesteps, rows, columns, channels)

    classifier_model = Classifier(number_of_samples,
                                  timesteps,
                                  rows,
                                  columns,
                                  channels,
                                  epochs,
                                  batch_size)

    classifier_model.load_training_data()
    classifier_model.build_model(input_shape, output_label_size)
    classifier_model.train_model()
    end = time.time()

    print("total time:{}".format(end - start))

1 Answer

Answer (score: 3)

There are several ways to specify the input shape. From the Keras documentation:

Pass an input_shape argument to the first layer. This is a shape tuple (a tuple of integers or None entries, where None indicates that any positive integer may be expected). In input_shape, the batch dimension is not included.

So the correct input shape is:

input_shape = (timesteps, rows, columns, channels)
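
For illustration, here is a minimal sketch (using the toy dimensions from the code above) of how the generated data array relates to input_shape: the array carries the sample/batch dimension, while input_shape deliberately leaves it out.

import numpy as np

number_of_samples, timesteps, rows, columns, channels = 12, 5, 14, 14, 3

# the data array includes the sample (batch) dimension ...
data = np.random.random((number_of_samples, timesteps, rows, columns, channels))

# ... but input_shape passed to the first layer omits it
input_shape = (timesteps, rows, columns, channels)
print(data.shape[1:] == input_shape)   # True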

Once this is fixed, you will run into the next error (this one is not caused by the input_shape argument):

ValueError: Input 0 is incompatible with layer conv_lst_m2d_2: expected ndim=5, found ndim=4

This error occurs when you try to add the second ConvLSTM2D layer. It happens because the output of the first ConvLSTM2D layer is a 4D tensor of shape (samples, output_row, output_col, filters). You probably want to set return_sequences=True, in which case the output is a 5D tensor of shape (samples, time, output_row, output_col, filters).
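
As a rough sketch of that fix (just the first two layers, with the filter sizes and toy dimensions from the question, not a complete model):

from keras.models import Sequential
from keras.layers import ConvLSTM2D

model = Sequential()
# return_sequences=True keeps the time axis, so the output is a 5D tensor
# of shape (samples, time, output_row, output_col, filters)
model.add(ConvLSTM2D(filters=10,
                     kernel_size=(3, 3),
                     activation='relu',
                     return_sequences=True,
                     input_shape=(5, 14, 14, 3),
                     data_format="channels_last"))
# the second ConvLSTM2D now receives the 5D input it expects
model.add(ConvLSTM2D(filters=5,
                     kernel_size=(3, 3),
                     activation='relu'))
print(model.output_shape)   # (None, 10, 10, 5) with the default 'valid' padding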

After fixing that error, you will hit a new one at these lines:

model.add(Flatten())
model.add(LSTM(10))

It makes no sense to have an LSTM layer right after a Flatten layer. This can never work, because LSTM expects a 3D input tensor of shape (samples, time, input_dim), whereas Flatten produces a 2D output.
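
One possible way to restructure the tail of the model (an illustrative sketch under the same toy dimensions, not the only option and not part of the original answer) is to keep the time axis through the convolutional part and flatten each frame with TimeDistributed, which the toy code already imports, so the LSTM receives a 3D tensor of shape (samples, time, features):

from keras.models import Sequential
from keras.layers import ConvLSTM2D, LSTM, Dense, Flatten
from keras.layers.wrappers import TimeDistributed

model = Sequential()
model.add(ConvLSTM2D(filters=10, kernel_size=(3, 3), activation='relu',
                     return_sequences=True, input_shape=(5, 14, 14, 3)))
model.add(ConvLSTM2D(filters=5, kernel_size=(3, 3), activation='relu',
                     return_sequences=True))
# flatten each frame separately: (samples, 5, 10, 10, 5) -> (samples, 5, 500)
model.add(TimeDistributed(Flatten()))
model.add(LSTM(10))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()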

To sum up, I strongly suggest that you carefully read the Keras documentation, in particular the docs for the LSTM and ConvLSTM2D layers. It is also important to understand how these layers work in order to use them effectively.