Keras CNN error: expected Sequence to have 3 dimensions, but got array with shape (500, 400)

Date: 2018-11-25 06:03:01

Tags: python machine-learning keras conv-neural-network

I am getting this error and I am new to ML.

ValueError: Error when checking input: expected Sequence to have 3 dimensions, but got array with shape (500, 400)

Here is the code I am using.

print(X1_Train.shape)
print(X2_Train.shape)
print(y_train.shape)

====================================
Output (I have 500 rows in each):
(500, 400)
(500, 1500)
(500,)

400 => timesteps (below)
1500 => n (below)
====================================


from keras.layers import Input, Conv1D, MaxPool1D, Dropout, Flatten, Dense, concatenate
from keras.models import Sequential, Model

timesteps = 50 * 8  # 400
n = 50 * 30         # 1500

def createClassifier():
    sequence = Input(shape=(timesteps, 1), name='Sequence')
    features = Input(shape=(n,), name='Features')

    conv = Sequential()
    conv.add(Conv1D(10, 5, activation='relu', input_shape=(timesteps, 1)))
    conv.add(Conv1D(10, 5, activation='relu'))
    conv.add(MaxPool1D(2))
    conv.add(Dropout(0.5))

    conv.add(Conv1D(5, 6, activation='relu'))
    conv.add(Conv1D(5, 6, activation='relu'))
    conv.add(MaxPool1D(2))
    conv.add(Dropout(0.5))
    conv.add(Flatten())
    part1 = conv(sequence)

    merged = concatenate([part1, features])

    final = Dense(512, activation='relu')(merged)
    final = Dropout(0.5)(final)
    final = Dense(num_class, activation='softmax')(final)

    model = Model(inputs=[sequence, features], outputs=[final])
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = createClassifier()
# print(model.summary())
history = model.fit([X1_Train, X2_Train], y_train, epochs=5)

Any insight would be appreciated. Thanks in advance.

1 Answer:

Answer 0 (score: 2):

Two things -

Conv1D layers expect their input to be 3-dimensional, (batch_size, steps, channels), which in your case means (500, 400, 1).
You need to reshape your input, adding another axis of size 1 (this does not change anything in the data) - see the short sketch after these two points.

You are trying to use multiple inputs, for which the Sequential API is not the best choice. I would suggest using the Functional API.
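
A minimal sketch of that reshape, assuming X1_Train is the (500, 400) NumPy array from the question (dummy data is used here so the snippet runs on its own):

import numpy as np

# Dummy data with the same shape as in the question.
X1_Train = np.ones((500, 400))

# Add a trailing channel axis of size 1: (500, 400) -> (500, 400, 1).
# Only the shape changes; the values stay the same.
X1_Train = np.expand_dims(X1_Train, axis=-1)  # same effect as X1_Train.reshape((500, 400, 1))
print(X1_Train.shape)  # (500, 400, 1)

The full working code below does the same thing with reshape right before model.fit.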

Edit: Regarding your comment - I am not sure what you did wrong, but here is a working version of the code (with dummy data) with that adjustment:

import keras

import numpy as np



X1_Train = np.ones((500,400))
X2_Train = np.ones((500,1500))
y_train = np.ones((500))
print(X1_Train.shape)
print(X2_Train.shape)
print(y_train.shape)

num_class = 1


timesteps = 50 * 8
n = 50 * 30

def createClassifier():
    sequence = keras.layers.Input(shape=(timesteps, 1), name='Sequence')
    features = keras.layers.Input(shape=(n,), name='Features')

    conv = keras.Sequential()
    conv.add(keras.layers.Conv1D(10, 5, activation='relu', input_shape=(timesteps, 1)))
    conv.add(keras.layers.Conv1D(10, 5, activation='relu'))
    conv.add(keras.layers.MaxPool1D(2))
    conv.add(keras.layers.Dropout(0.5))

    conv.add(keras.layers.Conv1D(5, 6, activation='relu'))
    conv.add(keras.layers.Conv1D(5, 6, activation='relu'))
    conv.add(keras.layers.MaxPool1D(2))
    conv.add(keras.layers.Dropout(0.5))
    conv.add(keras.layers.Flatten())
    part1 = conv(sequence)

    merged = keras.layers.concatenate([part1, features])

    final = keras.layers.Dense(512, activation='relu')(merged)
    final = keras.layers.Dropout(0.5)(final)
    final = keras.layers.Dense(num_class, activation='softmax')(final)

    model = keras.Model(inputs=[sequence, features], outputs=[final])
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = createClassifier()
# print(model.summary())
X1_Train = X1_Train.reshape((500,400,1))
history = model.fit([X1_Train, X2_Train], y_train, epochs=5)

Output:

Using TensorFlow backend.
(500, 400)
(500, 1500)
(500,)
Epoch 1/5
500/500 [==============================] - 1s 3ms/step - loss: 1.1921e-07 - acc: 1.0000
Epoch 2/5
500/500 [==============================] - 0s 160us/step - loss: 1.1921e-07 - acc: 1.0000
Epoch 3/5
500/500 [==============================] - 0s 166us/step - loss: 1.1921e-07 - acc: 1.0000
Epoch 4/5
500/500 [==============================] - 0s 154us/step - loss: 1.1921e-07 - acc: 1.0000
Epoch 5/5
500/500 [==============================] - 0s 157us/step - loss: 1.1921e-07 - acc: 1.0000