Keras Conv1D: Error when checking target: expected dense_output to have 2 dimensions, but got array with shape (35206, 50, 1)

Date: 2019-09-29 11:08:18

Tags: python keras deep-learning conv-neural-network autoencoder

I have this problem: Error when checking target: expected dense_output to have 2 dimensions, but got array with shape (35206, 50, 1). It comes from this code, an autoencoder with Conv1D layers and two outputs; the problem is with the reconstruction output (dense_output):

X_train, X_test, y_train, y_test = train_test_split(X, other_output, test_size=0.3, random_state=42)

TAM_VECTOR = X_train.shape[1]

input_tweet = Input(shape=(TAM_VECTOR,X_train.shape[2]))

encoded = Conv1D(64, kernel_size=1, activation='relu')(input_tweet)
encoded = Conv1D(32, kernel_size=1, activation='relu')(encoded)

decoded = Conv1D(32, kernel_size=1, activation='relu')(encoded)
decoded = Conv1D(64, kernel_size=1, activation='relu')(decoded)
decoded = Flatten()(decoded)
decoded = Dense(TAM_VECTOR, activation='relu', name='dense_output')(decoded)

encoded = Flatten()(encoded)
second_output = Dense(1, activation='linear', name='second_output')(encoded)

autoencoder = Model(inputs=input_tweet, outputs=[decoded, second_output])

autoencoder.compile(optimizer="adam",
                    loss={'dense_output': 'mse', 'second_output': 'mse'},
                    loss_weights={'dense_output': 0.001, 'second_output': 0.999},
                    metrics=["mae"])

autoencoder.fit([X_train], [X_train, y_train], epochs=10, batch_size=32)

The input (X) has shape (50000, 50), which I reshape with:

X = np.reshape(X, (X.shape[0], X.shape[1], -1))

giving (50000, 50, 1).

other_output is:

other_output.shape

(50000, 1)

Here is the model summary:

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_27 (InputLayer)           (None, 50, 1)        0                                            
__________________________________________________________________________________________________
conv1d_105 (Conv1D)             (None, 50, 64)       128         input_27[0][0]                   
__________________________________________________________________________________________________
conv1d_106 (Conv1D)             (None, 50, 32)       2080        conv1d_105[0][0]                 
__________________________________________________________________________________________________
conv1d_107 (Conv1D)             (None, 50, 32)       1056        conv1d_106[0][0]                 
__________________________________________________________________________________________________
conv1d_108 (Conv1D)             (None, 50, 64)       2112        conv1d_107[0][0]                 
__________________________________________________________________________________________________
flatten_42 (Flatten)            (None, 3200)         0           conv1d_108[0][0]                 
__________________________________________________________________________________________________
flatten_43 (Flatten)            (None, 1600)         0           conv1d_106[0][0]                 
__________________________________________________________________________________________________
dense_output (Dense)            (None, 50)           160050      flatten_42[0][0]                 
__________________________________________________________________________________________________
second_output (Dense)           (None, 1)            1601        flatten_43[0][0]                 
==================================================================================================
Total params: 167,027
Trainable params: 167,027
Non-trainable params: 0
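
The mismatch is already visible in this summary: dense_output produces 2-D tensors of shape (None, 50), while the reconstruction target X_train is 3-D. A minimal shape check along these lines (not part of the original post; it assumes the model and arrays defined above) makes the mismatch explicit:

# Compare the model's output shapes with the targets passed to fit()
print(autoencoder.output_shape)      # [(None, 50), (None, 1)]
print(X_train.shape, y_train.shape)  # X_train is 3-D: (num_samples, 50, 1)
# dense_output expects 2-D targets of shape (batch, 50), but the reconstruction
# target X_train is (batch, 50, 1), hence the "expected 2 dimensions" error.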

2 Answers:

Answer 0 (score: 0):

You are training a convolutional autoencoder, so the output layer used for decoding cannot be a Dense layer (it has no channel dimension). Instead, you need a Conv1D layer at the end.

Here is working code:

# Imports needed to run this snippet end to end
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from keras.layers import Input, Conv1D, Flatten, Dense
from keras.models import Model
from keras.utils import plot_model

tf.reset_default_graph()

X = np.random.rand(50000, 50, 1); y = np.linspace(1, 50000, 50000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

TAM_VECTOR = X_train.shape[1]

input_tweet = Input(shape=(TAM_VECTOR, X_train.shape[2]))
encoded = Conv1D(64, kernel_size=1, activation='relu')(input_tweet)
encoded = Conv1D(32, kernel_size=1, activation='relu')(encoded)

decoded = Conv1D(32, kernel_size=1, activation='relu')(encoded)
decoded = Conv1D(64, kernel_size=1, activation='relu')(decoded)
decoded = Conv1D(1, kernel_size=1, activation='sigmoid', name='dense_output')(decoded)

encoded = Flatten()(encoded)
second_output = Dense(1, activation='linear', name='second_output')(encoded)

autoencoder = Model(inputs=input_tweet, outputs=[decoded, second_output])
autoencoder.compile(optimizer='adam', loss='mse')

plot_model(autoencoder, show_shapes=True)
autoencoder.fit(X_train, {'dense_output': X_train, 'second_output': y_train}, epochs=10, batch_size=32)
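
With a single-filter Conv1D as the final layer, the reconstruction output keeps the channel dimension and matches the 3-D target. A quick verification (a minimal sketch, assuming the model built above):

# Both outputs now match the targets passed to fit():
# the reconstruction is (None, 50, 1) and the regression head is (None, 1).
print(autoencoder.output_shape)  # [(None, 50, 1), (None, 1)]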

Answer 1 (score: 0):

Now, changing the last Dense layer to a Conv1D, the code looks like this:

input_tweet = Input(shape=(TAM_VECTOR,X_train.shape[2]))

encoded = Conv1D(64, kernel_size=1, activation='relu')(input_tweet)
encoded = Conv1D(32, kernel_size=1, activation='relu')(encoded)

decoded = Conv1D(32, kernel_size=1, activation='relu')(encoded)
decoded = Conv1D(64, kernel_size=1, activation='relu')(decoded)
decoded = Conv1D(TAM_VECTOR, kernel_size=1, activation='relu', name='decoded_output')(decoded)

encoded = Flatten()(encoded)
second_output = Dense(1, activation='linear', name='second_output')(encoded)

But this error appears:

ValueError: Error when checking target: expected tweet_output to have shape (50, 50) but got array with shape (50, 1)

I have also run into this error in other tests, whenever I place a Dense layer without a Flatten before it.
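
The number of filters in the last Conv1D becomes the channel dimension of its output, so Conv1D(TAM_VECTOR, ...) yields (None, 50, 50) while each reconstruction target is (50, 1). (The error message refers to tweet_output, presumably the layer name used in the actual run; the code above names it decoded_output.) As in Answer 0, using a single filter restores the match; a minimal sketch of that final layer, keeping the layer name from the code above:

# One filter -> output shape (None, 50, 1), matching reconstruction targets of shape (50, 1)
decoded = Conv1D(1, kernel_size=1, activation='relu', name='decoded_output')(decoded)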