Dimension error with Keras convolutional layers

Date: 2018-12-29 19:22:28

Tags: keras deep-learning conv-neural-network dimensions keras-layer

The data variables have the following dimensions:

print(x_train.shape) is (1750, 784)

print(y_train.shape) is (1750, 10)

print(x_test.shape) is (749, 784)

print(y_test.shape) is (749, 10)

orig_dims is 784

inner_dims is 10

The code is below. The error I get at the line 'validation_data=(x_test, [y_test, x_test]))' is:

InvalidArgumentError: Incompatible shapes: [1750,784] vs. [1750,28,28,1]
[[Node: loss_18/dense_400_loss/logistic_loss/mul = Mul[T=DT_FLOAT, _class=["loc:@train...ad/Reshape"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss_18/dense_400_loss/Log, _arg_dense_400_target_0_2)]]
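For context, here is a minimal, standalone sketch (not the original code; the layer names, the Dense stand-in for the encoder stack, and the zero-filled arrays are hypothetical) that compares the model's declared output shapes with the target arrays before fit is called:

import numpy as np
from keras.layers import Input, Flatten, Dense
from keras.models import Model

# Stand-in model with two 784-unit outputs, mirroring `encoded` and `output_layer`
inp = Input(shape=(28, 28, 1))
h = Dense(10)(Flatten()(inp))
encoded_out = Dense(784, name='encoded_out')(h)
recon_out = Dense(784, name='recon_out')(h)
model = Model(inp, [encoded_out, recon_out])

# Dummy targets with the same shapes as y_train and the reshaped x_train
y_demo = np.zeros((1750, 10))
x_demo = np.zeros((1750, 28, 28, 1))

print(model.output_shape)          # [(None, 784), (None, 784)]
print(y_demo.shape, x_demo.shape)  # (1750, 10) (1750, 28, 28, 1)
# Neither target matches its paired output: y_demo has 10 columns against a
# 784-unit output, and x_demo is 4-D while the reconstruction output is flat.

Printed this way, both model outputs are (None, 784) while the targets are (1750, 10) and (1750, 28, 28, 1), which is the same [1750,784] vs. [1750,28,28,1] shape conflict that the error above reports.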

from keras.layers import Input, Dense, Conv2D, BatchNormalization, Activation, MaxPooling2D, Flatten
from keras.models import Model
from keras import losses

x_test = x_test.reshape(749, 28, 28, 1)
x_train = x_train.reshape(1750, 28, 28, 1)

# Input layer and first convolutional block
input_layer = Input(shape=(28, 28, 1))
x = Conv2D(64, (3, 3), strides=(1, 1), name='layer_conv1', padding='same',
           input_shape=(28, 28, 1))(input_layer)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D((2, 2), name='maxPool1')(x)
x1 = Flatten()(x)
flatLayer1 = Dense(64, activation='relu', name='fc0')(x1)

encoded_layer = Dense(inner_dims, activation=activation_f)(flatLayer1)

# Create the remaining (layers - 1) dense encoder layers
for i in xrange(layers-1):
    encoded_layer = Dense(inner_dims, activation=activation_f)(encoded_layer)

#Initialize shared hidden state layer
hidden_state = Dense(orig_dims,activation=activation_f,name='h')

#Create latent layer to output
encoded = hidden_state(encoded_layer)

#Create latent layer for decoder
encoded_output = hidden_state(encoded_layer)

#Create decoder
decoded = Dense(inner_dims, activation=activation_f)(encoded_output)
for i in xrange(layers-1):
    decoded = Dense(inner_dims, activation=activation_f)(decoded)

output_layer = Dense(orig_dims, activation=activation_f)(decoded)
###output_layer = Dense(10, activation=activation_f)(decoded)

encoder = Model(input_layer, encoded)
encoder_2 = Model(input_layer,encoded_layer)

# Loss wrappers for Keras
def custom_loss1(y_true, y_pred):
    # Note: compares y_true against the global `encoded` tensor, not y_pred
    bcro = losses.binary_crossentropy(y_true, encoded)
    return bcro


def custom_loss2(y_true,y_pred):
    recon_loss = losses.binary_crossentropy(y_true, y_pred)
    return recon_loss

autoencoder = Model(input_layer, outputs=[encoded, output_layer])
autoencoder.compile(optimizer='adadelta',
                    loss=[custom_loss1, custom_loss2],
                    loss_weights=[0.1, 1.])
autoencoder.fit(x_train, [y_train, x_train],
                batch_size=batch_size,
                epochs=epochs,
                shuffle=True,
                validation_data=(x_test, [y_test, x_test]))

How can I correct this?

0 Answers:

No answers yet.