Getting the output of layers from a non-sequential computational graph in Keras

Posted: 2018-06-11 15:32:49

Tags: python keras layer

I have been experimenting with a variational autoencoder implemented in Keras, based on https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py.

I am running into problems getting the outputs of individual layers from a non-sequential computational graph in Keras, such as this variational autoencoder.

The encoder is as follows:

from keras.layers import Input, Dense, Lambda
from keras.models import Model

inputs = Input(shape=(input_shape,), name='encoder_input')
x = Dense(intermediate_dim, activation='relu')(inputs)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)
# `sampling` is the reparameterization helper from the linked example (sketched below)
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
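
For completeness, `sampling` is the reparameterization helper from the linked example; a minimal sketch of it, as I recall it from that script, is:

from keras import backend as K

def sampling(args):
    # Reparameterization trick: z = mean + sigma * epsilon
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    # epsilon is drawn from a standard normal with the same shape as z_mean
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon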

The decoder is as follows:

latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
x = Dense(intermediate_dim, activation='relu')(latent_inputs)
outputs = Dense(original_dim, activation='sigmoid', name='decoder_output')(x)
# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')

The variational autoencoder (VAE) model is as follows:

# instantiate VAE model
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs, name='vae_mlp')

For simplicity, I just did the following:

vae.compile(optimizer='adam', loss='binary_crossentropy')
vae.fit(x=x_train, y=x_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, x_test))
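
(For reference, the linked example instead attaches the full VAE objective, reconstruction plus KL divergence, via add_loss; a sketch of that part, as I remember it, is below. I dropped it here to keep things simple.)

from keras.losses import binary_crossentropy
from keras import backend as K

# Reconstruction term (scaled by the input dimensionality) plus the KL divergence term
reconstruction_loss = binary_crossentropy(inputs, outputs) * original_dim
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = -0.5 * K.sum(kl_loss, axis=-1)
vae.add_loss(K.mean(reconstruction_loss + kl_loss))
vae.compile(optimizer='adam')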

I can get the outputs of the decoder's layers, for example, as follows:

decoder.summary()
intermediate_layer_outputs = [layer.output for layer in decoder.layers]
intermediate_layer_model = Model(inputs=decoder.get_input_at(0), outputs=intermediate_layer_outputs)
decoder_input = np.random.normal(0,10,size=(3,2))
intermediate_output = intermediate_layer_model.predict(x=decoder_input)
intermediate_output
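
Equivalently, the same activations can be pulled with a backend function instead of building a second Model; a sketch (the name get_decoder_activations is mine, assuming the TensorFlow backend):

from keras import backend as K

# Callable mapping the decoder's input to the output of every decoder layer
get_decoder_activations = K.function([decoder.get_input_at(0)],
                                     [layer.output for layer in decoder.layers])
activations = get_decoder_activations([decoder_input])  # one array per decoder layer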

The layer outputs are:

[array([[  2.74899244,   3.58746958],
        [ -3.37609863,   3.22382379],
        [  6.70482016,  13.96735668]], dtype=float32),
 array([[ 0.        ,  0.64544445,  0.45420599, ...,  0.        ,
          0.72464317,  1.18030334],
        [ 1.79546463,  0.        ,  1.65797734, ...,  1.93947244,
          0.        ,  0.15902017],
        [ 0.        ,  1.51342845,  1.34246469, ...,  0.        ,
          2.08303833,  3.98163915]], dtype=float32),
 array([[  4.30661340e-11,   4.72995476e-10,   1.68229250e-10, ...,
           4.98356023e-10,   6.42546918e-11,   2.90629031e-11],
        [  9.11776947e-13,   4.95661184e-13,   4.81620494e-13, ...,
           2.19281014e-13,   1.79197141e-12,   6.77271390e-13],
        [  1.75418052e-27,   1.18550066e-24,   1.44973726e-25, ...,
           2.22284772e-24,   5.12405949e-27,   7.81602653e-28]], dtype=float32)]

The VAE summary is as follows:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
encoder_input (InputLayer)   (None, 784)               0         
_________________________________________________________________
encoder (Model)              [(None, 2), (None, 2), (N 403972    
_________________________________________________________________
decoder (Model)              (None, 784)               403728    
=================================================================

From the summary above, I can get the outputs of an intermediate layer, namely the encoder (Model) layer, as follows:

intermediate_layer_outputs = [layer.get_output_at(0) for layer in vae.layers]
intermediate_layer_model = Model(inputs=vae.get_input_at(0), outputs=intermediate_layer_outputs[1])
intermediate_output_1 = intermediate_layer_model.predict(x=x_train[:1])
print('Layer 1 contents:')
display(intermediate_output_1)

and the output is:

Layer 1 contents:
[array([[ 0.22815476, -0.59258038]], dtype=float32),
 array([[-4.97396278, -4.52285957]], dtype=float32),
 array([[ 0.13466805, -0.67670715]], dtype=float32)]

Here is my question: I want to get the output of the last layer, where its input comes from the third output of the encoder. However, I get errors. Here is an example of what I tried:

intermediate_layer_outputs = [layer.get_output_at(0) for layer in vae.layers]
intermediate_layer_model = Model(inputs=encoder(Input(shape=(input_shape,)))[2], outputs=intermediate_layer_outputs[2])
intermediate_layer_outputs

The error is:

/opt/conda/lib/python3.5/site-packages/keras/engine/topology.py in __init__(self, inputs, outputs, name)
1608             # It's supposed to be an input layer, so only one node
1609             # and one tensor output.
-> 1610             assert node_index == 0
1611             assert tensor_index == 0
1612             self.input_layers.append(layer)
AssertionError:

If I replace the middle line with

intermediate_layer_model = Model(inputs=vae.layers[1].get_output_at(0)[2], outputs=intermediate_layer_outputs[2])

I get the following error:

TypeError: Input layers to a `Model` must be `InputLayer` objects. Received inputs: Tensor("z/add:0", shape=(?, 2), dtype=float32). Input 0 (0-based) originates from layer type `Lambda`.

If I try the following:

intermediate_layer_model = Model(inputs=Input(shape=(latent_dim,)), outputs=intermediate_layer_outputs[2])

I get the following error:

RuntimeError: Graph disconnected: cannot obtain value for tensor Tensor("z_sampling:0", shape=(?, 2), dtype=float32) at layer "z_sampling". The following previous layers were accessed without issue: []

My feeling is that this has to do with the non-sequential nature of the computational graph. I would really appreciate help with this!
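
To make the goal concrete, here is a sketch of the kind of probe model I am trying to build, by reusing the VAE's own input rather than the decoder's standalone input (the names vae_input and probe are mine, and I have not verified that this is the right approach):

# Connect the decoder to z as computed from the VAE's own input,
# so the resulting graph is not disconnected from encoder_input.
vae_input = vae.get_input_at(0)        # the original encoder_input tensor
z_in_vae = encoder(vae_input)[2]       # third encoder output (z) computed on that input
decoded_in_vae = decoder(z_in_vae)     # decoder output fed by z
probe = Model(inputs=vae_input, outputs=decoded_in_vae)
probe.predict(x=x_train[:1])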

Thank you very much!

0 Answers:

There are no answers yet.