I am trying to build an autoencoder model following the example given in this blog.
from keras.layers import Input, Dense
from keras.models import Model

# encoding_dim is assumed to be 32 here, matching the innermost encoding layer below
encoding_dim = 32

input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
# this model maps an input to its reconstruction
autoencoder = Model(input=input_img, output=decoded)
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(input=encoded_input, output=decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
The modification I made is decoder = Model(input=encoded_input, output=decoded); in the original post this line was written as decoder = Model(input=encoded_input, output=decoder_layer(encoded_input)). The original version works when there is only a single hidden layer, which is why I made the change above. However, compiling the model above produces the following error message. Any suggestions are greatly appreciated.
Traceback (most recent call last):
  File "train.py", line 37, in <module>
    decoder = Model(input=encoded_input, output=decoded)
  File "tfw/lib/python3.4/site-packages/Keras-1.0.3-py3.4.egg/keras/engine/topology.py", line 1713, in __init__
    str(layers_with_complete_input))
Exception: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, 784), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
Answer 0 (score: 0)
I ran into the same problem and managed to put together a messy but working solution. Change the line where you define the decoder to:
decoder = Model(input=encoded_input, output=autoencoder.layers[6](autoencoder.layers[5](autoencoder.layers[4](encoded_input))))
The error you are seeing indicates a disconnected graph. In this case, the input tensor defined as encoded_input is fed directly into the final output tensor, i.e. the last decoding layer (the Dense layer with 784 units), and the intermediate tensors (the Dense layers with 64 and 128 units) are skipped. My solution nests the layer calls so that each layer's output is the input of the next, with the innermost call taking encoded_input as its input.
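If it helps readability, the same fix can be written one layer per line. This is just a sketch of the equivalent code, assuming the Keras 1.x API used in the question and that autoencoder.layers[0] is the InputLayer, layers 1-3 are the encoding Dense layers, and layers 4-6 are the decoding Dense layers; the intermediate variable name deco is illustrative.

# Reuse the decoding layers of the trained autoencoder, chained one call per line
deco = autoencoder.layers[4](encoded_input)  # Dense(64, activation='relu')
deco = autoencoder.layers[5](deco)           # Dense(128, activation='relu')
deco = autoencoder.layers[6](deco)           # Dense(784, activation='sigmoid')
decoder = Model(input=encoded_input, output=deco)

Because the calls start from encoded_input and pass through every decoding layer in order, the resulting graph is fully connected and the decoder model compiles without the "Graph disconnected" exception.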