I have trained a network with two inputs; it can be used as an autoencoder. In the first part of the network the inputs are fed in and processed, and after passing through a Gaussian noise layer the second part of the network is used. During training the whole network is trained together, but for testing I need to split it into two parts: the first part takes the two inputs, and the second part takes a single input, which is the output of the first part. When I try to build a separate model for each part, it says the second part has no input. Can you tell me how to do this? Is it possible to build an identical model for the second part and then use the weights learned by the full network? I will post the code shortly. I am working in Keras. Thanks.
My code is:
import numpy as np
import keras as Kr
from keras.layers import Input, Conv2D, BatchNormalization, GaussianNoise
from keras.models import Model

#-----------------building the watermark data (train / validation)-------------
wt_random=np.random.randint(2, size=(49999,4,4))
w_expand=wt_random.astype(np.float32)
wv_random=np.random.randint(2, size=(9999,4,4))
wv_expand=wv_random.astype(np.float32)
x,y,z=w_expand.shape
w_expand=w_expand.reshape((x,y,z,1))
x,y,z=wv_expand.shape
wv_expand=wv_expand.reshape((x,y,z,1))
#-----------------building w test---------------------------------------------
w_test = np.random.randint(2,size=(1,4,4))
w_test=w_test.astype(np.float32)
w_test=w_test.reshape((1,4,4,1))
#-----------------------encoder------------------------------------------------
#------------------------------------------------------------------------------
wtm=Input((4,4,1))
image = Input((28, 28, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e')(conv2)
BN=BatchNormalization()(conv3)
encoded = Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I')(BN)
wpad=Kr.layers.Lambda(lambda xy: xy[0] + Kr.backend.spatial_2d_padding(xy[1], padding=((0, 24), (0, 24))))
encoded_merged=wpad([encoded,wtm])
#-----------------------decoder------------------------------------------------
deconv1 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl1d')(encoded_merged)
deconv2 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl3d')(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl4d')(deconv3)
BNd=BatchNormalization()(deconv4)
decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(BNd)
model=Model(inputs=[image,wtm],outputs=decoded)
#-----------------------watermark extraction network---------------------------
decoded_noise = GaussianNoise(0.5)(decoded)
convw1 = Conv2D(64, (5,5), activation='relu', name='conl1w')(decoded_noise)#24
convw2 = Conv2D(64, (5,5), activation='relu', name='convl2w')(convw1)#20
convw3 = Conv2D(64, (5,5), activation='relu' ,name='conl3w')(convw2)#16
convw4 = Conv2D(64, (5,5), activation='relu' ,name='conl4w')(convw3)#12
convw5 = Conv2D(64, (5,5), activation='relu', name='conl5w')(convw4)#8
convw6 = Conv2D(64, (5,5), activation='relu', name='conl6w')(convw5)#4
convw7 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl7w',dilation_rate=(2,2))(convw6)#4
convw8 = Conv2D(64, (5,5), activation='relu', padding='same',name='conl8w',dilation_rate=(2,2))(convw7)#4
convw9 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl9w',dilation_rate=(2,2))(convw8)#4
convw10 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl10w',dilation_rate=(2,2))(convw9)#4
BNed=BatchNormalization()(convw10)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(BNed)
model2=Model(inputs=decoded_noise,outputs=pred_w)   # <-- this line raises the error below
w_extraction=Model(inputs=[image,wtm],outputs=[decoded,pred_w])
w_extraction.summary()
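A minimal sketch of how the combined w_extraction model could be compiled and trained once it builds, assuming MSE on the image output, binary cross-entropy on the watermark output, and image arrays x_train / x_val that are not part of the posted code:

# hedged sketch; optimizer, losses, and x_train / x_val are assumptions
w_extraction.compile(
    optimizer='adam',
    loss={'decoder_output': 'mse',                     # reconstruct the cover image
          'reconstructed_W': 'binary_crossentropy'},   # recover the binary watermark
    loss_weights={'decoder_output': 1.0, 'reconstructed_W': 1.0})
w_extraction.fit([x_train, w_expand], [x_train, w_expand],   # x_train: (49999, 28, 28, 1), assumed
                 epochs=10, batch_size=32,
                 validation_data=([x_val, wv_expand], [x_val, wv_expand]))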
The error:
Traceback (most recent call last):
  File "", line 55
    model2 = Model(inputs=decoded_noise, outputs=pred_w)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 93, in __init__
    self._init_graph_network(*args, **kwargs)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 231, in _init_graph_network
    self.inputs, self.outputs)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 1443, in _map_graph_network
    str(layers_with_complete_input))
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_14:0", shape=(?, 28, 28, 1), dtype=float32) at layer "input_14". The following previous layers were accessed without issue: []
Answer:
Ideally, you should create the separate models first:
net1 = createNet1()
net2 = createNet2()
net2OutFrom1 = net2(net1.output)
entireModel = Model(net1.input, net2OutFrom1)
Then you train entireModel, and afterwards you can use net1 and net2 separately without any trouble.
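Applied to the network in the question, that first approach could look roughly like the minimal sketch below. The layer stacks are heavily abbreviated and the exact layer arguments are assumptions; only the input shapes, the padding Lambda, and the GaussianNoise placement follow the posted code:

import keras as Kr
from keras.layers import Input, Conv2D, BatchNormalization, GaussianNoise
from keras.models import Model

def create_net1():
    # encoder + decoder part: takes the image and the watermark, returns the marked image
    image = Input((28, 28, 1))
    wtm = Input((4, 4, 1))
    x = Conv2D(64, (5, 5), activation='relu', padding='same')(image)
    x = BatchNormalization()(x)
    encoded = Conv2D(1, (5, 5), activation='relu', padding='same')(x)
    merged = Kr.layers.Lambda(
        lambda xy: xy[0] + Kr.backend.spatial_2d_padding(xy[1], padding=((0, 24), (0, 24))))([encoded, wtm])
    x = Conv2D(64, (5, 5), activation='elu', padding='same')(merged)
    decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(x)
    return Model([image, wtm], decoded)

def create_net2():
    # extraction part: takes a (noisy) marked image, returns the recovered watermark
    noisy = Input((28, 28, 1))
    x = Conv2D(64, (5, 5), activation='relu')(noisy)
    x = BatchNormalization()(x)
    pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W')(x)
    return Model(noisy, pred_w)

net1 = create_net1()
net2 = create_net2()
noisy_out = GaussianNoise(0.5)(net1.output)         # noise layer sits between the two parts
entire_model = Model(net1.input, net2(noisy_out))   # train this; net1 and net2 share its weights

After training entire_model, net1.predict([...]) and net2.predict(...) work directly, because the two sub-models own the very layer objects that entire_model trained.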
Since you built everything as a single model instead, you need to create a new input:
net2Input = Input(input_shape)
Then pass it through all the layers of the second network:
out = originalNet.layers[firstLayerOfNet2](net2Input)
out = originalNet.layers[secondLayerOfNet2](out)
out = originalNet.layers[thirdLayerOfNet2](out)
....
Then create the second network separately:
net2 = Model(net2Input, out)
You can still create the first network easily:
net1 = Model(originalNet.input, originalNet.layers[lastLayerOfNet1].output)
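For the concrete w_extraction model from the question, a minimal sketch of that layer-reuse idea might look like this; it assumes the second part is exactly everything after the GaussianNoise layer, which holds for this straight-chain graph:

from keras.layers import Input, GaussianNoise
from keras.models import Model

# find the GaussianNoise layer inside the trained combined model
noise_idx = next(i for i, layer in enumerate(w_extraction.layers)
                 if isinstance(layer, GaussianNoise))

net2_input = Input((28, 28, 1))          # stands in for decoded_noise
out = net2_input
for layer in w_extraction.layers[noise_idx + 1:]:
    out = layer(out)                     # reuse the trained layers; weights are shared, not copied
net2 = Model(net2_input, out)

# the first part can be sliced out of the combined model directly
net1 = Model(w_extraction.input, w_extraction.get_layer('decoder_output').output)

Because net2 reuses the layer objects of w_extraction, any weights learned while training the combined model are picked up automatically, which is what the question asks for.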