I need to access each value in a tensor and then use the values extracted from the first tensor to produce a new tensor. I got a suggestion in this post: how do I solve this error : AttributeError: 'NoneType' object has no attribute '_inbound_nodes'?, and the suggested code is:
rep=Kr.layers.Lambda(lambda x:Kr.backend.tile(x,[1, 28, 28, 1]))
a_1 = Kr.layers.Lambda(lambda x: x[:, 1, 1, :])(wtm)
a=rep(Kr.layers.Reshape([1,1,1])(a_1))
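As a sanity check, the suggested snippet can be reproduced in a tiny standalone model, isolated from the rest of my network (this sketch uses tf.keras and tf.tile in place of Kr.backend.tile; the watermark array here is hypothetical, with a 1 placed at the sliced position):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Standalone model containing only the slice -> reshape -> tile chain
wtm = layers.Input((4, 4, 1))
a_1 = layers.Lambda(lambda x: x[:, 1, 1, :])(wtm)          # shape (None, 1)
rep = layers.Lambda(lambda x: tf.tile(x, [1, 28, 28, 1]))  # broadcast over 28x28
a = rep(layers.Reshape([1, 1, 1])(a_1))                    # shape (None, 28, 28, 1)
m = Model(wtm, a)

# Hypothetical watermark with a 1 at row 1, col 1 (the position the Lambda reads)
w = np.zeros((1, 4, 4, 1), np.float32)
w[0, 1, 1, 0] = 1.0

out = m.predict(w)
print(out.shape, out.min(), out.max())  # (1, 28, 28, 1) 1.0 1.0
```

In isolation the chain behaves as expected, which suggests the problem lies in how the layers are wired into the full model or in the data actually fed at test time.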
But when I feed the test wtm into the network during the test phase and then check the output of each layer, it always produces zeros, even though wtm contains 0s and 1s. I would like an explanation for this. Suppose
wtm=
1 1 0 0
0 1 0 1
1 0 1 0
1 1 1 0
Given the code above, I expect a_1 to be 1 and a to be a tensor of shape (1, 28, 28, 1) filled with the value 1, but instead it is a tensor filled with 0. Can you tell me what the problem is? Did I do something wrong?
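To rule out the slice/reshape/tile arithmetic itself, the same chain can be checked in plain NumPy (np.tile standing in for Kr.backend.tile; this is only a sketch outside the Keras graph, using the sample wtm above):

```python
import numpy as np

# The sample wtm above, as a batch of one 4x4x1 array
wtm_np = np.array([[1, 1, 0, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [1, 1, 1, 0]], dtype=np.float32).reshape((1, 4, 4, 1))

a_1 = wtm_np[:, 1, 1, :]           # value at row 1, col 1 -> 1.0
a_1 = a_1.reshape((1, 1, 1, 1))    # same effect as Reshape([1,1,1]) on a batch of 1
a = np.tile(a_1, [1, 28, 28, 1])   # same tiling factors as the Lambda layer

print(a_1.ravel()[0])              # 1.0
print(a.shape, a.min(), a.max())   # (1, 28, 28, 1) 1.0 1.0
```

So the arithmetic matches my expectation: for this wtm, a should be all ones.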
Here is the complete code:
import numpy as np
import keras as Kr
from keras.layers import Input, Conv2D, BatchNormalization, GaussianNoise
from keras.models import Model, load_model
import matplotlib.pyplot as plt

wt_random=np.random.randint(2, size=(49999,4,4))
w_expand=wt_random.astype(np.float32)
wv_random=np.random.randint(2, size=(9999,4,4))
wv_expand=wv_random.astype(np.float32)
x,y,z=w_expand.shape
w_expand=w_expand.reshape((x,y,z,1))
x,y,z=wv_expand.shape
wv_expand=wv_expand.reshape((x,y,z,1))
#-----------------building w test---------------------------------------------
w_test = np.random.randint(2,size=(1,4,4))
w_test=w_test.astype(np.float32)
w_test=w_test.reshape((1,4,4,1))
#-----------------------encoder------------------------------------------------
#------------------------------------------------------------------------------
wtm=Input((4,4,1))
image = Input((28, 28, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e',dilation_rate=(2,2))(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e',dilation_rate=(2,2))(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e',dilation_rate=(2,2))(conv2)
BN=BatchNormalization()(conv3)
encoded = Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I')(BN)
#-----------------------adding w---------------------------------------
rep=Kr.layers.Lambda(lambda x:Kr.backend.tile(x,[1, 28, 28, 1]), name='replayer')
a_1 = Kr.layers.Lambda(lambda x: x[:, 1, 1, :])(wtm)
a=rep(Kr.layers.Reshape([1,1,1])(a_1))
encoded_merged=wpad([encoded,a])  # wpad is defined elsewhere (not shown here)
#-----------------------decoder------------------------------------------------
#------------------------------------------------------------------------------
deconv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1d',dilation_rate=(2,2))(encoded_merged)
deconv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2d',dilation_rate=(2,2))(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='relu',padding='same', name='convl3d',dilation_rate=(2,2))(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='relu',padding='same', name='convl4d',dilation_rate=(2,2))(deconv3)
BNd=BatchNormalization()(deconv4)
decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(BNd)
model=Model(inputs=[image,wtm],outputs=decoded)
decoded_noise = GaussianNoise(0.5)(decoded)
#----------------------w extraction------------------------------------
convw1 = Conv2D(64, (5,5), activation='relu', name='conl1w')(decoded_noise)#24
convw2 = Conv2D(64, (5,5), activation='relu', name='convl2w')(convw1)#20
convw3 = Conv2D(64, (5,5), activation='relu' ,name='conl3w')(convw2)#16
convw4 = Conv2D(64, (5,5), activation='relu' ,name='conl4w')(convw3)#12
convw5 = Conv2D(64, (5,5), activation='relu', name='conl5w')(convw4)#8
convw6 = Conv2D(64, (5,5), activation='relu', name='conl6w')(convw5)#4
BNed=BatchNormalization()(convw6)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(BNed)
w_extraction=Model(inputs=[image,wtm],outputs=[decoded,pred_w])
After training, I saved the model and then loaded it like this:
modl=load_model('Desktop/model.h5')
layer_name = 'replayer'
intermediate_layer_model = Model(inputs=modl.input,outputs=modl.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict([x_test[8000:8001],w_test])
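To narrow down whether the zeros come from the layer or from the data actually being fed, I would first check which single watermark value the Lambda layer reads from w_test, since the replayer output can only ever be a tiling of that one entry (a standalone diagnostic sketch; w_test is rebuilt here the same way as above):

```python
import numpy as np

# Rebuild a test watermark exactly as in the code above
w_test = np.random.randint(2, size=(1, 4, 4)).astype(np.float32).reshape((1, 4, 4, 1))

# The Lambda layer reads only this single entry; if it happens to be 0,
# an all-zero tiled output is the correct result for this w_test.
picked = w_test[:, 1, 1, :]
print('value fed to the tile layer:', picked.ravel()[0])

# What the replayer output should then look like
expected = np.tile(picked.reshape((1, 1, 1, 1)), [1, 28, 28, 1])
print('expected output is all', expected.ravel()[0])
```

If picked is 1.0 here yet the network still outputs zeros, the problem is in the model wiring rather than the data.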
fig = plt.figure(figsize=(20, 20))
rows = 8
columns = 8
first = intermediate_output
for i in range(1, columns*rows +1):
    img = intermediate_output[0,:,:,i-1]
    fig.add_subplot(rows, columns, i)
    plt.imshow(img, interpolation='nearest',cmap='gray')
    plt.axis('off')
plt.show()
But it shows 0. I changed the index, but the output is the same. Can you tell me why this happens? Is the code wrong, or is it my mistake? Thanks.