I'm running TensorFlow 1.12 on a GPU in a conda environment. I have several batch normalization layers as part of convolutional blocks defined like this:
Conv = lambda NumFilter, Input, FilterSize=PARAMS['FilterSize']: tf.layers.conv2d(Input, NumFilter, FilterSize, strides=1, activation=None, padding='SAME', use_bias=True, kernel_initializer=PARAMS['KernelInit'])

def OneConv(layer, FilterNum, FilterSize, training):
    activate = tf.nn.relu(layer)
    norm = tf.layers.batch_normalization(activate, axis=-1, training=training)
    conv = Conv(FilterNum, norm, FilterSize)
    return conv

def ConvBlock(BlockInput, name, FilterNum, training):
    with tf.name_scope(name):
        conv1 = OneConv(BlockInput, FilterNum, PARAMS['FilterSize'], training)
        conc1 = tf.concat([BlockInput, conv1], axis=-1)
        conv2 = OneConv(conc1, FilterNum, PARAMS['FilterSize'], training)
        conc2 = tf.concat([BlockInput, conv1, conv2], axis=-1)
        BlockOut = OneConv(conc2, FilterNum, 1, training)
    return BlockOut
which I use to build a Forward function. Whenever I try to test the network with:
X = tf.Variable(np.random.randn(1, 128, 128, 1), dtype=tf.float32)
init = tf.global_variables_initializer()
test = Forward(X)
with tf.Session() as sess:
    init.run()
    print(test.eval())
I get this error:
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value BatchNorm_12/beta
  [[node BatchNorm_12/beta/read (defined at /home/riccardo/.anaconda3/envs/Tenso/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/variables.py:277) = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
  [[{{node ConvBlockUp1_1/conv2d_2/BiasAdd/_7}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_520_ConvBlockUp1_1/conv2d_2/BiasAdd", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
I don't understand why the global initializer fails to initialize the batch norm layers. I also tried passing specific initializers for the batch norm parameters to tf.layers.batch_normalization, but that changed nothing. Any idea what I'm missing?
Answer 0 (score: 1)
The variable initializer needs to be created after the model is built. Try:
X = tf.Variable(np.random.randn(1, 128, 128, 1), dtype=tf.float32)
test = Forward(X)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    print(test.eval())
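The reason the order matters: in graph-mode TF 1.x, tf.global_variables_initializer() builds an op that covers only the variables existing in the graph at the moment it is called. Since Forward(X) creates the batch norm variables (e.g. BatchNorm_12/beta) after init was built, they are never initialized. The sketch below is a pure-Python analogy of that snapshot behaviour, not TensorFlow code; the Graph class and its method names are made up for illustration:

```python
# Hypothetical analogy: the "initializer" captures the variable list
# at creation time, just as tf.global_variables_initializer() does.

class Graph:
    def __init__(self):
        self.variables = []  # plays the role of the global variable collection

    def make_variable(self, name):
        var = {"name": name, "initialized": False}
        self.variables.append(var)
        return var

    def global_variables_initializer(self):
        snapshot = list(self.variables)  # snapshot of *currently existing* variables
        def run():
            for v in snapshot:
                v["initialized"] = True
        return run

g = Graph()
x = g.make_variable("X")
init = g.global_variables_initializer()      # created too early...
beta = g.make_variable("BatchNorm_12/beta")  # ...so this later variable is missed
init()

print(x["initialized"])     # True
print(beta["initialized"])  # False -> "Attempting to use uninitialized value"
```

Moving the global_variables_initializer() call after the model-building code (as in the answer above) makes the snapshot include every variable, batch norm parameters included.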