On TensorFlow 1.4 I'm getting a `You must feed a value for placeholder tensor...` error. The thing is, I am feeding this tensor, like so:
feats = np.reshape(feats, (-1, var1, feat_dim, 1))
_, outlogits = sess.run([train_step, logits], feed_dict={inp_layer: feats,
                                                         targs: targets,
                                                         eta: 1e-4})
(Normally I would do the reshape inside the graph, but for debugging purposes I've pulled it out.)
The placeholder:
inp_layer = tf.placeholder(tf.float32, shape=[None, var1, feat_dim, 1])
The error in full: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,66,200,1]
This error occurs when I run sess.run(tf.global_variables_initializer()), so it hasn't even gotten to the part that involves the placeholders, yet it's complaining about them?!
I think it might have to do with the fact that the size of one of my layers depends on a placeholder (although I do create the weights with validate_shape=False). Will add more code.
edit: example code that fails, with the line I think the problem comes from pointed out (remember, the code fails on the global variables init):
edit2: YUP, the problem is that line. So the question becomes: how do I get a graph in which the dimensions of the weights (and therefore of the outputs) are dynamic?
train_feats = '..'
train_json = '..'
feat_dim = 200
var1 = 20
batch_size = 64
inp_layer = tf.placeholder(tf.float32, shape=[None, var1, feat_dim, 1])
targs = tf.placeholder(tf.int64, shape=[None])
eta = tf.placeholder(tf.float32)
chunk_size = 3
w1 = init_weight([chunk_size, feat_dim, 1, 32])
b1 = tf.zeros([32])
a1 = conv_layer(inp_layer, w1, b1, stride=3, padding='VALID')
chunk_size = tf.shape(a1)[1]  # <==== ! THIS IS THE PROBLEM !
w5 = init_weight([chunk_size, 1, 32, 12])
b5 = tf.zeros([12])
a5 = conv_layer(a1, w5, b5, stride=1, padding='VALID', act=False)
logits_ = tf.reshape(a5, [-1, 12])
softmax = tf.nn.softmax(logits_)
cross_ent = tf.reduce_sum(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targs,
                                                                         logits=logits_))
train_step = tf.train.AdamOptimizer(eta).minimize(cross_ent)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for feats, targets in batch_gen(train_feats, train_json, var1, feat_dim):
        feats = np.reshape(feats, (-1, var1, feat_dim, 1))
        sess.run(train_step, feed_dict={inp_layer: feats,
                                        targs: targets,
                                        eta: 1e-4})
def init_weight(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.01), validate_shape=False)

def conv_layer(x, w, b, stride, padding, act=True):
    # striding over the features
    if act:
        return tf.nn.elu(tf.nn.conv2d(x, w, [1, stride, 1, 1], padding) + b)
    else:
        return tf.nn.conv2d(x, w, [1, stride, 1, 1], padding) + b
Answer (score: 1):
The line

chunk_size = tf.shape(a1)[1]

uses tf.shape, which extracts the runtime shape of a1, not the static shape known at graph-definition time. Since a1 is the result of the convolution between inp_layer and w1, evaluating a1 also requires resolving inp_layer. And since inp_layer is a placeholder, that's where your error comes from.
Since the dimension of a1 you're interested in is its second one, which is known at graph-definition time, you can instead use:

chunk_size = a1.shape[1].value

to extract the correct dimension value.
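As a sanity check on that static dimension, the output length of a VALID convolution can also be computed by hand from the layer's hyperparameters. This is a minimal sketch using the standard VALID-padding output-size formula and the values from the question's code (var1=20, filter length 3, stride 3); the helper name is mine:

```python
# Output length along one axis for a 'VALID' convolution:
#   out = floor((in_len - filter_len) / stride) + 1
def valid_conv_out_len(in_len, filter_len, stride):
    return (in_len - filter_len) // stride + 1

var1, chunk, stride = 20, 3, 3  # values from the question's code
a1_dim1 = valid_conv_out_len(var1, chunk, stride)
print(a1_dim1)  # -> 6, the value a1.shape[1].value should report
```

This is the same number the graph already knows statically, which is why reading a1.shape[1].value works without touching the placeholder.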