I have the following function:
def forward_propagation(self, x):
    # The total number of time steps
    T = len(x)
    # During forward propagation we save all hidden states in s because we need them later.
    # We add one additional element for the initial hidden state, which we set to 0.
    s = tf.zeros([T + 1, self.hidden_dim])
    # The outputs at each time step. Again, we save them for later.
    o = tf.zeros([T, self.word_dim])
    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    c = tf.placeholder(tf.float32)
    s_t = tf.nn.tanh(a + tf.reduce_sum(tf.multiply(b, c)))
    o_t = tf.nn.softmax(tf.reduce_sum(tf.multiply(a, b)))
    # For each time step...
    with tf.Session() as sess:
        s = sess.run(s)
        o = sess.run(o)
        for t in range(T):
            # Note that we are indexing U by x[t]. This is the same as multiplying U with a one-hot vector.
            s[t] = sess.run(s_t, feed_dict={a: self.U[:, x[t]], b: self.W, c: s[t-1]})
            o[t] = sess.run(o_t, feed_dict={a: self.V, b: s[t]})
    return [o, s]
self.U, self.V and self.W are numpy arrays. I am trying to compute the softmax on the graph with
o_t = tf.nn.softmax(tf.reduce_sum(tf.multiply(a, b)))
and it raises an error on this line:
o[t] = sess.run(o_t, feed_dict={a: self.V, b: s[t]})
The error is:
InvalidArgumentError (see above for traceback): Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0
[[Node: Slice = Slice[Index=DT_INT32, T=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](Shape_1, Slice/begin, Slice/size)]]
How should I compute the softmax in TensorFlow?
Answer 0 (score: 2)
The problem arises because you are calling tf.nn.softmax on the result of tf.reduce_sum, which is a scalar. The softmax function therefore fails, because a scalar is not a valid input argument. Did you mean to use tf.matmul instead of the combination of tf.reduce_sum and tf.multiply?
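For illustration, here is a minimal sketch of that replacement (the names W and s_prev and the shapes are assumptions, not the asker's actual dimensions). Since tf.matmul expects rank-2 inputs, the vector is lifted to a column matrix first:

import tensorflow as tf

# hypothetical shapes: W is (H, H), s_prev is the previous hidden state of length H
W = tf.placeholder(tf.float32, shape=[None, None])
s_prev = tf.placeholder(tf.float32, shape=[None])

# a true matrix-vector product: (H, H) x (H, 1) -> (H, 1), then drop the extra axis
Ws = tf.squeeze(tf.matmul(W, tf.expand_dims(s_prev, 1)), axis=[1])

# tanh now receives a vector instead of a scalar
s_t = tf.nn.tanh(Ws)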
Edit: TensorFlow does not provide an equivalent of np.dot out of the box. If you want to compute the dot product of a matrix and a vector, you have to sum over the shared index explicitly:
# equivalent to np.dot(a, b) if a.ndim == 2 and b.ndim == 1
c = tf.reduce_sum(a * b, axis=1)
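As a quick check (the concrete values here are purely illustrative), this reproduces np.dot for a 2x3 matrix and a length-3 vector:

import numpy as np
import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[2, 3])
b = tf.placeholder(tf.float32, shape=[3])
# b broadcasts across the rows of a; summing over axis 1 yields the matrix-vector product
c = tf.reduce_sum(a * b, axis=1)

A = np.array([[1., 2., 3.], [4., 5., 6.]], dtype=np.float32)
v = np.array([1., 0., 2.], dtype=np.float32)

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: A, b: v}))  # [ 7. 16.], same as np.dot(A, v)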