I would like to modify the code at http://www.brightideasinanalytics.com/rnn-pretrained-word-vectors/, which predicts the next word, into code that predicts the answer to a question.
Here is an excerpt of the code I am having trouble with:
import tensorflow as tf
import tensorflow.contrib as ct

def NHIDDEN():
    return 1

g = tf.Graph()
tf.reset_default_graph()

with g.as_default():
    # lines 97-104 of original code
    # RNN output node weights and biases
    weights = {'out': tf.Variable(tf.random_normal([NHIDDEN(), embedding_dim]))}
    biases = {'out': tf.Variable(tf.random_normal([embedding_dim]))}

    with tf.name_scope("embedding"):
        W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]),
                        trainable=False, name="W")
        embedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim])
        embedding_init = W.assign(embedding_placeholder)
        preimage = tf.nn.embedding_lookup(W, x2)

    # lines 107-119 of original
    # reshape input data
    x_unstack = tf.unstack(preimage)

    # create RNN cells
    rnn_cell = ct.rnn.MultiRNNCell([ct.rnn.BasicLSTMCell(NHIDDEN()),
                                    ct.rnn.BasicLSTMCell(NHIDDEN())])
    outputs, states = ct.rnn.static_rnn(rnn_cell, x_unstack, dtype=tf.float32)

    # capture only the last output
    pred = tf.matmul(outputs[-1], weights['out']) + biases['out']

    # Create loss function and optimizer
    cost = tf.reduce_mean(tf.nn.l2_loss(pred - y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

# lines 130, 134 and 135 of original
step = 0
acc_total = 0
loss_total = 0

with tf.Session(graph=g) as sess:
    # lines 138, 160, 162, 175, 178 and 182 of original
    while step < 1:  # training_iters:
        _, loss, pred_ = sess.run([optimizer, cost, pred], feed_dict=
            {x: tf.nn.embedding_lookup(W, x2), y: tf.nn.embedding_lookup(W, y)})
        loss_total += loss
        print("loss = " + "{:.6f}".format(loss_total))
        step += 1
    print("Finished Optimization")
The error I get is:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-7a72d8d4f100> in <module>()
42 while step < 1: # training_iters:
43 _,loss, pred_ = sess.run([optimizer, cost, pred], feed_dict =
---> 44 {x: tf.nn.embedding_lookup(W, x2), y: tf.nn.embedding_lookup(W, y)})
45 loss_total += loss
46 print("loss = " + "{:.6f}".format(loss_total))
TypeError: unhashable type: 'numpy.ndarray'
How can I fix the code? Is it because of the unstacking?
Additional context: x2 and y are assigned the return value of np.array(list(vocab_processor.transform([s]))), where s is a string (a different string is passed for each of them). embedding_dim, vocab_size and W are computed using the code at https://ireneli.eu/2017/01/17/tensorflow-07-word-embeddings-2-loading-pre-trained-vectors/.
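In case it helps, here is a minimal sketch of how I set those inputs up, roughly following the second link (the toy corpus and the max_document_length and embedding_dim values below are only illustrative, not my real data):

import numpy as np
from tensorflow.contrib import learn

max_document_length = 10   # illustrative value
embedding_dim = 50         # illustrative: dimensionality of the pre-trained vectors

# vocab_processor maps each word of a string to an integer id
vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
vocab_processor.fit(["what is the capital of france", "paris"])  # toy corpus
vocab_size = len(vocab_processor.vocabulary_)

# x2 and y end up as integer id arrays of shape (1, max_document_length)
x2 = np.array(list(vocab_processor.transform(["what is the capital of france"])))
y = np.array(list(vocab_processor.transform(["paris"])))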
Answer 0 (score: 0)
The problem is here: y: tf.nn.embedding_lookup(W, y). The keys of feed_dict should be placeholders in the TensorFlow graph. Assuming y is a numpy.ndarray containing the target values, you can define a tf.placeholder y_ to feed the targets to the network, change the corresponding entry of feed_dict to use y_ as the key, and adapt the other tensors accordingly (i.e. use the tensor y_ to compute the loss).
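A rough sketch of that change, replacing the corresponding parts of your graph construction and training loop (the placeholder names x and y_ are illustrative; the embedding lookups are moved inside the graph because feed_dict values must be numpy arrays rather than tensors; pretrained_vectors stands for your loaded embedding matrix; W, weights, biases, rnn_cell, embedding_init and embedding_placeholder are the ones you already define):

with g.as_default():
    # placeholders for the integer word ids produced by vocab_processor
    x = tf.placeholder(tf.int32, shape=[None, None], name="x")    # question ids
    y_ = tf.placeholder(tf.int32, shape=[None, None], name="y_")  # answer ids

    # look up the pre-trained vectors inside the graph
    preimage = tf.nn.embedding_lookup(W, x)
    target = tf.nn.embedding_lookup(W, y_)

    x_unstack = tf.unstack(preimage)
    outputs, states = ct.rnn.static_rnn(rnn_cell, x_unstack, dtype=tf.float32)
    pred = tf.matmul(outputs[-1], weights['out']) + biases['out']

    # the loss now compares the prediction with the looked-up target tensor
    cost = tf.reduce_mean(tf.nn.l2_loss(pred - target))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    # pretrained_vectors: the numpy matrix of pre-trained embeddings, loaded elsewhere
    sess.run(embedding_init, feed_dict={embedding_placeholder: pretrained_vectors})
    # feed_dict keys are placeholders, values are plain numpy arrays
    _, loss, pred_ = sess.run([optimizer, cost, pred],
                              feed_dict={x: x2, y_: y})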