I want to load my pretrained word embeddings once before training instead of loading them at every train step. I followed the steps in this post, but it raises this error:
You must feed a value for placeholder tensor 'word_embedding_placeholder' with dtype float and shape [2000002,300]
Here is the rough code:
embeddings_var = tf.Variable(tf.random_uniform([vocabulary_size, embedding_dim], -1.0, 1.0), trainable=False)
embedding_placeholder = tf.placeholder(tf.float32, [vocabulary_size, embedding_dim], name='word_embedding_placeholder')
embedding_init = embeddings_var.assign(embedding_placeholder)  # assign pre-trained word embeddings to the variable
batch_embedded = tf.nn.embedding_lookup(embedding_init, batch_ph)
sess = tf.Session()
train_steps = round(len(X_train) / BATCH_SIZE)
train_iterator, train_next_element = get_dataset_iterator(X_train, y_train, BATCH_SIZE, training_epochs)
sess.run(init_g)
sess.run(train_iterator.initializer)
_ = sess.run(embedding_init, feed_dict={embedding_placeholder: w2v})
for epoch in range(0, training_epochs):
    # Training steps
    for i in range(train_steps):
        X_train_input, y_train_input = sess.run(train_next_element)
        # actual lengths of sequences (index of first PADDING_INDEX, or full length)
        seq_len = np.array([list(word_idx).index(PADDING_INDEX) if PADDING_INDEX in word_idx else len(word_idx)
                            for word_idx in X_train_input])
        train_loss, train_acc, _ = sess.run([loss, accuracy, optimizer],
                                            feed_dict={batch_ph: X_train_input,
                                                       target_ph: y_train_input,
                                                       seq_len_ph: seq_len,
                                                       keep_prob_ph: KEEP_PROB})
When I change the feed_dict inside the training loop to:
train_loss, train_acc, _ = sess.run([loss, accuracy, optimizer],
                                    feed_dict={batch_ph: X_train_input,
                                               target_ph: y_train_input,
                                               seq_len_ph: seq_len,
                                               keep_prob_ph: KEEP_PROB,
                                               embedding_placeholder: w2v})
it works, but it is not elegant. Has anyone run into this problem?
Goal: I want to load the pretrained embeddings only once, before training, rather than recomputing embedding_init on every step.
Answer 0 (score: 0)
Presumably you use batch_embedded somewhere in your network, which means your loss depends on it. So whenever you call sess.run on the loss inside the loop, batch_embedded is recomputed, which recomputes embedding_init, and that requires embedding_placeholder to be fed. Instead, you can initialize the variable as follows:
embeddings_var = tf.get_variable("embeddings_var",
                                 shape=[vocabulary_size, embedding_dim],
                                 initializer=tf.constant_initializer(w2v),
                                 trainable=False)
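A minimal sketch of how the rest of the pipeline might then look, assuming batch_ph, loss, accuracy, optimizer, seq_len and the other placeholders are defined as in the question: the pretrained weights are baked into the initializer, so there is no assign op or placeholder for the embeddings, and the training feed_dict only carries the batch-dependent inputs.

# Look up the batch directly from the variable instead of from an assign op,
# so the loss no longer depends on embedding_placeholder.
batch_embedded = tf.nn.embedding_lookup(embeddings_var, batch_ph)

sess = tf.Session()
sess.run(tf.global_variables_initializer())  # copies w2v into embeddings_var once

train_loss, train_acc, _ = sess.run([loss, accuracy, optimizer],
                                    feed_dict={batch_ph: X_train_input,
                                               target_ph: y_train_input,
                                               seq_len_ph: seq_len,
                                               keep_prob_ph: KEEP_PROB})

An equivalent fix that keeps the original placeholder/assign setup would be to point the embedding_lookup at embeddings_var rather than embedding_init; the one-time sess.run(embedding_init, feed_dict={embedding_placeholder: w2v}) before the loop then remains the only place the placeholder has to be fed.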