I ran the code from this tutorial, but I got the following error.
I read some similar posts, but they didn't really help me:

ValueError: Dimensions must be equal, but are 128 and 364 for 'RNN_forward/rnn/while/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/MatMul_1' (op: 'MatMul') with input shapes: [250,128], [364,256].
Here is the code from the end of the tutorial:
n_words = len(word_index)
embed_size = 300
batch_size = 250
lstm_size = 128
num_layers = 2
dropout = 0.5
learning_rate = 0.001
epochs = 100
multiple_fc = False
fc_units = 256
# Train the model with the desired tuning parameters
for lstm_size in [64, 128]:
    for multiple_fc in [True, False]:
        for fc_units in [128, 256]:
            log_string = 'ru={},fcl={},fcu={}'.format(lstm_size,
                                                      multiple_fc,
                                                      fc_units)
            model = build_rnn(n_words=n_words,
                              embed_size=embed_size,
                              batch_size=batch_size,
                              lstm_size=lstm_size,
                              num_layers=num_layers,
                              dropout=dropout,
                              learning_rate=learning_rate,
                              multiple_fc=multiple_fc,
                              fc_units=fc_units)
            train(model, epochs, log_string)
I changed the dataset the analysis is applied to and tried to tune it. Do you know how to fix this error?

Thanks a lot
Answer 0 (score: 1)

Answer 1 (score: 0)
Thanks to this post, I solved the problem. I replaced this code:
with tf.name_scope('RNN_layers'):
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with this code:
with tf.name_scope('RNN_layers'):
    cell = tf.contrib.rnn.MultiRNNCell([lstm_cell(lstm_size, keep_prob)
                                        for _ in range(num_layers)])
and by adding the following function:
def lstm_cell(lstm_size, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    return drop
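The reason this helps can be seen with plain Python, independent of TensorFlow: `[drop] * num_layers` repeats the *same* cell object, so every layer aliases one cell (and one set of weights sized for the first layer's input), while a list comprehension calls `lstm_cell` once per layer and builds independent cells. A minimal sketch (the `FakeCell` class is a stand-in, not a TensorFlow type):

```python
class FakeCell:
    """Stand-in for an LSTM cell, used only to demonstrate list aliasing."""
    def __init__(self, size):
        self.size = size

num_layers = 2

# [cell] * num_layers repeats the SAME object: both layers alias one cell.
shared = [FakeCell(128)] * num_layers
print(shared[0] is shared[1])  # True: one object shared by both layers

# A list comprehension creates a fresh cell for each layer.
fresh = [FakeCell(128) for _ in range(num_layers)]
print(fresh[0] is fresh[1])    # False: independent cells per layer
```

With the shared object, the second LSTM layer ends up wired with weight shapes built for the first layer's input, which is what triggers the `Dimensions must be equal` MatMul error.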