I'm trying to build a sequence-to-sequence model using seq2seq.dynamic_decode in TensorFlow. I've finished the encoder part.

I'm confused by the decoder: seq2seq.dynamic_decode seems to return [batch_size x sequence_length x embedding_size], but I need the actual word indices, [batch_size x sequence_length], to compute my loss correctly.

I'm wondering whether one of my shape inputs is incorrect or whether I've simply forgotten something. The decoder and encoder cells are rnn.BasicLSTMCell().

# Variables
cell_size = 100
decoder_vocabulary_size = 7
batch_size = 2
decoder_max_sentence_len = 7
# Part of the encoder
_, encoder_state = tf.nn.dynamic_rnn(
    cell=encoder_cell,
    inputs=features,
    sequence_length=encoder_sequence_lengths,
    dtype=tf.float32)
# ---- END Encoder ---- #

# ---- Decoder ---- #
# decoder_sequence_lengths = _sequence_length(features)
embedding = tf.get_variable(
    "decoder_embedding", [decoder_vocabulary_size, cell_size])
helper = seq2seq.GreedyEmbeddingHelper(
    embedding=embedding,
    start_tokens=tf.tile([GO_SYMBOL], [batch_size]),
    end_token=END_SYMBOL)
decoder = seq2seq.BasicDecoder(
    cell=decoder_cell,
    helper=helper,
    initial_state=encoder_state)
decoder_outputs, _ = seq2seq.dynamic_decode(
    decoder=decoder,
    output_time_major=False,
    impute_finished=True,
    maximum_iterations=decoder_max_sentence_len)

# I need labels (decoder_outputs) to be indices
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
loss = tf.reduce_mean(losses)
Answer 0 (score: 3)

The solution I found was:
from tensorflow.python.layers.core import Dense

decoder = seq2seq.BasicDecoder(
    cell=decoder_cell,
    helper=helper,
    initial_state=encoder_state,
    output_layer=Dense(decoder_vocabulary_size))
...
# decoder_outputs is a BasicDecoderOutput(rnn_output, sample_id) tuple
logits = decoder_outputs[0]
You have to specify a Dense output layer so the decoder projects from cell_size to the vocabulary size. The rnn_output field then has shape [batch_size x sequence_length x vocab_size] and can be fed to sparse_softmax_cross_entropy_with_logits, while decoder_outputs.sample_id holds the predicted word indices with shape [batch_size x sequence_length].
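To make the shape bookkeeping concrete, here is a small NumPy sketch of what the Dense output layer does (using the toy sizes from the question; the weight values are random stand-ins, not a real trained model). The projection turns the decoder cell output [batch, time, cell_size] into logits [batch, time, vocab_size], and the argmax over the vocabulary axis recovers the [batch, time] word indices that sparse_softmax_cross_entropy_with_logits expects as labels:

```python
import numpy as np

# Toy sizes matching the question's variables
batch_size, seq_len, cell_size, vocab_size = 2, 7, 100, 7

rng = np.random.default_rng(0)
# Stand-in for the decoder cell's raw output: [batch, time, cell_size]
rnn_output = rng.normal(size=(batch_size, seq_len, cell_size))

# What output_layer=Dense(vocab_size) does: project cell_size -> vocab_size
W = rng.normal(size=(cell_size, vocab_size))
b = np.zeros(vocab_size)
logits = rnn_output @ W + b            # [batch, time, vocab_size]

# What sample_id is: the argmax over the vocabulary axis -> word indices
sample_id = logits.argmax(axis=-1)     # [batch, time]

print(logits.shape)     # (2, 7, 7)
print(sample_id.shape)  # (2, 7)
```

Without the Dense layer, the decoder emits [batch, time, cell_size] directly, which is why the shapes in the question never line up with the loss.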