TensorFlow sparse_softmax_cross_entropy rank mismatch error

Date: 2017-11-03 21:49:08

Tags: tensorflow

I am trying to build an RNN with an LSTM in TensorFlow. Both the input and the output are 5000-by-2 matrices, where the columns represent the features. These matrices are fed into the batchX and batchY placeholders, which enables the backpropagation. The main definitions of the code are at the bottom. I get the following error:

"Rank mismatch: Rank of labels (received 2) should equal rank of logits minus 1 (received 2)."

I checked logits_series and labels_series, and they both seem to contain truncated_backprop_length tensors of shape [batch_size, num_features].

What confuses me is this: since the logits are the predictions of the labels, shouldn't they have the same dimensions?
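For reference, the snippet below relies on a few imports and hyperparameters that are not shown in the excerpt; the values here are assumed placeholders rather than the ones used for the real 5000-by-2 data:

import numpy as np
import tensorflow as tf

# Assumed placeholder values; the actual run feeds the 5000x2 input/output matrices in chunks
batch_size = 5
truncated_backprop_length = 10
state_size = 4
num_features_input = 2
num_features_output = 2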

'''
RNN definitions

input_dimensions = [batch_size, truncated_backprop_length, num_features_input] 
output_dimensions = [batch_size, truncated_backprop_length, num_features_output]
state_dimensions = [batch_size, state_size]
'''
batchX_placeholder = tf.placeholder(tf.float32, (batch_size, truncated_backprop_length, num_features_input))
batchY_placeholder = tf.placeholder(tf.int32, (batch_size, truncated_backprop_length, num_features_output))
init_state = tf.placeholder(tf.float32, (batch_size, state_size))
inputs_series = tf.unstack(batchX_placeholder, axis=1)
labels_series = tf.unstack(batchY_placeholder, axis=1)

w = tf.Variable(np.random.rand(num_features_input+state_size,state_size), dtype = tf.float32)
b = tf.Variable(np.zeros((batch_size, state_size)), dtype = tf.float32)
w2 = tf.Variable(np.random.rand(state_size, num_features_output), dtype = tf.float32)
b2 = tf.Variable(np.zeros((batch_size, num_features_output)), dtype=tf.float32)

#calculate state and output variables

state_series = []
output_series = []
current_state = init_state
#iterate over the truncated_backprop_length time steps
for current_input in inputs_series:
    current_input = tf.reshape(current_input,[batch_size, num_features_input])
    input_and_state_concatenated = tf.concat([current_input,current_state], 1)
    next_state = tf.tanh(tf.matmul(input_and_state_concatenated, w) + b)
    state_series.append(next_state)
    current_state = next_state
    output = tf.matmul(current_state, w2)+b2
    output_series.append(output)

#calculate expected output for each state    
logits_series = [tf.matmul(state, w2) + b2 for state in state_series] 
#print(logits_series)
predictions_series = [tf.nn.softmax(logits) for logits in logits_series]
'''
batchY_placeholder = np.zeros((batch_size,truncated_backprop_length))
for i in range(batch_size):
    for j in range(truncated_backprop_length):
        batchY_placeholder[i,j] = batchY1_placeholder[j, i, 0]+batchY1_placeholder[j, i, 1]
'''
print("logits_series", logits_series)
print("labels_series", labels_series)
#calculate losses given each actual and calculated output
losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits = logits, labels = labels) for logits, labels in zip(logits_series,labels_series)]
total_loss = tf.reduce_mean(losses)

1 Answer:

Answer 0 (score: 2)

Thanks to Maosi Chen, I found the issue. It is because tf.nn.sparse_softmax_cross_entropy_with_logits requires the labels to have one dimension less than the logits. Specifically, the labels argument takes values of shape [batch_size] with dtype int32 or int64.

I solved the issue by converting my one-hot encoded labels to class indices (enumerating the classes), which reduces the label dimension; a sketch of that conversion follows below.
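A minimal sketch of that fix, assuming each row of batchY is a one-hot vector over num_features_output classes (the variable names follow the question's snippet; only the label handling and the loss change):

# Assumed: each labels tensor is one-hot of shape [batch_size, num_features_output],
# so argmax over the last axis recovers an integer class index of shape [batch_size]
class_index_series = [tf.argmax(labels, axis=1) for labels in labels_series]

losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)
          for logits, labels in zip(logits_series, class_index_series)]
total_loss = tf.reduce_mean(losses)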

However, it is also possible to use tf.nn.softmax_cross_entropy_with_logits instead, which has no such dimension-reduction requirement, since it takes label values of shape [batch_size, num_classes] with dtype float32 or float64.
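A sketch of that alternative, assuming the labels stay as float one-hot (or probability) vectors matching the logits shape [batch_size, num_features_output]:

# Assumed: labels are kept two-dimensional and fed as floats, matching the logits
batchY_placeholder = tf.placeholder(tf.float32, (batch_size, truncated_backprop_length, num_features_output))
labels_series = tf.unstack(batchY_placeholder, axis=1)

losses = [tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)
          for logits, labels in zip(logits_series, labels_series)]
total_loss = tf.reduce_mean(losses)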