TensorFlow LSTM error (ValueError: Shapes must be equal rank, but are 2 and 1)

Time: 2019-02-21 11:52:19

Tags: python tensorflow deep-learning lstm recurrent-neural-network

I know this question has been asked many times before, but I am new to TensorFlow and none of the previous threads solved my problem. I am trying to implement an LSTM on a series of sensor data in order to classify it. I want to classify the data as 0 or 1, so it is a binary classifier. I have 2539 samples in total, each with 555 time_steps, and each time_step contains 9 features, so my input has shape (2539, 555, 9). For each sample I have a label of 0 or 1, and the label array has shape (2539, 1), where each entry is 0 or 1. I have prepared the code below, but I get an error about the dimensions of the logits and labels, and no matter how I reshape them the error persists. Could you help me understand the problem?

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, accuracy_score

X_train, X_test, y_train, y_test = train_test_split(final_training_set, labels, test_size=0.2, shuffle=False, random_state=42)


epochs = 10
time_steps = 555
n_classes = 2
n_units = 128
n_features = 9
batch_size = 8

# Placeholders for a batch of input sequences and the corresponding labels
x = tf.placeholder('float32', [batch_size, time_steps, n_features])
y = tf.placeholder('float32', [None, n_classes])

###########################################
# Weights and bias of the final (output) layer
out_weights = tf.Variable(tf.random_normal([n_units, n_classes]))
out_bias = tf.Variable(tf.random_normal([n_classes]))
###########################################

# Single LSTM cell unrolled over the time dimension with dynamic_rnn
lstm_layer = tf.nn.rnn_cell.LSTMCell(n_units, state_is_tuple=True)
initial_state = lstm_layer.zero_state(batch_size, dtype=tf.float32)
outputs, states = tf.nn.dynamic_rnn(lstm_layer, x,
                                    initial_state=initial_state,
                                    dtype=tf.float32)


###########################################
output=tf.matmul(outputs[-1],out_weights)+out_bias
print(np.shape(output))

logit = output
logit = (logit, [-1])

cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logit, labels=labels))
optimizer = tf.train.AdamOptimizer().minimize(cost)
with tf.Session() as sess:

        tf.global_variables_initializer().run()
        tf.local_variables_initializer().run()

        for epoch in range(epochs):
            epoch_loss = 0

            i = 0
            for i in range(int(len(X_train) / batch_size)):

                start = i
                end = i + batch_size

                batch_x = np.array(X_train[start:end])
                batch_y = np.array(y_train[start:end])

                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

                epoch_loss += c

                i += batch_size

            print('Epoch', epoch, 'completed out of', epochs, 'loss:', epoch_loss)

        pred = tf.round(tf.nn.sigmoid(logit)).eval({x: np.array(X_test), y: np.array(y_test)})

        f1 = f1_score(np.array(y_test), pred, average='macro')

        accuracy=accuracy_score(np.array(y_test), pred)


        print("F1 Score:", f1)
        print("Accuracy Score:",accuracy)

Here is the error: ValueError: Shapes must be equal rank, but are 2 and 1 (From merging shape 0 with other shapes) for 'logistic_loss/logits' (op: 'Pack') with input shapes: [555,2], [1].

1 Answer:

Answer 0 (score: 0):

Just an update: the problem was with the shape of the labels. After one-hot encoding the labels, the dimensionality problem was solved.
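
For reference, a minimal sketch of what that fix could look like, reusing the variables from the question's code (labels, n_classes, outputs, out_weights, out_bias, y). The use of np.eye for the one-hot encoding, the switch to a softmax cross-entropy against the y placeholder, and taking the last time step per sample (outputs[:, -1, :] rather than outputs[-1]) are assumptions about the intended behaviour, not details stated in the answer:

# Assumption: labels is a NumPy array of shape (2539, 1) with entries 0 or 1.
# One-hot encode it: (2539, 1) -> (2539, 2), matching y's shape [None, n_classes].
one_hot_labels = np.eye(n_classes)[labels.reshape(-1).astype(int)]

# Take the last time step of every sample:
# (batch_size, time_steps, n_units) -> (batch_size, n_units)
last_output = outputs[:, -1, :]
logits = tf.matmul(last_output, out_weights) + out_bias   # (batch_size, n_classes)

# Compute the loss against the y placeholder, which is fed with rows of one_hot_labels
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))

With this, train_test_split would be applied to one_hot_labels instead of labels, so that each batch_y fed into the session has shape (batch_size, 2) and lines up with the logits.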