Many-to-one LSTM architecture in TensorFlow

Time: 2019-03-14 17:25:38

Tags: python tensorflow lstm

I want to build a many-to-one LSTM model, but I am having a hard time shaping my data into the two inputs. There are 150 samples in total with 8 features (columns A to H); the last column (I) is the label.

Data sample:

A    B     C    D      E    F     G    H    I
0    45    0    0      0    10    0    0    1
0    80    0    0.25   0    55    0    0    0
0    100   0    0      0    10    0    0    1
0    25    0    0      0    250   0    0    1
0    45    0    0.09   0    10    0    0    0

I would like my first input to be the sequence of the first 4 columns (A to D), i.e. [0, 45, 0, 0], and my second input to be the other 4 columns (E to H), i.e. [0, 10, 0, 0].

I am trying to follow this example, but I am confused about what the data should look like. Should it be seq_max_len = 2 and input_dim = 4?
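
For reference, one way the table could be arranged for that layout is sketched below; the array name data and the reshape are illustrative assumptions, not something from the linked example:

import numpy as np

# 'data' is assumed to be the 150 x 9 table above as a NumPy array
# (columns A..H are the features, column I is the label)
features = data[:, :8]                            # shape (150, 8)
labels = data[:, 8:]                              # shape (150, 1)

seq_max_len = 2   # two timesteps per sample: columns A-D, then E-H
input_dim = 4     # four features per timestep

X = features.reshape(-1, seq_max_len, input_dim)  # shape (150, 2, 4)
print(X[0])       # first sample: [[0, 45, 0, 0], [0, 10, 0, 0]]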

Edit:

So I tried both seq_max_len = 2, input_dim = 4 and seq_max_len = 4, input_dim = 1, but I get the error:

ValueError: Input 0 of layer lstm_cell_1 is incompatible with the layer: expected ndim=2, found ndim=3. Full shape received: [None, 1, 1]
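
The mismatch can be reproduced in isolation: tf.split keeps the axis it splits on, so each slice stays 3-dimensional, while static_rnn expects a list of 2-D tensors of shape [batch, input_dim]. A minimal sketch, using the shapes from the seq_max_len = 4, input_dim = 1 attempt:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4, 1])  # [batch, seq_max_len, input_dim]

slices = tf.split(x, 4, 1)    # list of 4 tensors, each [None, 1, 1] -> ndim=3, rejected
steps = tf.unstack(x, 4, 1)   # list of 4 tensors, each [None, 1]    -> ndim=2, accepted

print(slices[0].shape)        # (?, 1, 1)
print(steps[0].shape)         # (?, 1)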

Code:

import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn
from sklearn.metrics import f1_score, accuracy_score, recall_score, precision_score

tf.reset_default_graph()

# Data Dimensions
input_dim = 1           # input dimension (features per timestep)
seq_max_len = 4         # sequence maximum length (timesteps)
n_classes = 1           # binary classification, single output unit

# Parameters
n_iterations = 100      # Total number of training steps
batch_size = 10         # batch size
n_hidden = 10           # number of hidden units in the LSTM cell


xplaceholder = tf.placeholder(tf.float32, [None, seq_max_len, input_dim])
yplaceholder = tf.placeholder(tf.float32, [None, n_classes])



def recurrent_neural_network_model():

    # giving the weights and biases random values
    layer = {'weights': tf.Variable(tf.random_normal([n_hidden, n_classes])),
             'bias': tf.Variable(tf.random_normal([n_classes]))}


    # split the input into seq_max_len slices along the time axis;
    # NOTE: tf.split keeps the split axis, so each slice is [None, 1, input_dim]
    # (ndim=3), which is what triggers the ValueError above
    x = tf.split(xplaceholder, seq_max_len, 1)

    lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden)


    # outputs contains the output for each timestep
    # states contains the final values of the hidden state
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)


    # many-to-one: only the output of the last timestep feeds the classifier
    output = tf.matmul(outputs[-1], layer['weights']) + layer['bias']

    return output

logit = recurrent_neural_network_model()
logit = tf.reshape(logit, [-1])  # flatten logits to shape [batch_size]

# sigmoid cross-entropy is used because this is binary classification
# (a multi-class problem would use softmax_cross_entropy_with_logits instead);
# the labels are flattened to match the flattened logits
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logit, labels=tf.reshape(yplaceholder, [-1])))
optimizer = tf.train.AdamOptimizer().minimize(cost)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()

    for step in range(n_iterations):
        step_loss = 0

        for i in range(int(len(X_train) / batch_size)):

            # advance by one full batch each iteration
            start = i * batch_size
            end = start + batch_size

            batch_x = np.array(X_train[start:end])
            batch_y = np.array(y_train[start:end])

            _, c = sess.run([optimizer, cost], feed_dict={xplaceholder: batch_x, yplaceholder: batch_y})
            step_loss += c

        print('Step', step, 'completed out of', n_iterations, 'loss:', step_loss)

    # threshold the sigmoid outputs at 0.5 to get binary class predictions
    pred = tf.round(tf.nn.sigmoid(logit)).eval({xplaceholder: np.array(X_test), yplaceholder: y_true})
    f1 = f1_score(y_true, pred, average='macro')
    accuracy=accuracy_score(y_true, pred)
    recall = recall_score(y_true, pred)
    precision = precision_score(y_true, pred)

    print("F1 Score:", f1)
    print("Accuracy Score:",accuracy)
    print("Recall:", recall)
    print("Precision:", precision)

Thanks for any help.

0 Answers:

No answers.