Accuracy score in PyTorch LSTM

Posted: 2017-05-14 09:57:10

Tags: python scikit-learn deep-learning pytorch

I have been running this LSTM tutorial on the wikigold.conll NER data set.

training_data contains a list of tuples of sequences and tags, for example:

training_data = [
    ("They also have a song called \" wake up \"".split(), ["O", "O", "O", "O", "O", "O", "I-MISC", "I-MISC", "I-MISC", "I-MISC"]),
    ("Major General John C. Scheidt Jr.".split(), ["O", "O", "I-PER", "I-PER", "I-PER"])
]
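
For reference, prepare_sequence and the index dictionaries used below come from the tutorial; a rough sketch (tag_to_ix is assumed to be built the same way as word_to_ix, so its exact contents depend on the tag set in the data):

import torch
import torch.autograd as autograd

def prepare_sequence(seq, to_ix):
    """Turn a list of words (or tags) into a Variable wrapping a LongTensor of indices."""
    idxs = [to_ix[w] for w in seq]
    return autograd.Variable(torch.LongTensor(idxs))

# build the word and tag vocabularies from the training data
word_to_ix, tag_to_ix = {}, {}
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)
    for tag in tags:
        if tag not in tag_to_ix:
            tag_to_ix[tag] = len(tag_to_ix)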

I wrote the following function:

def predict(indices):
    """Gets a list of indices into training_data, and yields the predicted tags for each sentence"""
    for index in indices:
        inputs = prepare_sequence(training_data[index][0], word_to_ix)
        tag_scores = model(inputs)
        # take the highest-scoring tag for every word
        values, target = torch.max(tag_scores, 1)
        yield target

This way I can get the predicted tags for specific indices of the training data.
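
For example, a minimal usage sketch:

# predicted tag-index tensors for the first two training sentences
first_two_predictions = list(predict([0, 1]))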

But how do I evaluate an accuracy score over all of the training data?

By accuracy I mean the number of correctly classified words, across all sentences, divided by the total word count.
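
For example, if one sentence has 9 of its 10 words tagged correctly and another has 4 of its 5 words correct, the accuracy is (9 + 4) / (10 + 5) ≈ 0.87.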

This is what I came up with, and it is very slow and ugly:

y_pred = list(predict(range(len(training_data))))
y_true = [prepare_sequence(t, tag_to_ix) for s, t in training_data]
c = 0
s = 0
for i in range(len(training_data)):
    n = len(y_true[i])
    # super ugly and inefficient
    s += sum(sum(list(y_true[i].data.view(-1, n) == y_pred[i].data.view(-1, n))))
    c += n

print('Training accuracy: {a}'.format(a=float(s) / c))

How can this be done efficiently in pytorch?

P.S.: I have been trying to use sklearn's accuracy_score, without success.

2 answers:

Answer 0 (score: 4)

I would use numpy so as not to iterate over the lists in pure Python.

The results are the same, but it runs much faster:

import numpy as np

def accuracy_score(y_true, y_pred):
    # flatten the per-sentence predictions and gold tags into single 1-D arrays
    y_pred = np.concatenate(tuple(y_pred))
    y_true = np.concatenate(tuple([[t for t in y] for y in y_true])).reshape(y_pred.shape)
    # fraction of positions where the predicted tag matches the gold tag
    return (y_true == y_pred).sum() / float(len(y_true))

This is how to use it:

# original code:
y_pred = list(predict(range(len(training_data))))
y_true = [prepare_sequence(t, tag_to_ix) for s, t in training_data]
#numpy accuracy score
print(accuracy_score(y_true, y_pred))
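
The same flattening can also be done entirely in torch; a minimal sketch, assuming y_true and y_pred are lists of LongTensors (or Variables wrapping them), as built above:

import torch

def torch_accuracy(y_true, y_pred):
    """Word-level accuracy over lists of gold and predicted tag-index tensors."""
    # unwrap Variables where necessary and flatten every sentence into one long vector
    true_flat = torch.cat([getattr(y, "data", y).view(-1) for y in y_true])
    pred_flat = torch.cat([getattr(y, "data", y).view(-1) for y in y_pred])
    # count element-wise matches and divide by the total number of words
    correct = (true_flat == pred_flat).sum()
    return float(correct) / true_flat.numel()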

Answer 1 (score: 0)

You can use sklearn's accuracy_score like this:

from sklearn.metrics import accuracy_score

# tag_scores is the model output for a sentence, train_y the matching true tag indices
values, target = torch.max(tag_scores, -1)
accuracy = accuracy_score(train_y, target)
print("\nTraining accuracy is %d%%" % (accuracy * 100))