Computing precision and recall of two sets of keywords of different sizes in NLTK and Scikit

Asked: 2016-06-03 18:45:09

Tags: python-3.x scikit-learn nltk

I am trying to compute the precision and recall of two sets of keywords. gold_standard has 823 terms and test has 1497 terms.

Using the nltk.metrics versions of precision and recall, I can supply the two sets just fine. But doing the same for Scikit gives me an error:


ValueError: Found arrays with inconsistent numbers of samples: [823 1497]

How can I fix this?

#!/usr/bin/python3

from nltk.metrics import precision, recall
from sklearn.metrics import precision_score
from sys import argv
from time import time
import numpy
import csv

def readCSVFile(filename):
    termList = set()
    with open(filename, 'rt', encoding='utf-8') as f:
        reader = csv.reader(f)
        for row in reader:
            termList.update(row)
    return termList

def readDocuments(gs_file, fileToProcess):
    print("Reading CSV files...")
    gold_standard = readCSVFile(gs_file)
    test = readCSVFile(fileToProcess)
    print("All files successfully read!")
    return gold_standard, test

def calcPrecisionScipy(gs, test):
    gs = numpy.array(list(gs))
    test = numpy.array(list(test))
    print("Precision Scipy: ",precision_score(gs, test, average=None))

def process(dataset):
    print("Processing input...")
    gs, test = dataset
    print("Precision: ", precision(gs, test))
    calcPrecisionScipy(gs, test)

def usage():
    print("Usage: python3 generate_stats.py gold_standard.csv termlist_to_process.csv")

if __name__ == '__main__':
    if len(argv) != 3:
        usage()
        exit(-1)

    t0 = time()
    process(readDocuments(argv[1], argv[2]))
    print("Total runtime: %0.3fs" % (time() - t0))
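For context, nltk.metrics.precision and recall operate on Python sets rather than on aligned label arrays, which is why unequal sizes are not a problem there. A minimal sketch of the set-based formulas, using made-up example terms:

```python
# nltk.metrics.precision(reference, test) == |reference & test| / |test|
# nltk.metrics.recall(reference, test)    == |reference & test| / |reference|
gs = {"alpha", "beta", "gamma"}      # hypothetical gold-standard terms
test = {"beta", "gamma", "delta"}    # hypothetical extracted terms

precision = len(gs & test) / len(test)  # 2 matches out of 3 predictions
recall = len(gs & test) / len(gs)       # 2 matches out of 3 gold terms
print(precision, recall)
```

sklearn's precision_score, by contrast, expects two equal-length arrays of per-sample labels (y_true and y_pred), which is the mismatch the ValueError is reporting.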

I referred to the following pages while coding:

================================= UPDATE =================================

OK, so I tried adding "nonsensical" data to the list to make them the same length:

def calcPrecisionScipy(gs, test):
    if len(gs) < len(test):
        gs.update(list(range(len(test)-len(gs))))
    gs = numpy.array(list(gs))
    test = numpy.array(list(test))
    print("Precision Scipy: ",precision_score(gs, test, average=None))

Now I have another error:


UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples.

1 Answer:

Answer 0 (score: 0)

Scikit does not seem to be able to compute precision or recall for two sets of different lengths. I suppose what nltk must be doing is truncating the sets to the same length; you could do the same in your script.

    import numpy as np
    import sklearn.metrics

    set1 = [True,True]
    set2 = [True,False,False]
    length = np.amin([len(set1),len(set2)])
    set1 = set1[:length]
    set2 = set2[:length]

    print(sklearn.metrics.precision_score(set1, set2))
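An alternative to truncating (which drops terms arbitrarily and compares unrelated positions): turn both sets into equal-length binary indicator vectors over the union of all terms. sklearn's precision_score and recall_score then agree with nltk's set-based definitions. A sketch, using made-up example terms:

```python
import sklearn.metrics

gs = {"alpha", "beta", "gamma"}      # hypothetical gold-standard set
test = {"beta", "gamma", "delta"}    # hypothetical test set

# One aligned position per term in the union of both sets.
universe = sorted(gs | test)
y_true = [term in gs for term in universe]    # term is a gold keyword?
y_pred = [term in test for term in universe]  # term was predicted?

# TP = |gs & test|, FP = |test - gs|, FN = |gs - test|
print(sklearn.metrics.precision_score(y_true, y_pred))  # |gs & test| / |test|
print(sklearn.metrics.recall_score(y_true, y_pred))     # |gs & test| / |gs|
```

This keeps every term from both files and avoids the length mismatch entirely.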