Getting precision, recall and F1-score from a confusion matrix

Date: 2019-10-16 02:15:29

Tags: python-3.x dataframe pyspark pyspark-sql

I have a dataframe df and have run a DecisionTree classification algorithm on it. The two columns used when fitting the algorithm are label and features, and the fitted model is called dtc. How can I create a confusion matrix in pyspark?

dtc = DecisionTreeClassifier(featuresCol = 'features', labelCol = 'label')
dtcModel = dtc.fit(train)
predictions = dtcModel.transform(test)
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.evaluation import MulticlassMetrics

preds = df.select(['label', 'features']) \
          .df.map(lambda line: (line[1], line[0]))
metrics = MulticlassMetrics(preds)

# Confusion Matrix
print(metrics.confusionMatrix().toArray())

2 Answers:

Answer 0 (score: 0)

You need to convert to an RDD and map to tuples before calling metrics.confusionMatrix().toArray().
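Applied to the predictions dataframe from the question, that conversion looks roughly like this (a minimal sketch, assuming the default prediction output column and the label column from the question; the cast to float mirrors the note in the full example below about the label type):

import pyspark.sql.functions as F
from pyspark.mllib.evaluation import MulticlassMetrics

# Build an RDD of (prediction, label) tuples, both as floating-point values
preds_and_labels = predictions.select(F.col('prediction'),
                                      F.col('label').cast('float')).rdd.map(tuple)
metrics = MulticlassMetrics(preds_and_labels)
print(metrics.confusionMatrix().toArray())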

From the official documentation:

class pyspark.mllib.evaluation.MulticlassMetrics(predictionAndLabels) [source]

Evaluator for multiclass classification.

Parameters: predictionAndLabels – an RDD of (prediction, label) pairs.

Here is an example to guide you.

Machine learning part

import pyspark.sql.functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.mllib.evaluation import MulticlassMetrics
from pyspark.sql.types import FloatType
#Note the differences between ml and mllib, they are two different libraries.

#create a sample data frame
data = [(1.54,3.45,2.56,0),(9.39,8.31,1.34,0),(1.25,3.31,9.87,1),(9.35,5.67,2.49,2),\
        (1.23,4.67,8.91,1),(3.56,9.08,7.45,2),(6.43,2.23,1.19,1),(7.89,5.32,9.08,2)]

cols = ('a','b','c','d')

df = spark.createDataFrame(data, cols)

assembler = VectorAssembler(inputCols=['a','b','c'], outputCol='features')

df_features = assembler.transform(df)

#df.show()

train_data, test_data = df_features.randomSplit([0.6,0.4])

dtc = DecisionTreeClassifier(featuresCol='features',labelCol='d')

dtcModel = dtc.fit(train_data)

predictions = dtcModel.transform(test_data)
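Before evaluating, it can help to confirm which columns the transform step produced (Spark ML classifiers add prediction, rawPrediction and probability by default); an optional quick check:

# Optional sanity check: list the output columns and peek at a few rows
print(predictions.columns)
predictions.select('d', 'prediction', 'probability').show(5, truncate=False)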

Evaluation part

#important: need to cast the label to float type and order by prediction, else it won't work
preds_and_labels = predictions.select(['prediction','d']).withColumn('label', F.col('d').cast(FloatType())).orderBy('prediction')

#select only prediction and label columns
preds_and_labels = preds_and_labels.select(['prediction','label'])

metrics = MulticlassMetrics(preds_and_labels.rdd.map(tuple))

print(metrics.confusionMatrix().toArray())
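The question title also asks for precision, recall and F1-score; the same MulticlassMetrics object exposes them directly, both per class and weighted across classes. A minimal sketch continuing from the metrics object above (1.0 here is just an example class id from the sample data):

# Weighted metrics across all classes
print(metrics.accuracy)
print(metrics.weightedPrecision)
print(metrics.weightedRecall)
print(metrics.weightedFMeasure())

# Per-class metrics, e.g. for the class with label 1.0
print(metrics.precision(1.0))
print(metrics.recall(1.0))
print(metrics.fMeasure(1.0))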

Answer 1 (score: 0)

Use this:

from pyspark.ml.classification import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix

rf = RandomForestClassifier(featuresCol = 'features', labelCol = 'label', numTrees=500)
rfModel = rf.fit(train)
predictions_train = rfModel.transform(train)

#collect labels and predictions to the driver as lists
y_true = predictions_train.select(['label']).collect()
y_pred = predictions_train.select(['prediction']).collect()

print(classification_report(y_true, y_pred))

where train is your training data.
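confusion_matrix is imported above but not used; it accepts the same collected lists, so a matching sketch for the confusion matrix itself (keep in mind that collect() pulls everything to the driver, so this only suits data that fits in memory):

# sklearn confusion matrix on the collected labels and predictions
print(confusion_matrix(y_true, y_pred))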