Model interpretation of a PySpark pipeline model

Date: 2016-05-04 08:08:16

Tags: apache-spark pyspark decision-tree apache-spark-mllib

I am implementing a DecisionTreeClassifier in PySpark using the Pipeline module, since I perform several feature engineering steps on my dataset. The code is similar to the example in the Spark documentation:

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import StringIndexer, VectorIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Create the contexts (the pyspark shell already provides sc and sqlContext).
sc = SparkContext(appName="decision_tree_pipeline")
sqlContext = SQLContext(sc)

# Load the data stored in LIBSVM format as a DataFrame.
data = sqlContext.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

# Index labels, adding metadata to the label column.
# Fit on whole dataset to include all labels in index.
labelIndexer = StringIndexer(inputCol="label", outputCol="indexedLabel").fit(data)
# Automatically identify categorical features, and index them.
# We specify maxCategories so features with > 4 distinct values are treated as continuous.
featureIndexer =\
    VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=4).fit(data)

# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])

# Train a DecisionTree model.
dt = DecisionTreeClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures")

# Chain indexers and tree in a Pipeline
pipeline = Pipeline(stages=[labelIndexer, featureIndexer, dt])

# Train model.  This also runs the indexers.
model = pipeline.fit(trainingData)

# Make predictions.
predictions = model.transform(testData)

# Select example rows to display.
predictions.select("prediction", "indexedLabel", "features").show(5)

# Select (prediction, true label) and compute test error
evaluator = MulticlassClassificationEvaluator(
    labelCol="indexedLabel", predictionCol="prediction", metricName="precision")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g " % (1.0 - accuracy))

treeModel = model.stages[2]
# summary only
print(treeModel)

The question is: how do I do model interpretation on this? The PipelineModel object does not have a toDebugString() method similar to the one on the model produced by DecisionTree.trainClassifier, and I cannot use DecisionTree.trainClassifier in my pipeline because trainClassifier() takes the training data as a parameter.

The Pipeline, by contrast, takes the training data as the argument to its fit() method, while transform() is applied to the test data.
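For contrast, here is a minimal sketch of the RDD-based MLlib API referred to above, which takes the training data directly and whose model exposes toDebugString(); it assumes the same sample data file and an active SparkContext sc:

from pyspark.mllib.tree import DecisionTree
from pyspark.mllib.util import MLUtils

# Load the same LIBSVM file as an RDD of LabeledPoint.
rdd_data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")

# trainClassifier() takes the training data itself as an argument ...
mllib_model = DecisionTree.trainClassifier(rdd_data, numClasses=2,
                                           categoricalFeaturesInfo={})

# ... and the resulting DecisionTreeModel can be dumped directly.
print(mllib_model.toDebugString())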

Is there a way to use the Pipeline and still perform model interpretation and find feature importances?

1 Answer:

Answer 0 (score: 1):

Yes, I have used the approach below for model interpretation in almost all of my PySpark models. The code below follows the naming conventions from your code excerpt.

dtm = model.stages[-1]  # your estimator is the last stage in the pipeline,
# hence the DecisionTreeClassificationModel is the last transformer in the PipelineModel object
dtm.explainParams()

Now you can access all of the DecisionTreeClassificationModel's methods. All of its available methods and attributes can be found here. The code has not been tested on your example.
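If you want more than the parameter documentation, the fitted tree stage also exposes the tree structure and feature importances directly. A minimal sketch, assuming Spark 2.0+ where the Python API exposes toDebugString and featureImportances as properties on DecisionTreeClassificationModel:

dtm = model.stages[-1]  # fitted DecisionTreeClassificationModel, as above

# Text dump of the learned tree (splits, thresholds, leaf predictions),
# equivalent to the MLlib toDebugString() output.
print(dtm.toDebugString)

# Feature importances as a SparseVector, indexed by position in the
# "indexedFeatures" vector the model was trained on.
print(dtm.featureImportances)

# Hyper-parameter documentation and current values for this stage.
print(dtm.explainParams())

Note that featuresCol here is the indexed vector produced by VectorIndexer, so the importance indices refer to positions in that vector rather than to original column names.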