PySpark ML - How to save a Pipeline and a RandomForestClassificationModel

Asked: 2017-07-08 00:36:50

Tags: apache-spark pyspark apache-spark-mllib

I am unable to save a random forest model generated with the ML package of Python/Spark.

>>> rf = RandomForestClassifier(labelCol="label", featuresCol="features")
>>> pipeline = Pipeline(stages=early_stages + [rf])
>>> model = pipeline.fit(trainingData)
>>> model.save("fittedpipeline")
  

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'PipelineModel' object has no attribute 'save'

>>> rfModel = model.stages[8]
>>> print(rfModel)

RandomForestClassificationModel (uid=rfc_46c07f6d7ac8) with 20 trees

>>> rfModel.save("rfmodel")
  

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'RandomForestClassificationModel' object has no attribute 'save'

I also tried passing 'sc' as the first argument to the save method, without success.

2 answers:

Answer 0 (score: 3)

I believe the main problem with your code is that you are using a version of Apache Spark prior to 2.0.0. In those versions, the save API is not yet available on PipelineModel.

Here is a complete example adapted from the official documentation. Let's first create our pipeline:

from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import IndexToString, StringIndexer, VectorIndexer

# Load and parse the data file, converting it to a DataFrame.
data = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

# Index labels, adding metadata to the label column.
# Fit on whole dataset to include all labels in index.
labelIndexer = StringIndexer(inputCol="label", outputCol="indexedLabel")
labels = labelIndexer.fit(data).labels

# Automatically identify categorical features, and index them.
# Set maxCategories so features with > 4 distinct values are treated as continuous.
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=4)

early_stages = [labelIndexer, featureIndexer]

# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])

# Train a RandomForest model.
rf = RandomForestClassifier(labelCol="indexedLabel", featuresCol="indexedFeatures", numTrees=10)

# Convert indexed labels back to original labels.
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=labels)

# Chain indexers and forest in a Pipeline
pipeline = Pipeline(stages=early_stages + [rf, labelConverter])

# Train model.  This also runs the indexers.
model = pipeline.fit(trainingData)

You can now save the pipeline:

>>> model.save("/tmp/rf")
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

You can also save the RF model:

>>> rf_model = model.stages[2]
>>> print(rf_model)
RandomForestClassificationModel (uid=rfc_b368678f4122) with 10 trees
>>> rf_model.save("/tmp/rf_2")

Answer 1 (score: 1)

You can save both pipelines and models. When loading them back, you need to know which model class corresponds to each saved object. For example:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler, StringIndexer, OneHotEncoderEstimator
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator, CrossValidatorModel

df = *YOUR DATAFRAME*
categoricalColumns = ["A", "B", "C"]
stages = []

for categoricalCol in categoricalColumns:
    stringIndexer = StringIndexer(inputCol=categoricalCol, outputCol=categoricalCol + "Index")
    encoder = OneHotEncoderEstimator(inputCols=[stringIndexer.getOutputCol()], outputCols=[categoricalCol + "classVec"])
    stages += [stringIndexer, encoder]

label_stringIdx = StringIndexer(inputCol="id_imp", outputCol="label")
stages += [label_stringIdx]

assemblerInputs = [c + "classVec" for c in categoricalColumns]
assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features")
stages += [assembler]

pipeline = Pipeline(stages=stages)

pipelineModel = pipeline.fit(df)
pipelineModel.save("/path")

In the case above, I saved a fitted pipeline with several stages using pipelineModel.save("/path").

Now, if you want to use it again:

pipelineModel = PipelineModel.load("/path")
df = pipelineModel.transform(df)

You can do the same in other cases, for example:

# lr, paramGrid and evaluator are assumed to be defined elsewhere
cv = CrossValidator(estimator=lr, estimatorParamMaps=paramGrid, evaluator=evaluator, numFolds=2)

(trainingData, testData) = df.randomSplit([0.7, 0.3], seed=100)
cvModel = cv.fit(trainingData)
cvModel.save("/path")
cvM = CrossValidatorModel.load("/path")
predictions2 = cvM.transform(testData)

predictions = cvModel.transform(testData)

In short, if you want to load a saved model, you need to use the corresponding model class.