Join two Spark mllib pipelines

Time: 2017-06-15 14:27:28

Tags: python scala apache-spark apache-spark-mllib apache-spark-ml

I have two separate DataFrames, each with several different processing stages that I handle with mllib transformers in a pipeline.

I now want to join these two pipelines together, keeping the features (columns) from each DataFrame.

Scikit-learn has the FeatureUnion class for this, but I can't seem to find an mllib equivalent.

I could add a custom transformer stage at the end of one pipeline that takes the DataFrame produced by the other pipeline as an attribute and joins it in the transform method, but that seems messy.

1 Answer:

Answer 0 (score: 6)

Both Pipeline and PipelineModel are valid PipelineStages, so they can be merged into a single Pipeline. For example:

from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler

df = spark.createDataFrame([
    (1.0, 0, 1, 1, 0),
    (0.0, 1, 0, 0, 1)
], ("label", "x1", "x2", "x3", "x4"))

pipeline1 = Pipeline(stages=[
    VectorAssembler(inputCols=["x1", "x2"], outputCol="features1")
])

pipeline2 = Pipeline(stages=[
    VectorAssembler(inputCols=["x3", "x4"], outputCol="features2")
])

You can combine the Pipelines:

Pipeline(stages=[
    pipeline1, pipeline2, 
    VectorAssembler(inputCols=["features1", "features2"], outputCol="features")
]).fit(df).transform(df)
+-----+---+---+---+---+---------+---------+-----------------+
|label|x1 |x2 |x3 |x4 |features1|features2|features         |
+-----+---+---+---+---+---------+---------+-----------------+
|1.0  |0  |1  |1  |0  |[0.0,1.0]|[1.0,0.0]|[0.0,1.0,1.0,0.0]|
|0.0  |1  |0  |0  |1  |[1.0,0.0]|[0.0,1.0]|[1.0,0.0,0.0,1.0]|
+-----+---+---+---+---+---------+---------+-----------------+

or pre-fitted PipelineModels:

model1 = pipeline1.fit(df)
model2 = pipeline2.fit(df)

Pipeline(stages=[
    model1, model2, 
    VectorAssembler(inputCols=["features1", "features2"], outputCol="features")
]).fit(df).transform(df)
+-----+---+---+---+---+---------+---------+-----------------+
|label| x1| x2| x3| x4|features1|features2|         features|
+-----+---+---+---+---+---------+---------+-----------------+
|  1.0|  0|  1|  1|  0|[0.0,1.0]|[1.0,0.0]|[0.0,1.0,1.0,0.0]|
|  0.0|  1|  0|  0|  1|[1.0,0.0]|[0.0,1.0]|[1.0,0.0,0.0,1.0]|
+-----+---+---+---+---+---------+---------+-----------------+

So the approach I'd recommend is to join the data up front, then fit and transform the whole DataFrame.
