Getting the transformed DataFrame from a pyspark.ml Pipeline

Posted: 2018-04-11 15:28:29

Tags: python apache-spark pyspark spark-dataframe pipeline

I'm new to Spark ML. I'm trying to use Spark ML Pipelines to chain data transformations (think of it as an ETL process). In other words, I want to take a DataFrame as input, apply a series of transformations (each one adding a column to that DataFrame), and output the transformed DataFrame.

I've been studying the Pipeline documentation and code for Python, but I don't see how to get the transformed dataset out of the pipeline. See the following example (copied from the documentation and modified):

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import HashingTF, Tokenizer

# Get or create the SparkSession (assumed to already exist, e.g. in the pyspark shell).
spark = SparkSession.builder.getOrCreate()

# Prepare training documents from a list of (id, text, label) tuples.
training = spark.createDataFrame([
    (0, "a b c d e spark", 1.0),
    (1, "b d", 0.0),
    (2, "spark f g h", 1.0),
    (3, "hadoop mapreduce", 0.0)
], ["id", "text", "label"])

# Configure an ML pipeline, which consists of two stages: tokenizer, hashingTF.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
pipeline = Pipeline(stages=[tokenizer, hashingTF])

training.show()
pipeline.fit(training)

How do I get the transformed dataset (i.e. the dataset after the tokenizer and hashing stages have been applied) from the pipeline object?

1 Answer:

Answer 0 (score: 1):

You can't get it from the Pipeline object itself. Instead, keep the fitted model:

model = pipeline.fit(training)

and use it to transform the data:

training_transformed = model.transform(training)
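
A short sketch (my addition, not part of the original answer) of what you can do with the result: the returned DataFrame carries the columns added by each stage, and the fitted PipelineModel exposes its individual stages so a single transformer can be applied on its own. It assumes the training, model and training_transformed objects defined above.

# Inspect the transformed DataFrame: the pipeline has added the
# "words" column (from Tokenizer) and the "features" column (from HashingTF).
training_transformed.select("id", "words", "features").show(truncate=False)

# The fitted PipelineModel exposes its fitted stages in order,
# so a single stage (or a prefix of the pipeline) can be reused by itself.
fitted_tokenizer = model.stages[0]            # the Tokenizer stage
tokenized_only = fitted_tokenizer.transform(training)
tokenized_only.show()

The same model.transform call works on any DataFrame that has a "text" column, so new data arriving later in the ETL flow can be pushed through the identical chain of transformations.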