VectorAssembler in Spark is very slow, even in trivial cases

Date: 2016-09-28 12:12:52

Tags: performance scala apache-spark

I've been using Spark to do some data analysis and machine learning.
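(The timings below use a small `time` helper that isn't shown in the transcript; a minimal sketch of what such a helper might look like, printing elapsed nanoseconds in the same format as the output below and returning the block's result:)

// Assumed timing helper, not part of Spark: runs a block, prints the
// elapsed wall-clock time in nanoseconds, and returns the block's result.
def time[R](block: => R): R = {
  val t0 = System.nanoTime()
  val result = block
  println(s"Elapsed time: ${System.nanoTime() - t0}ns")
  result
}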

After reading some data in as trainDF, I construct two logically equivalent pipelines, except that one of them ends with a VectorAssembler (with only a single input column) to demonstrate the slowdown:

scala> val assembler = new VectorAssembler().setInputCols(Array("all_description_features")).setOutputCol("features")
assembler: org.apache.spark.ml.feature.VectorAssembler = vecAssembler_a76e6412bc96

scala> val idfDescription = new IDF().setInputCol("all_description_hashed").setOutputCol("all_description_features")
idfDescription: org.apache.spark.ml.feature.IDF = idf_4b504cf08d86

scala> val descriptionArray = Array(tokensDescription, removerDescription, hashingTFDescription, idfDescription, assembler, lr)
descriptionArray: Array[org.apache.spark.ml.PipelineStage with org.apache.spark.ml.util.DefaultParamsWritable{def copy(extra: org.apache.spark.ml.param.ParamMap): org.apache.spark.ml.PipelineStage with org.apache.spark.ml.util.DefaultParamsWritable{def copy(extra: org.apache.spark.ml.param.ParamMap): org.apache.spark.ml.PipelineStage with org.apache.spark.ml.util.DefaultParamsWritable{def copy(extra: org.apache.spark.ml.param.ParamMap): org.apache.spark.ml.PipelineStage with org.apache.spark.ml.util.DefaultParamsWritable}}}] = Array(regexTok_316674b9209b, stopWords_8ecdf6f09955, hashingTF_48cf3f9cc065, idf_4b504cf08d86, vecAssembler_a76e6412bc96, logreg_f0763c33b304)

scala> val pipeline = new Pipeline().setStages(descriptionArray)
pipeline: org.apache.spark.ml.Pipeline = pipeline_4e462d0ee649

scala> time {pipeline.fit(trainDF)}
16/09/28 13:04:17 WARN Executor: 1 block locks were not released by TID = 9526:
[rdd_38_0]
Elapsed time: 62370646425ns
res94: org.apache.spark.ml.PipelineModel = pipeline_4e462d0ee649

scala> val idfDescription = new IDF().setInputCol("all_description_hashed").setOutputCol("features")
idfDescription: org.apache.spark.ml.feature.IDF = idf_264569f76b23

scala> val descriptionArray = Array(tokensDescription, removerDescription, hashingTFDescription, idfDescription, lr)
descriptionArray: Array[org.apache.spark.ml.PipelineStage with org.apache.spark.ml.util.DefaultParamsWritable{def copy(extra: org.apache.spark.ml.param.ParamMap): org.apache.spark.ml.PipelineStage with org.apache.spark.ml.util.DefaultParamsWritable{def copy(extra: org.apache.spark.ml.param.ParamMap): org.apache.spark.ml.PipelineStage with org.apache.spark.ml.util.DefaultParamsWritable{def copy(extra: org.apache.spark.ml.param.ParamMap): org.apache.spark.ml.PipelineStage with org.apache.spark.ml.util.DefaultParamsWritable}}}] = Array(regexTok_316674b9209b, stopWords_8ecdf6f09955, hashingTF_48cf3f9cc065, idf_264569f76b23, logreg_f0763c33b304)

scala> val pipeline = new Pipeline().setStages(descriptionArray)
pipeline: org.apache.spark.ml.Pipeline = pipeline_758ec8aa3228

scala> time {pipeline.fit(trainDF)}
Elapsed time: 11092968167ns
res95: org.apache.spark.ml.PipelineModel = pipeline_758ec8aa3228

As you can see, pipeline.fit with the extra VectorAssembler is much slower (roughly 62s versus 11s above). This is a toy example, but the actual case I'm working with would genuinely benefit from a VectorAssembler (using one here makes no sense) and suffers a similar performance hit.
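For reference, the real pipeline would combine several columns into one feature vector, along these lines (the extra column names here are hypothetical):

// Hypothetical multi-column use, where a VectorAssembler actually earns its keep.
val assembler = new VectorAssembler()
  .setInputCols(Array("all_description_features", "all_title_features", "price"))
  .setOutputCol("features")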

Just wondering whether this is expected, or whether I'm using it incorrectly. I've also noticed that with the VectorAssembler I get the warning message about block locks not being released; could that be related?

Thanks for any help or guidance!

Update #1

Further profiling shows that the extra time is spent in the logisticRegression fit step, not in the actual assembling of the features. What's puzzling is why that step should take longer, since the data it receives is identical in both cases (I've verified this by joining the two datasets on id before passing them to fit, and checking that the two feature columns match for all ids).
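Roughly, the verification looked like this (a sketch, assuming an id column in trainDF and the two fitted pipeline models, here named modelWithAssembler and modelWithoutAssembler):

import org.apache.spark.ml.linalg.Vector
import spark.implicits._

// Transform the training data with each fitted pipeline (model names assumed).
val a = modelWithAssembler.transform(trainDF).select($"id", $"features".as("fa"))
val b = modelWithoutAssembler.transform(trainDF).select($"id", $"features".as("fb"))

// Join on id and count rows where the two feature vectors disagree.
val mismatches = a.join(b, "id").filter { row =>
  row.getAs[Vector]("fa") != row.getAs[Vector]("fb")
}.count()

println(s"Rows with differing feature vectors: $mismatches") // expect 0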

Update #2

Another thing I've noticed: if I write both datasets to disk as Parquet (one having passed through the VectorAssembler and one not), the one that passed through the VectorAssembler is 10x the size, even though they have seemingly identical schemas, row counts, and data.
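For completeness, the comparison was just writing each transformed dataset out and looking at the resulting directory sizes (DataFrame names and paths here are placeholders):

// Write each transformed dataset as Parquet and compare on-disk sizes,
// e.g. with `du -sh /tmp/features_*` afterwards.
dfWithAssembler.write.mode("overwrite").parquet("/tmp/features_with_assembler")
dfWithoutAssembler.write.mode("overwrite").parquet("/tmp/features_without_assembler")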

Update #3

OK, so I think I can see what's going on. Although the data with and without the VectorAssembler is identical, calling transform on my data with the VectorAssembler decorates the output column with a large amount of (in my case fairly useless) metadata. This inflates the on-disk size, and probably also makes the regression much slower, since it has to process all that extra data.
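You can see the attached metadata by inspecting the output schema, and strip it by re-aliasing the column with empty metadata (a sketch; assembled stands for the assembler's output DataFrame, and I haven't benchmarked this workaround end to end):

import org.apache.spark.sql.types.Metadata
import spark.implicits._

// Inspect the metadata that VectorAssembler attached to its output column.
println(assembled.schema("features").metadata.json)

// Rebuild the column with empty metadata (assumed workaround).
val noMetadata = assembled.withColumn("features", $"features".as("features", Metadata.empty))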

0 Answers